A European project is calling on journalists, fact-checkers, researchers, and policymakers to test its new AI platform designed to strengthen trust in information.
The AI4TRUST project, funded by the European Union under the Horizon “AI to fight disinformation” programme, is launching the pilot phase of its AI4TRUST Platform MVP, a prototype integrating artificial intelligence with human expertise to better detect, understand, and respond to disinformation online.
The project’s consortium, which includes media organisations, research institutes, and technology partners from across Europe, is now inviting professionals to participate in testing the platform and share their feedback. The goal: to ensure that AI4TRUST’s tools are ethical, transparent, and truly useful to those working on the front lines of information integrity.
“This pilot is not about showcasing finished technology — it’s about collaboration,” said the AI4TRUST coordination team. “We are asking professionals to help us refine the tools so that they genuinely serve the needs of journalists, fact-checkers, and policymakers.”
An AI platform built for — and with — professionals
The AI4TRUST Platform brings together 15 experimental tools that combine AI-based monitoring and content analysis with human oversight. These include systems for identifying potentially misleading claims, detecting deepfakes, analysing online narratives, and tracking disinformation domains.
Participants in the pilot will test different sets of tools depending on their professional background:
- Journalists will explore content analysis and monitoring features designed to support verification workflows.
- Fact-checkers will test the full range of tools, providing in-depth feedback on usability and relevance.
- Policymakers and researchers will focus on tools that analyse disinformation patterns and signals.
Testing is expected to take between 1 and 3.5 hours, depending on the user profile, followed by a feedback questionnaire and optional focus group participation.
A collaborative European effort
AI4TRUST’s approach is rooted in collaboration between human expertise and artificial intelligence. By collecting structured feedback from real users, the project aims to improve the platform’s design, usability, and ethical safeguards before its public release.
The pilot is part of a broader European effort to make AI systems trustworthy, transparent, and aligned with democratic values, contributing to the EU’s strategy to strengthen resilience against disinformation.
How to participate
Professionals interested in joining the pilot can register through a short form on the AI4TRUST website.
After registration, participants will receive instructions to access the testing environment and provide their feedback.
“This is a chance to help shape how Europe uses AI to protect the integrity of information — not just as a user, but as a co-creator,” the project coordination team said.
About AI4TRUST
AI4TRUST is a project funded by the European Union’s Horizon Europe research and innovation programme (Grant Agreement No. 101070190). It brings together a consortium of European media, research, and technology partners to develop a trust-based environment that integrates automated monitoring of social and news media with advanced AI-based tools, enhancing the work of human fact-checkers and journalists.