
AI-CODE Project Launches User Testing for Next-Generation Tools to Strengthen Trust and Combat Disinformation
Brussels, October 2025
The AI-CODE Project is taking a major step forward in tackling disinformation and promoting trustworthy journalism. From mid-October to early November, AI-CODE will host a series of hands-on user testing and co-creation sessions designed to refine a new suite of AI-based tools that empower journalists, fact-checkers, and researchers to verify content, assess credibility, and foster transparency in digital media.
The initiative aims to develop innovative AI solutions that enhance media freedom and strengthen resilience against the challenges posed by generative AI. Participants in the sessions will explore and help shape a range of cutting-edge tools, including:
Trustability and Credibility Assessment Service: A prototype offering clear overviews of credibility signals for posts on fediverse platforms such as Mastodon and Bluesky. This tool supports media professionals in quickly and effectively assessing the trustworthiness of content.
Media Asset Annotation and Management (MAAM): An interactive platform for detecting visual disinformation through AI-driven image analysis, similarity search, geolocation, and annotation. It enables users to organise, manage, and verify multimedia content.
Transparency Services for AI Model Cards: A tool that documents AI models’ purpose, performance, risks, and limitations, promoting responsible AI use through clear and accessible transparency measures.
PromptED: A hands-on simulator that helps journalists understand how large language models (LLMs) work and how to write effective, responsible prompts aligned with journalistic standards.
Personal Companion for Countering Disinformation: A new system designed to help users identify false claims, logical fallacies, and hate speech, supporting accurate reporting and critical evaluation of content.
Disinformation Detection Service: Comprising the Fediverse Explorer and Mastodon Instance Inspector, this toolset provides structured insights into social media ecosystems, assessing safety, diversity, and community engagement through transparent data analysis.
Through these co-creation workshops, participants will test the tools' usability, clarity, and professional relevance, providing direct feedback to guide further development. The sessions form part of AI-CODE's mission to strengthen trust, transparency, and resilience in the digital information environment.
“In an era of rapid technological change, it is vital that journalists and citizens alike can rely on tools that make transparency and truth more accessible,” said a representative of the AI-CODE Project. “These sessions ensure our solutions are built not just for the media, but with the media.”
The AI-CODE Project continues to position itself at the forefront of AI innovation for trustworthy information ecosystems, supporting a free, informed, and democratic media landscape.
For more information and details on how to take part, visit the AI-CODE Project website.
