Abstract
Trust is fundamental to archives. Users expect archives to maintain the integrity of records over time and to provide transparent information about the authenticity and origin of archival materials. The value and legitimacy of archives are greatly diminished if integrity and authenticity are not assured.
Deceptive generative AI presents a new threat to the trust that archives both generate and rely on. As generative AI is increasingly integrated throughout the digital content lifecycle, archival collections may become accidentally or maliciously polluted with unlabeled manipulated content. Archives may find themselves unwittingly misrepresenting synthetic content as real, gradually eroding the reliability of their collections as a whole. In a worst-case scenario, archives’ hard-earned trust could even be weaponized by bad actors in disinformation campaigns to rewrite historical narratives.
WITNESS, a global nonprofit organization that helps people use video and technology to protect and defend human rights, has for years conducted foundational research and advocacy on synthetic media. Since 2018, WITNESS has led a ‘Prepare, Don’t Panic’ approach to synthetic media, deepfakes, and multimodal generative AI. In consultation with human rights defenders, journalists, and technologists on four continents, we have been preparing for the impact of AI on our ability to discern the truth, identifying the most pressing concerns and recommending what we must do now. As part of this work, we have advocated for innovative and effective AI detection tools to foster a trustworthy and resilient information ecosystem.
Alongside disclosure mechanisms, AI detection tools are key to tackling AI-generated deception. AI detection comprises methods for identifying unnatural patterns that differentiate synthetic media from non-synthetic media, in order to estimate the likelihood that a piece of content was AI-generated or AI-modified. Post-hoc detection plays a critical role in real-time crisis mitigation, protecting information credibility, exposing manipulation, advancing media literacy, and restoring trust in authentic media. Such safeguarding mechanisms contribute to secure and inclusive digital information ecosystems that can respond effectively to emerging threats to democracy and human rights.
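As a concrete illustration of what such tooling looks like in practice, the minimal sketch below shows how a classifier-style detector is typically invoked in code, here via the Hugging Face transformers pipeline API. This is a hedged example for this proposal, not a workshop deliverable or a WITNESS-endorsed tool; the model identifier is a hypothetical placeholder, and the score returned is a model confidence rather than a verdict.

```python
# Minimal sketch: scoring an image with a classifier-style AI detector
# using the Hugging Face `transformers` pipeline API.
from transformers import pipeline

# "example-org/synthetic-image-detector" is a hypothetical placeholder
# model id, used here only for illustration; substitute a vetted detector.
detector = pipeline(
    "image-classification",
    model="example-org/synthetic-image-detector",
)

# The pipeline accepts a local path or URL and returns a list of
# {"label": ..., "score": ...} dicts, one per candidate label.
results = detector("suspect_image.jpg")

for r in results:
    # Scores are model confidences, not ground truth: treat them as one
    # signal among many, alongside provenance checks and human review.
    print(f'{r["label"]}: {r["score"]:.2%}')
```

In practice such scores vary sharply with content type, compression, and context, which is why responsible interpretation, rather than raw tool output, is the focus of the training described below.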
One of our strategic goals is to expand access to AI detection training, and to equip activists, journalists, fact-checkers, and other human rights defenders with essential knowledge to use AI detection tools effectively and responsibly, particularly during high-stakes moments.
To this end, we have developed a basic one-hour workshop designed to be adapted and delivered by our team in localized contexts across all of the regions where we work. The workshop will include real-life examples analyzed by our Deepfake Rapid Response Force to illustrate the effectiveness and challenges of AI detection. For Fantastic Futures, we propose to present this workshop not only to share our knowledge about the global landscape of deceptive AI and existing technical solutions, but also to seek insights and feedback from the AI4LAM community on the workshop’s content and approach. We hope this engagement can be part of an ongoing dialogue between the LAM and human rights technology communities to devise practical and accessible ways to fortify societies against the harms of deceptive generative AI.
Resources Required
Nkem Agunwa and Joojo Cobbinah, AI, Disinformation and the Battle for Truth: How Ghana’s 2024 elections exposed the new age of political deception (2025), WITNESS Blog, https://blog.witness.org/2025/03/disinformation-ghana-2024-election/
Sam Gregory, Pre-Empting a Crisis: Deepfake Detection Skills + Global Access to Media Forensics Tools (2021), WITNESS Blog, https://blog.witness.org/2021/07/deepfake-detection-skills-tools-access/
Things to Know Before Using AI Detection Tools (2025), WITNESS Tipsheet, https://library.witness.org/product/things-to-know-before-using-ai-detection-tools/
Knowledge in Advance
No specialized knowledge is required.
Planned Outcomes
Participants will gain a deeper understanding of real-world uses of deceptive AI and of the limitations of AI detection tools, in particular how these tools perform across different global contexts and content types. The session will be an opportunity to examine the global landscape of deceptive AI use and existing technical solutions, and to identify critical gaps in the accessibility and usability of detection tools. Attendees will explore pathways toward more effective, accessible, and sustainable AI detection interventions.
About the Instructors
Yvonne (she/her) is an audiovisual archivist with over 15 years of experience working at the intersection of human rights, video and technology, and archives. As Senior Program Manager of Archives at WITNESS, she collaborates with regional teams to support partners in preserving human rights video, and develops accessible guidance and training materials on archiving and preservation. Yvonne holds an M.A. in Moving Image Archiving and Preservation from NYU. In 2024, she received the Alan Stark Award from the Association of Moving Image Archivists in recognition of her contributions to the work of moving image archives and AMIA.
Zuzanna (she/her) is a human rights researcher and advocate interested in the intersection of human rights, AI, and emerging technologies. At WITNESS, she supports work on synthetic media detection as a Program Consultant. Prior to joining WITNESS, she worked on the AI and Human Rights project based at QMUL, where she focused on the impact of emerging policing technologies on human rights and on the long-term societal dimensions of AI-related harm. She also has experience in open-source human rights investigations as a former member of Amnesty International’s Digital Verification Corps.