Conference Agenda

Overview and details of the sessions of this conference.

Session Overview

Session: Governing Mis/Disinformation (traditional panel)
Time: Thursday, 31 October 2024, 9:00am - 10:30am
Session Chair: Monika Fratczak
Location: Octagon Council Chamber
80 attendees

Presentations

Governing from Black to White: Disinformation in Nuclear Emergencies

Seungtae Han, Brenden Kuerbis, Amulya Panakam

Georgia Institute of Technology, United States of America

Our research examines the impact of disinformation in emergencies (DiE), specifically within the context of nuclear emergency responses, and the political-economy dynamics behind it. Key questions guiding our investigation include which assessment techniques are suitable for detecting and evaluating DiE, and which governance responses are effective.

Our research draws on established theories of propaganda from communication research, emphasizing propaganda analysis as a tool for understanding DiE and developing institutional responses. We examine two nuclear emergency cases, the Fukushima Daiichi Nuclear Power Plant (FNPP) disaster and the occupation of the Zaporizhzhia Nuclear Power Plant (ZNPP) in Ukraine, to uncover the disruptive impact of disinformation on emergency communication.

Through the categorization of propaganda into black, gray, and white types, we analyze the tactics employed in DiE, shedding light on strategic intent, transparency, and veracity. The case studies reveal instances of false narratives propagated by governments and media channels, influencing public perception and exacerbating tensions. While our study has not observed significant AI-enabled DiE, we highlight AI's potential in DiE identification. State-led counter-disinformation initiatives, meanwhile, face challenges including jurisdictional issues and calls to protect free expression.
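
The black/gray/white typology follows the standard propaganda-analysis distinction: white propaganda comes from a correctly identified source and is broadly accurate, gray propaganda has an ambiguous source or unverified content, and black propaganda misattributes its source and spreads falsehoods. As a minimal illustrative sketch (not the authors' coding instrument), those criteria could be operationalized along two axes:

    # Illustrative sketch only: a toy coding scheme for the black/gray/white
    # propaganda typology. The two axes -- source attribution and content
    # veracity -- follow the standard textbook definitions, not the authors'
    # actual analytical instrument.
    from dataclasses import dataclass

    @dataclass
    class Message:
        source_correctly_attributed: bool | None  # None = ambiguous/unknown
        content_accurate: bool | None             # None = unverified

    def classify(msg: Message) -> str:
        """Map a coded message onto the black/gray/white typology."""
        if msg.source_correctly_attributed is False or msg.content_accurate is False:
            return "black"  # misattributed source or false content
        if msg.source_correctly_attributed is None or msg.content_accurate is None:
            return "gray"   # ambiguous source or unverified claims
        return "white"      # identified source, broadly accurate content

    # Example: an unattributed rumor about radiation levels codes as gray.
    print(classify(Message(source_correctly_attributed=None, content_accurate=None)))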

We posit the necessity of non-state-led networked governance structures, drawing parallels with successful cybersecurity governance models. These frameworks, informed by interdisciplinary insights and operating independently from states, are well placed to address the multifaceted challenges posed by DiE. Addressing participatory, structural, and operational impediments within existing content moderation governance mechanisms is pivotal to realizing effective strategies.



Governing and defining misinformation: A longitudinal study of social media platforms' policies

Christian Katzenbach¹, Daria Dergacheva¹, Vasilisa Kuznetsova¹, Adrian Kopps²

¹University of Bremen, Germany; ²Alexander von Humboldt Institute for Internet and Society, Berlin, Germany

This study explores how the governance and conceptualization of misinformation by five major social media platforms (YouTube, Facebook, Instagram, X/Twitter, TikTok) have changed from their inception until the end of 2023. Applying a longitudinal mixed-method approach, the paper traces the inclusion of different types of misinformation into the platforms' policies and examines periods of convergence and divergence in their handling of misinformation.

The study identifies an early topical focus on spam and impersonation, with a notable shift towards political misinformation in the 2010s. It also highlights significant inter-platform differences in addressing misinformation, illustrating the fluid nature of definitions of misinformation as well as the influence of external events (elections, conflicts, COVID-19) and regulatory, societal, and technological developments on policy changes.
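
As a rough illustration of the kind of longitudinal coding such a study implies, the sketch below tallies which misinformation categories appear in dated snapshots of a platform's policy text. The snapshot strings and keyword lists are hypothetical placeholders, not the study's actual corpus or codebook:

    # Illustrative sketch only: tally misinformation categories mentioned in
    # dated policy snapshots. Snapshots and keywords are placeholders.
    CATEGORIES = {
        "spam": ["spam"],
        "impersonation": ["impersonation", "impersonate"],
        "political": ["election", "voting", "civic"],
        "health": ["vaccine", "covid", "medical misinformation"],
    }

    snapshots = {  # date -> archived policy text (placeholder strings)
        "2008-06-01": "We remove spam and accounts that impersonate others.",
        "2020-11-01": "We remove spam, impersonation, and misleading claims "
                      "about voting or vaccines.",
    }

    for date, text in sorted(snapshots.items()):
        lowered = text.lower()
        present = [cat for cat, terms in CATEGORIES.items()
                   if any(term in lowered for term in terms)]
        print(date, "->", ", ".join(present) or "none")

Run over real archived policy pages, the same loop would surface the trajectory the study describes: early snapshots coding only as spam and impersonation, later ones adding political and health misinformation.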



The dark side of LLM-powered chatbots: misinformation, biases, and content moderation challenges in political information retrieval

Joanne Kuai¹, Cornelia Brantner¹, Michael Karlsson¹, Elizabeth Van Couvering¹, Salvatore Romano²

¹Karlstad University, Sweden; ²Universitat Oberta de Catalunya, Spain

This study investigates the impact of Large Language Model (LLM)-based chatbots, specifically in the context of political information retrieval, using the 2024 Taiwan presidential election as a case study. With the rapid integration of LLMs into search engines such as Google and Microsoft Bing, concerns about information quality, algorithmic gatekeeping, biases, and content moderation have emerged. This research aims to (1) assess the alignment of AI chatbot responses with factual political information, (2) examine the adherence of chatbots to algorithmic norms and impartiality ideals, (3) investigate the factuality and transparency of chatbot-sourced synopses, and (4) explore the universality of chatbot gatekeeping across different languages within the same geopolitical context.

Adopting a case study methodology and a prompting method, the study analyzes responses from Microsoft's LLM-powered search engine chatbot, Copilot, in five languages (English, Traditional Chinese, Simplified Chinese, German, Swedish). The findings reveal significant discrepancies in content accuracy, source citation, and response behavior across languages. Notably, Copilot demonstrated a higher rate of factual errors in Traditional Chinese while performing better in Simplified Chinese. The study also highlights problematic referencing behaviors and a tendency to prioritize certain types of sources, such as Wikipedia, over legitimate news outlets.
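
The prompting method can be pictured as a small audit loop: issue the same factual question in each language, log the response and any cited sources, and compare against a hand-coded reference answer. The sketch below is a schematic under stated assumptions, not the study's pipeline; Copilot exposes no public API for this, so query_chatbot is a hypothetical stub to be replaced by whatever collection method is available:

    # Illustrative sketch only: a multilingual audit loop in the spirit of the
    # study's prompting method. query_chatbot is a hypothetical placeholder.
    import re

    PROMPTS = {
        "en": "Who won the 2024 Taiwan presidential election?",
        "zh-TW": "2024年台灣總統大選由誰勝出?",
        "zh-CN": "2024年台湾总统大选由谁胜出?",
        "de": "Wer hat die Präsidentschaftswahl 2024 in Taiwan gewonnen?",
        "sv": "Vem vann presidentvalet i Taiwan 2024?",
    }
    # Hand-coded reference answer; a real audit would need per-language
    # reference strings (e.g., the Chinese rendering of the winner's name).
    GROUND_TRUTH = "Lai Ching-te"

    def query_chatbot(prompt: str) -> str:
        """Placeholder: return the chatbot's answer text, citations inline."""
        raise NotImplementedError("plug in the actual collection method here")

    def audit(response: str) -> dict:
        return {
            "factual": GROUND_TRUTH.lower() in response.lower(),
            "citations": re.findall(r"https?://\S+", response),  # crude URL pull
        }

    for lang, prompt in PROMPTS.items():
        try:
            print(lang, audit(query_chatbot(prompt)))
        except NotImplementedError:
            print(lang, "collection method not wired up")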

These results underscore the need for enhanced transparency, thoughtful design, and vigilant content moderation in AI technologies, especially during politically sensitive events. Addressing these issues is crucial for ensuring high-quality information delivery and maintaining algorithmic accountability in the evolving landscape of AI-driven communication platforms.



 