Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
P33: Misinformation 1
Time:
Thursday, 19/Oct/2023:
8:30am - 10:00am

Session Chair: Pawel Popiel
Location: Wyeth A

Sonesta Hotel

Presentations

The infrastructural power of programmatic advertising networks: analyzing disinformation industries in Brazil

Marcelo Alves Dos Santos JR1, Carlos D'Andrea2

1Pontifical Catholic University of Rio de Janeiro, Brazil; 2Federal University of Minas Gerais, Brazil

The informational disorder that sprawls across multiple nation-states, exacerbated by violent uprisings and coup attempts such as the storming of the US Capitol on January 6th, 2021 and the Brazilian coup attempt on January 8th, 2023, has shed new light on the political economy of disinformation industries. In particular, it brings to the forefront the problem of economic incentives for creating and spreading disinformation. This paper builds on critical platform and infrastructure research literature to analyse the multilateral infrastructural power of programmatic advertising networks. Our research questions are: how is power exercised by programmatic advertising infrastructures as they manage their multilateral relationships? In terms of technicities, governance, and business models, how do these infrastructures enable or reinforce disinformation disorder? This case study draws on a multi-methodological approach, combining digital methods research and critical analysis of platform documents. The empirical dataset comprises 95,269 ads collected on the website (data scraped with a Python script developed by one of the authors) during the election month. The empirical data show that MGID, a native advertising platform, placed 54% of the advertisements on Terra Brasil Notícias. Google Ads was the second-largest provider of digital ads on TBN, despite its policy's restrictions on sellers that host unreliable or harmful content on issues such as health, climate, elections, and democracy. Findings from the Brazilian case also contribute to understanding the infrastructural power of big tech in governing the monetization of publishers in the Global South.
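The abstract describes tallying which programmatic advertising networks placed ads on a publisher and reporting each network's share. A minimal sketch of that share computation, assuming scraped placement records with a hypothetical `network` field (the actual scraper and data schema are not published in the abstract):

```python
from collections import Counter

def network_shares(ads):
    """Return each ad network's share of placements as a percentage."""
    counts = Counter(ad["network"] for ad in ads)
    total = sum(counts.values())
    return {net: round(100 * n / total, 1) for net, n in counts.items()}

# Toy records standing in for scraped ad placements (field names are illustrative).
ads = (
    [{"network": "MGID"}] * 54
    + [{"network": "Google Ads"}] * 30
    + [{"network": "Other"}] * 16
)
print(network_shares(ads))  # {'MGID': 54.0, 'Google Ads': 30.0, 'Other': 16.0}
```

On the real dataset of 95,269 records, the same aggregation would yield the per-network placement shares the authors report (e.g. MGID's 54%).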



‘BATTLING’ BAD ACTORS OR ‘INOCULATING’ AGAINST FALSITY? A POLICY ANALYSIS OF THE PROBLEM REPRESENTATIONS OF MISINFORMATION IN AUSTRALIA

Nadia Jude

Queensland University of Technology, Australia

Misinformation has been described as one of the defining problems of our time (Freelon & Wells, 2020). There has been a rapid and global rise in action to address the problem, largely since the 2016 United States presidential election and the United Kingdom Brexit referendum (Kreiss, 2021). In Australia, there are fact-checking initiatives like ABC-RMIT Fact Check and a regulatory code incentivising major social media platforms to ‘combat’ the problem. Media, health, and public sector organisations also take a variety of approaches to addressing the problem of misinformation, from pre-bunking strategies to media literacy initiatives.

When observed together, these efforts indicate an increasingly stabilised and well-funded institutional environment around the problem of misinformation in Australia, elsewhere broadly defined as ‘fighting fakes’ (Bélair-Gagnon et al., 2022). This ‘anti-misinformation’ institutional landscape not only includes actors such as fact-checkers and news organisations, but also technology companies, government agencies, academic institutions, and community organisations working to address the problem, each with their own governing practices, professional spheres, institutional logics, and meso-level relationships. 

This paper seeks to critically examine different representations of the misinformation problem in Australia, across institutional actors and over time. It is guided by two research questions: 

1. How has the problem of misinformation been represented in Australia by key institutional actors (regulators, governments, companies, media, academia and community organisations) since 2016? 

2. What are the implications of certain problem representations, particularly those that have risen to prominence and asserted dominance in Australia?  



RECOVERING MISINFORMATION’S MISSING CHILDREN: APPROPRIATING REANALYSIS FOR SELF-REFLEXIVITY IN CRITICAL MIS/DISINFORMATION STUDIES

Izzi Grasso, Anna Lauren Hoffmann

University of Washington

In the present study, we confront the problems of perspective and normative grounding that attend critical theory within the context of a research project on digital information seeking, conspiracy groups, and anti-trafficking and anti-child exploitation advocacy. In doing so, we seek to highlight the necessity of self-reflexivity and self-critique for emergent projects of critical mis/disinformation studies.



Revealing coordinated image-sharing in social media: A case study of pro-Russian influence campaigns

Guangnan Zhu1, Timothy Graham1, Daniel Whelan-Shamy1, Robert Fleet2

1Digital Media Research Centre, Queensland University of Technology, Australia; 2Digital Observatory, Queensland University of Technology, Australia

Coordinated online disinformation campaigns are by nature difficult to detect. In response, communication scholars have developed a range of methods and analytical frameworks to discover and analyse disinformation campaigns. The use of social network analysis (SNA) to find and map coordinated behavioural patterns has become increasingly popular and has demonstrated effective results. However, these methods are designed for text and behavioural data and miss an important aspect of disinformation campaigns: coordinated image-sharing. This paper examines this gap by analysing a large-scale dataset of tweets, using advanced SNA to map both coordinated retweeting behaviour and coordinated image-sharing. We show that coordinated image-sharing is both more widespread than and different in structure from other forms of coordination. This is important because it highlights a major gap in research, where computational methods are not suited to detecting and analysing the scale and scope of visual disinformation on platforms like Twitter. To address this, we suggest new methods to complement existing approaches, using machine learning to detect image similarity. The paper concludes with a reflection on limitations and suggestions for next steps.
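The abstract proposes detecting coordinated image-sharing via image similarity. One common building block for this kind of detection is perceptual hashing: near-identical images produce hashes that differ in only a few bits. A minimal, dependency-free sketch of an average hash over already-downscaled grayscale pixels (the paper's actual method is not specified in the abstract; real pipelines typically use libraries such as Pillow and imagehash on full images):

```python
def average_hash(pixels):
    """Average-hash a grayscale image given as a 2D list of 0-255 values:
    each bit records whether a pixel is above the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

def near_duplicates(h1, h2, threshold=5):
    """Treat two images as near-duplicates when their hashes differ in few bits."""
    return hamming(h1, h2) <= threshold

# Two toy 4x4 "images": identical except for one pixel.
img_a = [[10, 200, 10, 200]] * 4
img_b = [[10, 200, 10, 200]] * 3 + [[10, 200, 10, 10]]
print(near_duplicates(average_hash(img_a), average_hash(img_b)))  # True
```

Pairs of accounts sharing near-duplicate images within a short time window could then be linked as edges in the coordination network, alongside the retweet-based edges the abstract describes.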



 
Conference: AoIR 2023