Conference Agenda
Session
Disinformation, Elections and Politics - Translation
Presentations
Wikipedia Edits Before Elections: Analyzing Strategic Changes in Political Representation
University of Amsterdam, The Netherlands

Wikipedia is a widely accessed and trusted source of political information, shaping public perceptions and electoral outcomes. Its open-editing model, while enabling broad participation, also raises concerns about strategic modifications to politicians’ pages, particularly before elections. This study examines how and when Wikipedia pages about U.S. legislators are edited in the lead-up to elections, comparing patterns across the English and Spanish editions. Prior research has documented cases where political staffers and affiliates edit Wikipedia pages to present candidates favorably. However, the extent, timing, and impact of such edits, especially in multilingual contexts, remain underexplored. This study analyzes nearly two decades of Wikipedia revisions, election-related article snapshots, and page-traffic data to assess whether editing patterns reflect strategic efforts to influence voter perceptions. It investigates spikes in activity, the types of edits made, and the role of political context, such as race competitiveness and constituency demographics, in shaping editorial behavior. By shedding light on how political Wikipedia pages evolve around elections, this research contributes to discussions on digital governance, transparency, and the reliability of online political information. Given Wikipedia’s role in AI training, search engine rankings, and misinformation detection, understanding these editorial dynamics is essential for maintaining the integrity of public knowledge in democratic societies.

BEYOND DISINFORMATION: ANALYZING “CHEAPFAKES” DURING LULA’S HOSPITALIZATION ON X
1FGV Comunicação Rio, Brazil; 2UFF, Brazil

This article examines the use of "cheapfakes" in Brazil, emphasizing how these low-tech manipulations of images and videos extend beyond misinformation to ridicule political figures and discredit opponents.
Unlike deepfakes, which involve AI-generated forgeries, cheapfakes rely on simpler edits such as speed changes, cropping, and recontextualization. The hypothesis is that, while manipulated visuals have historically been used to spread disinformation, they are also increasingly employed to ridicule political figures, amplify engagement, or discredit real images. This research analyzes cheapfakes and deepfakes on X (formerly Twitter) during President Lula’s hospitalization for surgery, based on an initial collection of 19,700 posts published on the topic between December 10 and 14, 2024. The event triggered widespread manipulation, including fake images of the president with a head bandage and altered videos portraying him in absurd situations. Real footage was also dismissed as "fake" to sustain conspiracy theories. The study examines three dimensions: (a) cheapfakes as memes, mocking rather than deceiving; (b) cheapfakes as disinformation, misleading the public; and (c) cheapfakes as conspiracy tools, reinforcing claims such as Lula having a body double. This research highlights how cheapfakes shape digital political engagement.

Political misinformation across 260 countries and 5 social media platforms
1University of Amsterdam, The Netherlands; 2Vrije Universiteit Amsterdam, The Netherlands

Misinformation has emerged as one of the most pressing challenges of our time. Despite growing alarm, the nature and origins of misinformation remain the subject of intense academic debate. While early research attributed its rise to a general decline in the quality of information resulting from the emergence of digital media, more recent work instead points to the central role played by political elites, who strategically spread and employ misinformation for political gain. Yet, despite increasing recognition of the entanglement between misinformation and party politics, empirical research on when and why political elites engage in misinformation campaigns remains scarce.
This paper takes a comparative politics approach to misinformation by analyzing an unprecedented global database of political party communications across 224 countries, 3,600 parties, and five major social media platforms: Twitter/X, TikTok, YouTube, Instagram, and Facebook. The study employs large language models for misinformation detection and integrates the findings with established country- and party-level datasets such as V-Dem, ParlGov, and the Chapel Hill Expert Survey. Using multilevel modeling, the paper examines how party characteristics, country-level factors, and platform-specific affordances influence the spread of misinformation. The study addresses four key research questions: (1) How do party characteristics shape misinformation dissemination? (2) How do country-level factors influence misinformation prevalence? (3) How do platform architectures affect misinformation visibility? (4) How do interactions between party, country, and platform dynamics shape broader misinformation trends? The findings provide new insights into the political drivers of misinformation and the role of digital platforms in the post-truth era.

THE INTERACTION BETWEEN PUBLIC AND FACT-CHECKING CONTENT: THE PERCEPTION OF LUPA’S COMMENTERS ABOUT POLITICAL DEBATES DURING THE 2022 PRESIDENTIAL ELECTION IN BRAZIL
Universidade Federal do Paraná, Brazil

This study examines how citizens engage with fact-checked content provided by fact-checking agencies. Specifically, it focuses on fact-checks conducted by the Lupa agency regarding the debates held during the 2022 Brazilian presidential election. Lupa is one of the first fact-checking agencies in Brazil. Founded in 2015, it is a member of the International Fact-Checking Network (IFCN), a global organization affiliated with the Poynter Institute in the United States. Comments from three fact-checking posts were collected from the agency’s official Instagram profile.
In total, 1,222 comments from posts analyzing the debates broadcast by SBT, Band, and Globo were examined. A textual analysis was conducted using the Iramuteq software, employing similarity analysis, Reinert classification, and word-cloud generation. The findings indicate that discussions in the comment sections draw on candidates’ statements and their fact-checks to debate issues that extend beyond the published content. Additionally, users criticize Lupa’s fact-checking process, highlighting areas for improvement in the verification and dissemination of information. Some comments also reference the labels the agency uses to classify content. As the analyses reveal, commenters frequently question and challenge the labels used to determine the truthfulness of statements. In the next phase of the research, a categorical content analysis will be conducted to gain a more detailed understanding of user interactions.