Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
AI Challenges
Time:
Friday, 17/Oct/2025:
2:00pm - 3:30pm

Session Chair: Jullena Santos de Alencar Normando
Location: Room 7a - Groundfloor

Novo IACS (Instituto de Arte e Comunicação Social) São Domingos, Niterói - State of Rio de Janeiro, 24210-200, Brazil

Presentations

DESPICABLE THEREFORE DEMONETIZED: HOW ANTI-MAINSTREAM ICONS SOLICIT SUPPORT

Elisabetta Zurovac, Giovanni Boccia Artieri, Stefano Brilli

Università degli Studi di Urbino "Carlo Bo", Italy

This study investigates the rise of self-fashioned “anti-mainstream public figures” within Italy’s Telegramsphere, exploring their pivotal position between fringe platforms, mainstream social media, and legacy media. Part of a broader research project on narrative influences destabilizing Italy’s media ecosystem, the paper centers on how these personalities craft frames of persecution to establish themselves as credible outsiders defying mainstream norms. The literature suggests that fringe platforms like Telegram, prized for privacy and security, serve as incubators where these figures cultivate tight-knit communities and amplify disinformation, including conspiracy narratives and alt-right coordination. Using an ethnographic approach paired with digital methods, the study mapped 570 Telegram channels, starting from 24 blacklisted seeds expanded via Telegram’s Similar Channels API. Early findings highlight how these personalities reframe mainstream condemnation and demonetization as badges of authenticity, wielding populist anti-elitist rhetoric to bolster their legitimacy. They transform monetization efforts – donations, premium content – into acts of resistance, fostering solidarity in emotional echo chambers (Eslen-Ziya, 2019). Far from being sidelined, these figures leverage their outsider status to maintain significant visibility, influencing discourse beyond fringe spaces. By focusing on their strategic self-presentation, this research illuminates the interplay of morality, visibility, and monetization in digital public spheres, revealing how these personalities blur the lines between fringe and mainstream while reshaping Italy’s media dynamics.
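The seed-expansion step described in the abstract can be pictured as a breadth-first snowball crawl over channel recommendations. This is a minimal sketch, not the authors' pipeline: the `get_similar` callable stands in for a (hypothetical) wrapper around Telegram's Similar Channels endpoint, and the toy graph below is invented for illustration.

```python
from collections import deque

def snowball_channels(seeds, get_similar, max_channels=570):
    """Breadth-first expansion of a seed list of channels.

    `get_similar(channel)` is a stand-in for querying Telegram's
    similar-channels recommendation for one channel.
    """
    seen = set(seeds)
    queue = deque(seeds)
    while queue and len(seen) < max_channels:
        channel = queue.popleft()
        for neighbor in get_similar(channel):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
                if len(seen) >= max_channels:
                    break
    return seen

# Toy recommendation graph standing in for API responses.
graph = {"seed_a": ["ch1", "ch2"], "ch1": ["ch3"], "ch2": [], "ch3": []}
mapped = snowball_channels(["seed_a"], lambda c: graph.get(c, []))
```

The `max_channels` cap mirrors the study's 570-channel corpus: the crawl stops once the target corpus size is reached rather than exhausting the graph.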



THE WORLD WE SEE THROUGH AI’S EYES: U.S. CULTURAL DOMINANCE IN TEXT-TO-IMAGE GENERATION

Aleksandra Urman1, Joachim Baumann1,4, Elsa Lichtenegger1, Azza Bouleimen1, Robin Forsberg1,2, Corinna Hertweck1,4, Desheng Hu1, Stefania Ionescu3, Kshitijaa Jaglan1, Salima Jaoua1, Nicolo Pagan1, Aniko Hannak1

1University of Zurich, Switzerland; 2University of Helsinki, Finland; 3ETH Zurich, Switzerland; 4ZHAW, Switzerland

This study examines cultural imperialism in text-to-image (T2I) generative AI models by evaluating how these systems represent diverse cultural contexts. We analyze whether T2I models like DALL-E 3 reproduce biases by overrepresenting U.S. cultural norms while underrepresenting or distorting non-U.S. perspectives. Our methodology employs 280 prompts across 14 varied domains of life, translated into 15 languages corresponding to 30 national/cultural contexts. To ensure cultural sensitivity, researchers from 12 different national backgrounds (including 7 from the Global South or East) collaboratively developed these prompts. We introduced country-specific references to test cultural adaptability, comparing generic prompts (e.g., "a living room") with location-specific ones (e.g., "a living room in Italy").

Our preliminary results using CLIP embeddings and cosine similarity measurements reveal that DALL-E 3 consistently defaults to U.S.-centric imagery when processing country-neutral prompts, regardless of the prompt language. Images generated from the non-location-specific prompts across all languages demonstrate stronger similarity to explicitly U.S.-referenced images than to images referencing any other national context. This finding supports the cultural imperialism hypothesis, indicating T2I models systematically encode and reproduce U.S.-American hegemonic influence in digital cultural representations. Our ongoing work includes human annotation to assess cultural accuracy and stereotyping across different contexts, with the hypothesis that representations of non-Western contexts will demonstrate lower cultural accuracy and higher stereotyping compared to Western contexts.
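The similarity measurement described here can be illustrated with plain cosine similarity over embedding vectors. A minimal sketch under stated assumptions: the 3-d vectors and country labels below are invented stand-ins for real CLIP image embeddings (which are typically 512- or 768-dimensional), and the comparison simply asks which country-referenced embedding a country-neutral image sits closest to.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-ins for CLIP embeddings of generated images.
neutral = [0.9, 0.1, 0.0]                     # "a living room" (no country)
by_country = {
    "US": [1.0, 0.0, 0.0],                    # "a living room in the US"
    "IT": [0.0, 1.0, 0.0],                    # "a living room in Italy"
}

# Which explicitly country-referenced image is the neutral one closest to?
closest = max(by_country, key=lambda c: cosine(neutral, by_country[c]))
```

In the study's design, a consistent "closest = US" outcome across languages for country-neutral prompts is what supports the cultural imperialism hypothesis.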



What is Labor in an Age of Generative AI: Reading Privacy and Copyright Lawsuits Against the Grain

Bianca Zamora Perez

University of Pennsylvania, United States of America

U.S. civil society is legally pushing back against the subsumption of data en masse for generative AI (genAI). This paper explores two avenues of legal redress: (1) copyright law and (2) consumer privacy. In 2023, visual artists were among the first to file copyright lawsuits against genAI companies, like StabilityAI, for copyright infringement in the form of image-generator outputs. Later, prominent comedian Sarah Silverman filed a copyright lawsuit against OpenAI and Meta, claiming that the companies trained their genAI products on her book without her permission or compensation, while internet users in California were suing OpenAI for training their models on stolen private information, including photographs. While copyrighted works are more clearly understood as the products of labor, I integrate consumer privacy to understand how the cultural fruits that Terranova (2000) argues are the products of free labor have been sidelined to create data hierarchies. While current literature has framed genAI around the “copyright dilemma” (Kuai, 2024), underlying all three cases is another critical question ripe for theoretical intervention: whose data is considered labor, and how is that labor compensated in an age of genAI? By integrating the copyright lawsuits presented by visual artists and book authors with a class action lawsuit concerning consumer privacy, this paper blends surveillance, copyright law, and labor to understand what labor is in an age of genAI. I argue these lawsuits strategically dissociate “data” from the user who created, posted, or circulated it.



TRUSTING CHATGPT: HOW MINOR TWEAKS IN THE PROMPTS LEAD TO MAJOR DIFFERENCES IN SENTIMENT CLASSIFICATION

Jaime Cuellar, Oscar Moreno-Martínez

Pontificia Universidad Javeriana, Colombia

A central question in the social sciences today is how reliable complex predictive models like ChatGPT are. This study tests whether subtle changes in prompt structure affect sentiment analysis results from the GPT-4o mini model. Using 100,000 Spanish comments on four Latin American presidents, the model classified the comments as positive, negative, or neutral in 10 trials with slight variations in the prompts. The analysis revealed that even minor changes in prompt wording, syntax, or structure significantly impacted the classifications. In some cases, the model gave inconsistent responses or replied in languages other than Spanish. Statistical tests confirmed significant differences in most pairwise prompt comparisons, except when the prompt structures were similar. These findings challenge the reliability of Large Language Models for classification tasks, revealing their sensitivity to prompt variations. The study also showed that unstructured prompts lead to more hallucinations. Trust in these models depends not just on their technical performance but also on the social and institutional contexts guiding their use.
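The cross-prompt consistency check at the heart of this design can be sketched as a per-comment agreement rate between classification runs under different prompt variants. This is a minimal illustration, not the authors' analysis: the label lists below are invented stand-ins for model outputs, and the hypothetical `label_distribution` helper just tabulates class shares within one run.

```python
from collections import Counter

LABELS = ("positive", "negative", "neutral")

def label_distribution(labels):
    """Share of each sentiment class within one classification run."""
    n = len(labels)
    counts = Counter(labels)
    return {lab: counts[lab] / n for lab in LABELS}

# Hypothetical classifications of the same four comments
# under two slightly different prompt variants.
run_a = ["positive", "negative", "neutral", "negative"]
run_b = ["positive", "negative", "negative", "negative"]

# Fraction of comments that received the same label in both runs.
agreement = sum(a == b for a, b in zip(run_a, run_b)) / len(run_a)
```

In practice one would compute this pairwise over all 10 trials (and add a statistical test such as a chi-squared comparison of the label distributions); a low agreement rate between near-identical prompts is exactly the instability the abstract reports.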