Conference Agenda

Overview and details of the sessions of this conference. Select a date or location to show only sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
Algorithms and cultural identities
Time:
Thursday, 16/Oct/2025:
9:00am - 10:30am

Session Chair: Hannah Ditchfield
Location: Room 3a - 2nd Floor

Novo IACS (Instituto de Arte e Comunicação Social), São Domingos, Niterói, State of Rio de Janeiro, 24210-200, Brazil

Works: 236, 930, 199, 974


Presentations

CULTURAL BAIT: KWAI’S COLD START ALGORITHM AND THE INSTRUMENTALIZATION OF BRAZILIAN CULTURE

Elias Cunha Bitencourt1,2, Guilherme Bispo C. Santos1,2, Nusta Oviedo1,2, Rayssa Keuri Pereira Batista1,2, Cecilio Ricardo de Carvalho Bastos1,2

1State University of Bahia, Brazil; 2Datalab Design

This study investigates how Kwai’s cold start algorithm instrumentalizes Brazilian culture as “cultural bait” to engineer user engagement and retention. Through computational analysis of Kwai’s cache-aware reinforcement learning (CARL) framework, we simulate four anonymous users in a cold start environment, collecting 4,000 posts. Using Vision Transformers (ViT), PCA, UMAP, and HDBSCAN clustering, we classify content into homogeneous, heterogeneous, and niche topics, validated via Jensen-Shannon divergence and Chi-square tests. Findings reveal that 96.8% of cold start recommendations are homogenized, dominated by stereotypical themes like football (10.32%), telenovelas (12.65%), and suggestive humor (6.88%), alongside controversial clusters: misinformation (9.49%), Latin motivational content (9.35%), rural humor (8.25%), and violence clickbait (5.91%). This reflects Kwai’s reliance on cached, infrastructurally optimized cultural modules—shaped by bandwidth constraints and the low-end device markets it targets—to prioritize computational efficiency and market scalability over personalization. We argue that Kwai’s algorithmic epistemology operationalizes cultural bait: caricatured tropes repurposed as scalable, market-ready content, reducing culture to latent variables for knowledge speculation and user acquisition in emerging markets. By foregrounding computational constraints and cultural commodification, we demonstrate how algorithmic systems like CARL transform cultural experience into infrastructurally optimized data. These findings underscore the value of analyzing algorithms not as black boxes or abstract entities but as politico-algebraic objects open to inquiry, where code encodes power asymmetries and cultural transformation. This urges media studies to bridge the gap between cultural critique and the algebraic logic underpinning algorithmic epistemologies, avoiding treating these epistemologies as universal or generalizable, even among platforms operating within the same niche.
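The statistical validation the abstract mentions (Jensen-Shannon divergence between feed distributions and a Chi-square goodness-of-fit test against a uniform topic spread) can be sketched roughly as follows with SciPy. All numbers, topic labels, and the uniform-baseline comparison here are illustrative assumptions, not the authors' actual data or pipeline:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import chisquare

# Hypothetical topic-share vectors for two simulated users' cold-start feeds
# (illustrative values only; not the study's data).
user_a = np.array([0.13, 0.10, 0.09, 0.09, 0.08, 0.51])  # telenovelas, football, ...
user_b = np.array([0.12, 0.11, 0.10, 0.08, 0.07, 0.52])

# Jensen-Shannon distance: 0 means identical feed distributions
# (SciPy returns the distance, i.e. the square root of the JS divergence).
js_dist = jensenshannon(user_a, user_b, base=2)

# Chi-square goodness-of-fit: are the observed cluster counts compatible
# with a uniform spread over topics? Counts below are hypothetical.
observed = np.array([506, 412, 330, 275, 236, 2241])
expected = np.full_like(observed, observed.sum() / len(observed), dtype=float)
chi2, p = chisquare(observed, f_exp=expected)

print(f"JS distance between users: {js_dist:.3f}")
print(f"Chi-square = {chi2:.1f}, p = {p:.2e}")
```

A small JS distance between simulated users, together with a Chi-square test rejecting uniformity, would be consistent with the homogenization the study reports: different cold-start users receiving near-identical, heavily skewed topic mixes.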



ADORKABLE AI: HOW ALGORITHMS SHAPE LIBRARIAN STEREOTYPES IN BRAZIL AND THE US

Viviane Ito1,2, Lyric Grimes1

1University of North Carolina at Chapel Hill, United States of America; 2Center for Information, Technology, and Public Life

This paper analyzes AI-generated depictions of librarians to determine their alignment with stereotypical portrayals. Previous research has highlighted gender biases in large language models (LLMs) and AI-generated images, often depicting professions like secretaries and nurses as women and medical professionals as white males. However, no studies have examined AI-generated images of librarians. This study fills that gap by exploring how these images uphold stereotypes, focusing on portrayals in American English and Brazilian Portuguese. Data was collected from DALL-E, Midjourney, and Adobe Firefly using gender-neutral prompts in both languages. Thematic analysis revealed recurring themes and patterns in the visual representations. Preliminary findings indicate that AI-generated images often depict librarians as white, slender, intellectual women, with stereotypical elements like glasses and cardigans. The study underscores the need for a critical approach to Generative AI, as training data reflects societal biases, perpetuating stereotypes. These portrayals can impact the public perception of librarians, potentially alienating users and reinforcing an outdated, predominantly white, female, and middle-class image of the profession.



#Baby Supplementary Food as Cyber Shield: Grounded Perspectives on Chinese Digital Feminism on RedNote

Meng Liang1, Xiaoyue Zhang2, Linqi Ye3

1University of Amsterdam, The Netherlands; 2Carleton University, Canada; 3University of Warwick, UK

Chinese female users on RedNote have re-appropriated the #Baby Supplementary Food (#BSF) hashtag to evade male visibility and foster female-exclusive discourse. Originally intended for baby food content, #BSF now functions as a strategic shield against the male gaze, as parenting-related posts are considered less likely to appear in male-dominated feeds. This practice reflects a broader trend in China’s digital feminism, where women manipulate algorithmic affordances to create safer online spaces.

This study investigates how #BSF facilitates a counterpublic sphere by examining 1,513 non-commercial posts. Using Python-based data mining and the platform's officially provided dataset, we filtered content to exclude genuine baby food posts and commercial promotions. We then used GSDMM topic modeling and critical discourse analysis (CDA) to explore the linguistic strategies used in #BSF-tagged posts.

Our findings reveal that #BSF serves not only as a shield but also as a site of feminist resistance. Users build solidarity through intimate discourse and redefine traditional motherhood by positioning themselves as "babies" or "pet mothers," rejecting patriarchal maternal expectations. Through satire and self-infantilization, they subvert dominant gender norms and construct an alternative female-centered space.

This study contributes to digital feminism by demonstrating how Chinese women leverage platform affordances to resist online male dominance. It also introduces a systematic methodological approach to studying hashtag-based activism on Chinese social media platforms.



THIRTEEN WAYS OF LOOKING AT AN ALGORITHM: HOW JOURNALISM FRAMES THE DISRUPTIVE POTENTIAL OF GENERATIVE AI

Fritz Kessler, Gabriel Ponniah, Chelsea Butkowski, Aram Sinnreich, Patricia Aufderheide

American University, United States of America

In this article, we aim to identify and trace some of the most prevalent frames and tropes surrounding GAI, examining how dominant media outlets (national news media and professional trade publications) framed and discussed generative AI, particularly in relation to education and media production, during the first year of GAI's widespread rollout, from November 2022 to October 2023.

We employ framing analysis for its focus on how journalists' understanding of emerging technologies shapes news coverage, which ultimately informs public opinion (D'Angelo, 2017; Tewksbury & Scheufele, 2009). Our emergent frames were developed using qualitative discourse analysis (Fairclough, 2010), recognizing that initial media coverage of an emerging social issue or technical regime serves as a proxy for broader public understanding, within key professions, of the issues and problems surrounding the widespread adoption of new technologies.

Once we developed our emergent taxonomy of common frames, we employed a novel, GAI-based technique for framing analysis. We used iterative prompt engineering on the open-source LLM DeepSeek to investigate not only the discursive frameworks but also the emotionally valent (positive vs. negative) subframes included in each news article, taking care to validate its outputs through intercoder reliability testing.