Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview

Session: AI & Hype (traditional panel)
Time: Thursday, 31/Oct/2024, 9:00am - 10:30am
Session Chair: Jean Burgess
Location: INOX Suite 2
Attendees: 50

Presentations

From Controversy to Codification: Post Lee-Luda AI Ethics and Sociotechnical Imaginaries of South Korea

Jiwon Jenn Oh1, Jane Pyo2

1University of Illinois Urbana-Champaign, United States of America; 2University of Massachusetts Amherst, United States of America

The Lee Luda controversy was a pivotal moment that inaugurated nationwide discourses surrounding artificial intelligence (AI) ethics in South Korea. A conversational chatbot designed to simulate lifelike conversations, Lee Luda quickly gained attention for its human-like interaction but soon became the center of controversy over its use of private human conversations as training data, which led to unintended disclosures of personal details, and over its responses filled with hate speech and sexually explicit content. The incident led to the suspension of the service within three weeks.

This paper explores the aftermath of the Lee Luda incident, focusing on the emerging landscape of AI ethics in South Korea. It analyzes the instructional discourses and values foregrounded across different institutions through a critical discourse analysis of two sets of ethical guidelines developed in response to the controversy. Drawing upon critical AI studies and feminist data studies, the study examines how capitalist influences and corporate interests shape the ethical framing of AI technology, highlighting the complex interplay between regulatory initiatives and corporate practices.

The incident underscores the broader struggle to define the ethical boundaries of AI technology, reflecting the tension between technological advancement and ethical responsibility. This tension shapes the nation's sociotechnical imaginary, a concept that captures the intertwined dynamics of science, technology, and societal aspirations. The paper argues that the Lee Luda case exemplifies a shift toward a more democratic and sustainable sociotechnical imaginary in South Korea, challenging conventional technological developmentalism and signaling a broader societal reflection on AI ethics and governance.



TIKTOK’S AI HYPE - CREATORS’ ROLE IN SHAPING (PUBLIC) AI IMAGINARIES

Vanessa Richter

University of Amsterdam, The Netherlands

Artificial Intelligence (AI), often hailed as a transformative force, has become an ambivalent buzzword, simultaneously promising utopian possibilities and fueling dystopian anxieties. Social media platforms have emerged as pivotal spaces where the public narrative about AI takes shape, especially through content creators, who significantly influence our collective vision of the future with AI. This paper therefore inquires into the role of creators in shaping public imaginaries of AI through their AI content, taking TikTok as its entry point for investigating how creators shape ongoing discourses around AI through short video content. To understand the role of creators within this AI discourse, a hashtag network analysis is paired with a critical discourse analysis of creators’ AI content. The preliminary results show three dominant genres of AI content: 1) output of AI tools, especially visual content, 2) listicles on AI tools for different tasks, and 3) educational and critical AI content. Considering the creator types behind the content, a large share is produced by content farms, followed by tech TikTokers, while media outlets and commentary TikTokers dominate the third genre. Overall, four types of AI imaginaries are foregrounded. AI mystification envisions AI as fast-paced and inherently life-changing. Similarly, AI futurist content frames AI as inevitable. By contrast, a strong AI pragmatism is prevalent in the ongoing tool discourse, while critical and educational content counteracts these imaginaries with a pronounced AI realism that highlights the complex and nuanced aspects of AI.



AI AS “UNSTOPPABLE” AND OTHER INEVITABILITY NARRATIVES IN TECH: ON THE ENTANGLEMENT OF INDUSTRY, IDEOLOGY, AND OUR COLLECTIVE FUTURES

Zhasmina Tacheva1, Sarah Appedu1, Mirakle Wright2

1Syracuse University, United States of America; 2University of Colorado Denver, United States of America

The world today is awash with narratives of artificial intelligence (AI) as an "inevitable," "unstoppable" force destined to "revolutionize" society. Using the concept of "entanglement" from Black and Indigenous feminist science, technology, and society studies, we provide a critical examination of the AI industry's complex intersections with sociopolitical dynamics, technological determinism, and oppression. Instead of viewing AI advancement as a predetermined path, we show how this perspective is socially constructed and has dire consequences for the environment and society alike, especially for marginalized communities in the global and local Souths.

Using discourse analysis and critical quantitative techniques, we examine the discourses surrounding AI across more than 1,200 pieces of digital content, showing how language shapes both the technology's development and our collective imaginations. By tracing the ideological roots of AI back to colonial and eugenic practices, we demonstrate the industry's deep entanglement with interlocking systems of oppression.



“A.I. IS HOLDING A MIRROR TO OUR SOCIETY”: LENSA AND THE DISCOURSE OF VISUAL GENERATIVE AI

Kate Miltner

University of Sheffield, United Kingdom

This paper analyzes the global English-language press coverage of generative AI app Lensa and finds that it echoes existing technological discourses, focusing on the app’s predatory data practices, the biased content it produced, and the user behaviors associated with it. I argue that this coverage provides evidence of discursive closure (Deetz, 1992; Leonardi & Jackson, 2003; Markham, 2021) around both the risks and the potential of visual generative AI in a manner that supports the maintenance of the status quo. I also suggest that the press coverage of Lensa, which both articulates key AI-related harms and frames those harms as intractable and insolvable, creates a discourse of inevitability (Leonardi & Jackson, 2003; Markham, 2021) that has implications for how these issues are understood by the public, and for the approaches that are taken to address them.



 