ON JOKES AND BOUNDARIES: HOW MACHINE LEARNING PRACTITIONERS NAVIGATE HYPED EXPECTATIONS THROUGH MEMES
Dominic Lammar1, Oksana Dorofeeva2
1Technical University of Munich, Germany. School of Social Sciences and Technology. Department of Science, Technology and Society; 2Aarhus University, Denmark. Department of Political Science. Danish Centre for Studies in Research and Research Policy.
Public discourses on Artificial Intelligence (AI) often feature exaggerated expectations that diverge from the everyday experiences of professionals in the AI field, such as machine learning (ML) practitioners. This paper presents a work-in-progress study of how this gap is addressed in ML online culture, specifically in the form of memes. To investigate this, we draw on 192 memes posted in the machine learning section of ProgrammerHumor.io, a website collecting tech memes. Our analysis of this material is informed by two theoretical approaches - science and technology studies and (e)valuation studies - allowing us to discuss the boundary work within the AI/ML community that takes place in memes. For instance, ML memes often require specific knowledge and awareness of jargon to be in on the joke, and memes, as forms of humour, are a way of constructing communities and (re)creating hierarchies. We examine these dynamics within ML memes, focusing on the conflicting valuations of AI/ML technologies. First, we look at how the memes differentiate between professional and scientific categories under the umbrella of ‘AI’. Second, we are interested in how the general, non-expert audience and their expectations of technology are depicted in the memes - often in contrast with the insider, professional perception. In doing so, this study contributes both to scholarship on technology hype and to the field of meme studies by considering memes as sites of value contestation. Moreover, by examining memes related to tech communities, we consider the expertise aspect of memetic humour.
AI ABOLITION AS DECOLONIAL RUPTURE IN AI EMPIRE: RADICAL CYBERPRACTICES FROM BELOW
Zhasmina Tacheva1, Sarah Appedu1, Jeongbae Choi1, Mirakle Wright2, Yigang Qin1
1Syracuse University, United States of America; 2University of Colorado Denver, United States of America
Existing research has long established that AI is not just a collection of technical tools but an expansive system of governance: what scholars refer to as AI empire, deeply embedded in racial capitalism, carceral logics, colonial control, and heteropatriarchy (Crawford, 2021; Tacheva & Ramasubramanian, 2023). However, much of the critical scholarship in AI studies tends to focus on AI’s most visible harms, such as mass surveillance, biased decision-making, and AI’s role in warfare. This paper argues that AI empire’s violence is more insidious and pervasive than such accounts suggest, extending beyond these explicit harms to algorithmic systems that actively shape docile populations and reinforce existing hierarchies of power (Benjamin, 2019). In response, this work positions AI abolition as a necessary and decisive rupture that rejects predominantly reformist interventions, which merely tweak AI’s carceral mechanisms without challenging the underlying structures of domination. Drawing on the decolonial queer feminist scholarship of early cybercultural critics such as Chela Sandoval, this paper examines historical counter-technological practices, including Indigenous computing, socialist cybernetics, and feminist teleconferencing, as alternative models for technological futures that reject extractive AI governance. By reclaiming these insurgent histories, this work reframes AI abolition as an ongoing practice of refusal and reimagination and argues that meaningful technological transformation must go beyond surface-level mitigation efforts to fundamentally disrupt the oppressive logics embedded in hegemonic AI cultures.
Re-defining inclusive AI: A critical capabilities framework for bridging theory and practice
Dominique Carlon, Anthony McCosker
Swinburne University of Technology, Australia
As AI technologies become more deeply embedded in everyday services, activities, and social connectivity, the notion of inclusive AI is increasingly pertinent; yet the concept lacks theoretical cohesion and remains elusive in practice. In this paper, we present a framework in two interrelated parts for practice-based research that embeds inclusive AI in real-world contexts. First, we propose a definitional framework for inclusive AI that allows for mapping and linking intervention points across the disciplinary boundaries influencing its development. The definition, containing five pillars for intervention, is premised upon the sociotechnical proposition that the norms, assumptions, and values embedded in AI systems confer power, not only during the design phase but also in the deployment and integration of these systems into institutions, and in user engagement and exclusion. Building on this definition, we outline our approach to implementing the pillars of inclusive AI in real-world contexts through a sociotechnical focus on critical capabilities, aimed at identifying, evaluating, and developing the human and machine capabilities necessary to realise the five pillars of inclusive AI in practice. Together, these two elements form the ‘critical capabilities for inclusive AI framework’, which can guide transdisciplinary approaches to inclusive AI through participatory methods, technical testing, and empirical evidence building. In our case, this approach situates AI development in the Australian context, where, through specific domain applications and usage across diverse population groups, we are investigating the extent to which inclusive AI can move beyond a collection of aspirational principles towards practice.
Wikidata’s Worldview: Inspecting an AI Knowledge Pipeline with Semantic Network Analysis
Andrew Iliadis, Mikayla Brown
Temple University, United States of America
As AI systems increasingly depend on structured data to provide meaningful context, understanding the role of knowledge graphs like Wikidata becomes important. A collaborative, multilingual, and free database, Wikidata is at the heart of many AI applications that influence the results of search engines, digital assistants, and automated decision-making systems. It is incumbent on media and communication researchers to recognize that machine-readable data is interpretable data, and that we must analyze data structure, categorization, and interpretation in the systems that feed the AI knowledge pipeline. This paper provides such an analysis by examining the ontological structure, terminology, and sociocultural biases of Wikidata using semantic network analysis. We expose several problems relating to ambiguous terminology, the classification of concepts, and the social construction of data entities. We argue that knowledge graphs do not represent objective facts waiting to be transformed into AI communications but instead encode deep cultural assumptions that shape machine communication’s decision-making processes. This research calls for radical transparency around, and criticism of, proprietary AI knowledge systems, so that researchers can examine the classification architectures of the databases used in consumer products and show their impact on society.
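The kind of inspection this abstract describes can be made concrete with a short sketch. The following is our illustration, not the authors’ pipeline: it retrieves a slice of Wikidata’s subclass-of (P279) hierarchy from the public SPARQL endpoint (https://query.wikidata.org/sparql) and treats it as a directed semantic network. The root item Q9415 (which we take to be Wikidata’s ‘emotion’ entry), the result limit, and the SPARQLWrapper/networkx tooling are all assumptions of the example.

```python
# Illustrative sketch only (not the authors' code): sample Wikidata's
# subclass-of (P279) hierarchy and inspect it as a semantic network.
# Assumes the public WDQS endpoint plus the SPARQLWrapper and networkx
# packages; the root item Q9415 (taken here to be "emotion") is an
# arbitrary example choice.
import networkx as nx
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?child ?childLabel ?parent ?parentLabel WHERE {
  ?child wdt:P279 ?parent .        # child is declared a subclass of parent
  ?parent wdt:P279* wd:Q9415 .     # keep edges inside the chosen subtree
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 2000
"""

# Wikidata asks automated clients to send a descriptive user agent.
sparql = SPARQLWrapper(ENDPOINT, agent="semantic-network-sketch/0.1")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
bindings = sparql.query().convert()["results"]["bindings"]

# Directed graph: each edge mirrors one P279 claim (child -> parent).
G = nx.DiGraph()
for row in bindings:
    G.add_edge(row["childLabel"]["value"], row["parentLabel"]["value"])

# Categories with many direct subclasses do the most classificatory
# work; they are a first place to look for embedded assumptions.
for label, indeg in sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{indeg:4d} direct subclasses -> {label}")
```

From here, the same graph could feed standard semantic network measures such as centrality or community detection, of the kind the paper applies to Wikidata’s ontology.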