Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview
AI & Beyond - Hybrid
Presentations
ID: 399 / AI & Beyond - HY: 1
Paper Proposal Remote (ONLY for Paper Proposals in English)
Topics: Method - Content/Textual/Visual Analysis, Method - Discourse Analysis, Topic - Artificial Intelligence/Machine Learning/Generative and Synthetic Media, Topic - Memes/Humour/Popular Culture
Keywords: meme analysis, machine learning, boundary work, public understanding of science, collective identity

ON JOKES AND BOUNDARIES: NEGOTIATING THE VALUE OF ML AND AI WORK THROUGH MEMES
1Technical University of Munich, Germany, School of Social Sciences and Technology, Department of Science, Technology and Society; 2Aarhus University, Denmark, Department of Political Science, Danish Centre for Studies in Research and Research Policy

The rise of Large Language Models (LLMs) capable of generating code has unsettled core assumptions about the nature and value of coding-related professions. In this paper, we analyse how memes navigate shifting perceptions of value and worth in this situation, drawing on valuation studies and scholarship on humorous online memes. Our dataset comprises 401 memes tagged with 'machine learning' and 'artificial intelligence' on ProgrammerHumor.io, from which we purposively sampled 60 image-macro memes whose humour assigns value to AI/ML work. Through thematic analysis of the textual and visual elements of these memes, we demonstrate how they establish symbolic boundaries between various groups of coders, highlight the perceived limitations of AI-generated code and address the uncertainties surrounding the future of coding work. Firstly, memes differentiate between coders based on their relationship with AI, contrasting skilled programmers who have control over code with 'prompt engineers' and 'vibe coders' who rely on LLMs. Secondly, they highlight the perceived shortcomings of AI-generated code, particularly its unreliability and the burden of debugging, thereby asserting the continued superiority of human expertise. Finally, memes articulate automation anxiety by addressing fears of job replacement, while also emphasising AI's dependence on human-generated code. Together, the memes constitute a dynamic cultural arena in which professionals negotiate the impact of AI coding tools, construct hierarchies of worth and process the rapid changes in valuation in their field.
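To make the sampling step concrete, here is a minimal sketch of criterion-based filtering over a tag-scraped meme corpus. The file name, record fields, and keyword list are hypothetical illustrations, not the authors' actual pipeline.

```python
import json
import random

# Hypothetical corpus file: 401 memes scraped from ProgrammerHumor.io,
# each record carrying its tags, format, and caption text. The field
# names ("tags", "format", "caption") are illustrative assumptions.
with open("memes.json") as f:
    corpus = json.load(f)

# Keep memes carrying both focal tags, as described in the abstract.
tagged = [m for m in corpus
          if {"machine learning", "artificial intelligence"} <= set(m["tags"])]

# Purposive (criterion-based) selection: image macros whose caption
# speaks to the worth of AI/ML work. The keyword screen is a crude
# stand-in for the authors' qualitative relevance judgement.
VALUE_TERMS = ("real programmer", "prompt engineer", "vibe coder",
               "replace", "debug", "useless", "skill")
candidates = [m for m in tagged
              if m["format"] == "image_macro"
              and any(term in m["caption"].lower() for term in VALUE_TERMS)]

# Cap the analytic sample at 60, as in the paper; the random draw is
# only a reproducible placeholder for the authors' deliberate choices.
random.seed(42)
sample = random.sample(candidates, k=min(60, len(candidates)))
print(f"{len(tagged)} tagged memes -> {len(sample)} sampled for coding")
```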
ID: 711 / AI & Beyond - HY: 2
Paper Proposal Remote (ONLY for Paper Proposals in English)
Topics: Method - Discourse Analysis, Method - Historical/Comparative Historical, Topic - Activism/Social Movements/Social Justice, Topic - Colonialism/Post-Colonialism/De-colonialism/Indigenous Studies
Keywords: critical AI, AI empire, AI abolition, carceral AI, decolonial cyberpractices

AI ABOLITION AS DECOLONIAL RUPTURE IN AI EMPIRE: RADICAL CYBERPRACTICES FROM BELOW
1Syracuse University, United States of America; 2University of Colorado Denver, United States of America

Existing research has long established that AI is not just a collection of technical tools but an expansive system of governance: what scholars refer to as AI empire, a system deeply embedded in racial capitalism, carceral logics, colonial control, and heteropatriarchy (Crawford, 2021; Tacheva & Ramasubramanian, 2023). However, much of the critical scholarship in AI studies tends to focus on AI's most visible harms, such as mass surveillance, biased decision-making, and AI's role in warfare. This paper argues that AI empire's violence is far more insidious and pervasive, extending beyond these explicit harms to algorithmic systems that actively shape docile populations and reinforce existing hierarchies of power (Benjamin, 2019). In response, this work positions AI abolition as a necessary and decisive rupture that rejects predominantly reformist interventions, which merely tweak AI's carceral mechanisms without challenging the underlying structures of domination. Drawing from the decolonial queer feminist scholarship of early cybercultural critics like Chela Sandoval, this paper examines historical counter-technological practices, including Indigenous computing, socialist cybernetics, and feminist teleconferencing, as alternative models for technological futures that reject extractive AI governance. By reclaiming these insurgent histories, this work reframes AI abolition as an ongoing practice of refusal and reimagination and argues that meaningful technological transformation must go beyond surface-level mitigation efforts to fundamentally disrupt the oppressive logics embedded in hegemonic AI cultures.
ID: 718 / AI & Beyond - HY: 3
Paper Proposal Onsite - English
Topics: Method - Critique/Criticism/Theory, Method - Ethics/Fairness/Accountability/Transparency Analysis, Topic - Academia/Scholarly Practice/Research Practices, Topic - Artificial Intelligence/Machine Learning/Generative and Synthetic Media
Keywords: Inclusive AI, Responsible AI, Transdisciplinary methods, digital divide, literacies

FROM PRINCIPLES TO PRACTICE: PARTICIPATORY INFRASTRUCTURES FOR INCLUSIVE AI
Swinburne University of Technology, Australia

As AI technologies become more deeply embedded in everyday services, activities, and social connectivity, the notion of inclusive AI is increasingly pertinent; yet the concept lacks theoretical cohesion and remains elusive in practice. In this paper we propose a practice-based framework for inclusive AI, developed through collaborations with community, humanitarian, health, legal, and learning organisations. We conceptualise inclusive AI not as a technological output but as a participatory process, situating inclusion at the centre of capability building, equitable benefit, and collective participation. Central to this reframing is the role of participatory infrastructures and intermediaries (individuals, organisations, and public or online spaces) who act as capability converters, translating technical features into community-relevant outcomes. Our evaluative framework extends responsible AI by looking beyond principles and technical fixes to ask whether systems enable communities to achieve tangible benefits and equitable participation across design, deployment, and governance. It identifies four dimensions of inclusive AI: safety and accountability, community participation, community benefit, and sustainability and maintenance. We argue that achieving inclusivity requires infrastructures of translation, adaptation, and care, supported by intermediaries who are both technically informed and culturally situated. The framework offers researchers and practitioners a conceptual and evaluative tool for embedding inclusivity throughout the AI lifecycle, ensuring systems are not only technically sound but socially meaningful.
ID: 979 / AI & Beyond - HY: 4
Paper Proposal Onsite - English
Topics: Method - Ethics/Fairness/Accountability/Transparency Analysis, Method - Network Analysis (Social/Semantic), Topic - Infrastructure/Materiality/Sustainability, Topic - Platform Studies
Keywords: Knowledge Graphs; Wikidata; Artificial Intelligence; Ontology; LLMs

Wikidata’s Worldview: Inspecting an AI Knowledge Pipeline with Semantic Network Analysis
Temple University, United States of America

As AI systems increasingly depend on structured data to provide meaningful context, understanding the role of knowledge graphs like Wikidata becomes important. A collaborative, multilingual, and free database, Wikidata sits at the heart of many AI applications, influencing the results of search engines, digital assistants, and automated decision-making systems. It is incumbent on media and communication researchers to understand that machine-readable data is interpretable data, and that we must analyze data structure, categorization, and interpretation in the systems that feed the AI knowledge pipeline. This paper provides such an analysis by examining the ontological structure, terminology, and sociocultural biases of Wikidata using semantic network analysis. We expose several problems relating to ambiguous terminology, the classification of concepts, and the social construction of data entities. We argue that knowledge graphs do not represent objective facts waiting to be transformed into AI communications but instead encode deep cultural assumptions that shape machine communication’s decision-making processes. This research calls for radical transparency in, and criticism of, proprietary AI knowledge systems, so that researchers can examine the classification architecture of the databases used in consumer products and show their impact on society.
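For readers unfamiliar with the method, here is a minimal sketch of what semantic network analysis of Wikidata's ontology can look like: pull "subclass of" relations from the public SPARQL endpoint and treat them as a directed network whose hubs expose the graph's organising categories. The seed class, query shape, and structural probe are illustrative assumptions, not the paper's actual pipeline.

```python
import requests
import networkx as nx

# Wikidata's public SPARQL endpoint (real). The query is an
# illustrative slice of the class hierarchy; seed class Q901
# ("scientist") is an assumption chosen for this sketch.
ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?childLabel ?parentLabel WHERE {
  ?child wdt:P279 ?parent .       # P279 = "subclass of"
  ?child wdt:P279* wd:Q901 .      # everything classed under "scientist"
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 2000
"""

resp = requests.get(ENDPOINT,
                    params={"query": QUERY, "format": "json"},
                    headers={"User-Agent": "semantic-net-sketch/0.1"})
rows = resp.json()["results"]["bindings"]

# Build a directed semantic network: an edge child -> parent records
# one "subclass of" assertion in Wikidata's ontology.
G = nx.DiGraph()
for r in rows:
    G.add_edge(r["childLabel"]["value"], r["parentLabel"]["value"])

# A first structural probe: classes with many direct subclasses act as
# the hierarchy's organising categories, which is exactly where the
# terminology and classification choices the paper examines surface.
hubs = sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)[:10]
print(f"{G.number_of_nodes()} classes, {G.number_of_edges()} subclass links")
for label, n_children in hubs:
    print(f"{n_children:4d} direct subclasses under: {label}")
```

From the same graph object, measures such as betweenness centrality or community detection could then be applied to probe how concepts cluster, in the spirit of the analysis the abstract describes.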
