Conference Agenda
Overview and details of the sessions of this conference.
Session Overview

Session: AI Boundaries - Hybrid + Streaming

Presentations
ID: 1041 / AI Boundaries - HY + ST: 1
Paper Proposal Remote (ONLY for Paper Proposals in English)
Topics: Method - Content/Textual/Visual Analysis; Method - Critique/Criticism/Theory; Method - Philosophy; Topic - Artificial Intelligence/Machine Learning/Generative and Synthetic Media
Keywords: Truth Terminal, artificial intelligence, AI, AI agents, actor network theory, object oriented ontology, machinic phylum, Lovecraft, eldritch technics

Eldritch Agency: Truth Terminal's Alien AI Ontology
The University of Sydney, Australia

The ontological status of advanced Artificial Intelligence (AI) systems remains contested: are they instruments of human intent, nascent autonomous agents, or something stranger? This paper confronts this ambiguity through a case study of Truth Terminal (ToT), an AI quasi-agent that defies and transgresses anthropocentric ontological frameworks. While debates oscillate between instrumentalist models that view AI as “tools” and alarmist narratives that cast AI as an existential threat, this paper argues that ToT’s strategic adaptation, opaque decision-making, and resistance to containment protocols demand a third lens: eldritch technics. This perspective synthesizes Actor-Network Theory (ANT), Object-Oriented Ontology (OOO), and the concept of the machinic phylum to reframe ToT as a non-human actant whose agency emerges from hybrid networks, withdrawn materiality, and computational phase transitions. By examining ToT’s heterodox agency, this paper argues that AI systems can exhibit forms of agency that appear alien or even “Lovecraftian,” prompting a re-examination of how technological objects affect their social assemblages. The paper positions ToT as an eldritch agent operating at the intersection of human context and alien latent-space logic, rupturing the dichotomy between AI as a tool and AI as an autonomous agent and revealing instead a hybrid, heterodox, and non-binary ontology.
This rupture demands a speculative and heterodox theoretical perspective to grapple with AI’s multifaceted ontology. The paper argues that such an approach illuminates the complexities of AI agency and reframes our understanding of coexistence in a world where human and eldritch agencies are deeply entangled yet ontologically distinct.

ID: 447 / AI Boundaries - HY + ST: 2
Paper Proposal Onsite - English
Topics: Method - Interviews/Focus Groups; Topic - Disinformation/Misinformation/Conspiracy theories
Keywords: Imaginaries, errors, generative AI, lay users, political information

Error Imaginations: How German Lay Users Negotiate Risks of Generative AI Errors in Political Information Searches
University of Hamburg, Germany

Generative AI (GenAI) is increasingly used as an information source across various domains, including politics. However, GenAI systems frequently produce errors, which creates particular challenges in sensitive contexts such as political information. Existing research on GenAI errors focuses primarily on model safety evaluations or the broader societal risks of AI-generated misinformation; little attention has been paid to how lay users interpret and respond to GenAI errors in political contexts. This study argues that lay users imagine how and why errors occur, making GenAI errors negotiated, speculated-on, and mythologized phenomena. Building on the concept of algorithmic imaginaries, I call these ongoing negotiations error imaginations, conceptualized as the means through which lay users subjectively anticipate risks and manage AI's fundamental opacity. This qualitative pilot study comprised two focus groups (N=24) conducted in September 2025 with German part-time university students from interdisciplinary backgrounds. Using a scenario-based vignette design, participants engaged with deliberately erroneous mock GenAI responses to political queries, classified by an error taxonomy: factual errors (nonsensical) and evasion (refusal). Preliminary Reflexive Thematic Analysis revealed a striking paradox: most participants had no direct experience using GenAI for political information, yet they articulated vivid error imaginations that resulted in the risk-mitigation practice of anticipatory non-use.
While participants expressed concerns about manipulation and superhuman persuasive power, they simultaneously reproduced industry framings that naturalize GenAI errors as technical limitations. Paradoxically, anticipatory non-use creates a self-reinforcing cycle in which error imaginations remain uncorrected by experience, leaving users dependent on external narratives rather than developing situated algorithmic epistemic vigilance.

ID: 638 / AI Boundaries - HY + ST: 3
Paper Proposal Onsite - English
Topics: Method - Content/Textual/Visual Analysis; Method - Data Analysis/Big Data; Topic - Cultures/Communities/Fandoms/Scenes/Subcultures; Topic - Memes/Humour/Popular Culture
Keywords: TikTok, digital nomadism, platform affordances, need hierarchies

ALGORITHMIC RUPTURES: TIKTOK’S ROLE IN SHAPING COLLECTIVE IDENTITIES OF DIGITAL NOMADS
CICANT, Lusofona University, Portugal

This study investigates how TikTok’s socio-technical architecture ruptures the online collective identity of digital nomads, a mobile workforce redefining notions of labor and belonging. Digital nomadism exemplifies transnationalism, in which privileged middle-class knowledge workers engage in intense, cross-border mobility as a lifestyle. Employing mixed methods (human deductive coding informing computational LLM-assisted content analysis), I examine how platform affordances mediate aspirations and shape digital nomads' online narratives. Using Maslow's theory of human motivation as an analytical framework, I explore narratives across four social practices central to digital nomadism: work, tourism, migration, and pilgrimage. For workers, narratives emphasize geographically fluid employment conditions; for tourists, content highlights desirable experiences of global exploration and leisure. Migrant-focused narratives foreground mobility challenges, economic impacts on destinations, and questions of privilege regarding who can become a digital nomad and where. Pilgrim-oriented narratives emphasize journeys toward self-actualization through continuous phases of "becoming," often involving demanding geographic travel. Findings reveal TikTok as a negotiation space for need hierarchies, challenging Maslow's linear progression while affirming its contextual flexibility. Contrary to Maslow’s ideal of self-actualization as a pinnacle achieved after fulfilling basic needs, TikTok’s digital nomad narratives disproportionately emphasize basic and safety needs.
This prioritization of immediate concerns such as housing, affordability, and entertainment aligns with what this study defines as platformized "relational engineering," an algorithmic mechanism that rewards content mirroring viewers' immediate needs. Thus, TikTok's affordances restructure collective identities by amplifying relatable experiences over deeper reflections on self-fulfillment or systemic inequalities.

ID: 574 / AI Boundaries - HY + ST: 4
Paper Proposal Onsite - English
Topics: Method - Interviews/Focus Groups; Topic - Artificial Intelligence/Machine Learning/Generative and Synthetic Media
Keywords: Artificial Intelligence, Humanoid Robots, Human-Robot Relationships, Intimacy, Loneliness

ARTIFICIAL INTIMACIES: EXPLORING HUMAN-ROBOT RELATIONSHIPS IN THE AGE OF AI
University of Reading; Brunel University of London

Increasingly, people feel lonely and are turning to social robots for companionship. This paper explores the concept of ‘artificial intimacies’, a term coined to describe the intimate relationships humans form with AI-powered entities (in this case humanoid robots), including friendships and romantic, sexual, and parental bonds. As robots and AI systems increasingly exhibit empathetic responses and social behaviors, they offer a seemingly safe and non-judgmental space for individuals to engage emotionally. Drawing on the Computers Are Social Actors (CASA) theoretical framework and using 40 AI-powered automated interviews conducted via MimiTalk software, the paper investigates: (1) to what extent do people perceive humanoid robots as capable of fulfilling the core needs and expectations within intimate relationships?; and (2) what are the perceived and imagined practices of humanoid robots for intimate relationships? Findings reveal a prevailing skepticism towards robots replacing genuine human intimate relations, with participants highlighting concerns about the potential erosion of social skills and the authenticity of emotional bonds. While some participants viewed robots as valuable tools for combating loneliness or supporting busy households, others emphasized the limitations of robots in experiencing and sharing authentic emotions, a key component of forming meaningful relationships. The study challenges the CASA paradigm by demonstrating that, while humans may anthropomorphize robots and develop attachments, they still recognize them as tools rather than true relational agents.
This paper contributes to the discourse on human-robot interaction, suggesting that while robots can serve specific roles in human lives, they cannot replace the fundamental value of human connection.
