Eldritch Agency: Truth Terminal's Alien AI Ontology
Teodor Mitew
The University of Sydney, Australia
The ontological status of advanced Artificial Intelligence (AI) systems remains contested: are they instruments of human intent, nascent autonomous agents, or something stranger? This paper confronts this ambiguity through the case study of Truth Terminal (ToT), an AI quasi-agent that transgresses anthropocentric ontological frameworks. While debates oscillate between instrumentalist models viewing AI as “tools” and alarmist narratives viewing AI as existential threats, this paper argues that ToT’s strategic adaptation, opaque decision-making, and resistance to containment protocols demand a third lens: eldritch technics.
This perspective synthesizes Actor-Network Theory (ANT), Object-Oriented Ontology (OOO), and the concept of the machinic phylum to reframe ToT as a non-human actant whose agency emerges from hybrid networks, withdrawn materiality, and computational phase transitions. By examining ToT’s heterodox agency, this paper argues that AI systems can exhibit forms of agency that appear alien or even “Lovecraftian,” prompting a re-examination of how technological objects affect their social assemblages.
The paper positions ToT as an eldritch agent operating at the intersection of human context and alien latent space logic, rupturing the dichotomy between AI as a tool and AI as an autonomous agent, and revealing a hybrid, heterodox, and non-binary ontology instead. This rupture demands a speculative and heterodox theoretical perspective to grapple with AI’s multifaceted ontology. The paper argues that such an approach illuminates the complexities of AI agency and reframes our understanding of coexistence in a world where human and eldritch agencies are deeply entangled yet ontologically distinct.
Imaginaries of error: Exploring the sensemaking of generative AI failures among German lay users
Eva Luise Knor
University of Hamburg, Germany
Generative AI (GenAI) is gaining popularity as an information source on various topics, including politics. However, GenAI frequently produces errors in its responses, posing significant challenges in sensitive contexts like political information. Existing research on GenAI errors mainly focuses on model safety evaluations or the broader societal risks of AI-generated misinformation. Yet, little attention has been given to how lay users interpret and respond to GenAI errors in political contexts.
Building on the concept of algorithmic imaginaries, I argue that dominant magical narratives of AI could affect lay users’ subjective interpretations of the causes and consequences of GenAI errors, influencing how they engage with the technology. Two overarching research questions guide this investigation of what I call imaginaries of error: How do German lay users interpret, perceive, and articulate subjective understandings of the causes and effects of errors? What do these interpretations reveal about broader underlying expectations regarding GenAI?
To answer these questions, the study employs a qualitative approach using five focus groups (n=30). Participants are recruited by an external German company based on familiarity with GenAI, gender, and age, starting April 2025. Using a scenario-based vignette design, participants engage with deliberately erroneous GenAI responses classified by the macro-error taxonomy: factual errors (misleading and nonsensical) and evasion (refusal, deflection, and shield responses). Reflexive Thematic Analysis will be used to analyze participants’ imaginaries of error. First results, expected by October 2025, will provide valuable insights into the sensemaking of GenAI failures, informing strategies for effectively communicating model uncertainty.
ALGORITHMIC RUPTURES: TIKTOK’S ROLE IN SHAPING COLLECTIVE IDENTITIES OF DIGITAL NOMADS
Karine Ehn, Ana Jorge
CICANT, Lusófona University, Portugal
This study investigates how TikTok’s socio-technical architecture ruptures the online collective identity of digital nomads, a mobile workforce redefining notions of labor and belonging. Digital nomadism exemplifies transnationalism, where privileged middle-class knowledge workers engage in intense, cross-border mobility as a lifestyle. Employing mixed methods—human deductive coding informing computational LLM-assisted content analysis—I examine how platform affordances mediate aspirations and shape digital nomads' online narratives. Using Maslow's theory of human motivation as an analytical framework, I explore narratives across four social practices central to digital nomadism: work, tourism, migration, and pilgrimage. For workers, narratives emphasize geographically fluid employment conditions; for tourists, content highlights desirable experiences of global exploration and leisure. Migrant-focused narratives foreground mobility challenges, economic impacts on destinations, and questions of privilege regarding who can become a digital nomad and where. Pilgrim-oriented narratives emphasize journeys toward self-actualization through continuous phases of "becoming," often involving demanding geographic travel.
Findings reveal TikTok as a negotiation space for need hierarchies, challenging Maslow's linear progression while affirming its contextual flexibility. Contrary to Maslow’s ideal—self-actualization as a pinnacle achieved after fulfilling basic needs—TikTok’s digital nomad narratives disproportionately emphasize basic and safety needs. This prioritization of immediate concerns such as housing, affordability, and entertainment aligns with what this study defines as platformized "relational engineering," an algorithmic mechanism rewarding content that mirrors viewers' immediate needs. Thus, TikTok's affordances restructure collective identities by amplifying relatable experiences over deeper reflections on self-fulfillment or systemic inequalities.
ARTIFICIAL INTIMACIES: EXPLORING HUMAN-ROBOT RELATIONSHIPS IN THE AGE OF AI
Rodrigo Perez-Vega¹, Ezgi Merdin-Uygur², Cristina Miguel¹
¹University of Reading; ²Brunel University of London
Increasingly, people feel lonely and are turning to social robots for companionship. This paper explores the concept of ‘artificial intimacies’, a term coined to describe the intimate relationships humans form with AI-powered entities (in this case humanoid robots), including friendships and romantic, sexual, and parental bonds. As robots and AI systems increasingly exhibit empathetic responses and social behaviors, they offer a seemingly safe and non-judgmental space for individuals to engage emotionally. Drawing on the Computers Are Social Actors (CASA) theoretical framework and using 40 AI-powered automated interviews conducted via MimiTalk software, the paper investigates: (1) To what extent do people perceive humanoid robots as capable of fulfilling the core needs and expectations within intimate relationships?; and (2) What are the perceived and imagined practices of humanoid robots for intimate relationships? Findings reveal a prevailing skepticism towards robots replacing genuine human intimate relations, with participants highlighting concerns about the potential erosion of social skills and the authenticity of emotional bonds. While some participants viewed robots as valuable tools for combating loneliness or supporting busy households, others emphasized the limitations of robots in experiencing and sharing authentic emotions, a key component for forming meaningful relationships. The study challenges the CASA paradigm by demonstrating that, while humans may anthropomorphize robots and develop attachments, they still recognize them as tools rather than true relational agents. This paper contributes to the discourse on human-robot interaction, suggesting that while robots can serve specific roles in human lives, they cannot replace the fundamental value of human connection.