Conference Agenda
Session
Cyborgs & Bots

Presentations
Cyborg Imaginaries: A Computational Grounded Theory of Online Pioneer Community Discussions on Human Augmentation
University of Zurich, Switzerland

Human Augmentation (HA) technologies, such as Brain-Computer Interfaces (BCIs), neurostimulation devices, and microchip implants, are increasingly discussed in online pioneer communities, where early adopters shape imaginaries of technologically mediated human futures. As part of the broader process of digitalization, HA technologies contribute to the platformization of the human body. While these technologies remain experimental, transhumanists and biohackers engage with them as tools for self-enhancement, body modification, and posthuman evolution. These imaginaries are critical to understanding future adoption, yet remain underexplored in scholarly literature. This study applies computational grounded theory (CGT) to analyze discussions on Reddit, identifying emerging sociotechnical imaginaries of HA technologies. Using BERTopic, a transformer-based topic modeling approach, we extract thematic structures from a dataset of 1,503 posts and 60,327 comments spanning 2008–2025. The imaginaries are then defined through qualitative analysis and iterative refinement of the model, ensuring deeper contextual grounding. Preliminary findings reveal three key dimensions of cyborg imaginaries: (1) Beliefs, including aspirations such as immortality and concerns over job automation; (2) Practices, particularly cognitive and sensory augmentation; and (3) Technological Advances, with discussions centered on BCIs, neural implants, and cybernetic enhancements. This extended abstract presents initial results, contributing to broader discussions on digitalization, human-technology integration, and cyborgization.
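For readers unfamiliar with the method, the sketch below illustrates the kind of BERTopic workflow this abstract describes: transformer embeddings of Reddit posts and comments are clustered into topics, which are then refined through qualitative coding. It is a minimal illustration rather than the authors' pipeline; the package calls follow the public bertopic API, but the file name, column name, embedding model, and minimum topic size are assumptions.

```python
# Minimal sketch of a BERTopic topic-modeling pass over a Reddit corpus.
# Assumes the "bertopic" and "sentence-transformers" packages; the CSV file
# and "text" column are hypothetical placeholders, not the authors' data.
import pandas as pd
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

# Load previously collected posts and comments as a list of documents.
docs = pd.read_csv("reddit_ha_corpus.csv")["text"].dropna().tolist()

# Transformer embeddings feed the clustering step that yields topics.
embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
topic_model = BERTopic(embedding_model=embedding_model, min_topic_size=30)

# Fit the model and assign each document to a topic.
topics, probs = topic_model.fit_transform(docs)

# Inspect the thematic structure; qualitative analysis would then refine
# and interpret these topics as sociotechnical imaginaries.
print(topic_model.get_topic_info().head(20))
```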
A Not So Stella(r) Encounter: Discursive Closure in the Age of Artificial Intelligence Chatbots
Tulane University, United States of America

This study traces the communication practices of resistance and refusal that have followed the adoption of a wellness-centered artificial intelligence (AI) chatbot (colloquially known as ‘Stella’) at a private university in the southeastern United States. Administrators initially brought Stella onto campus in October 2023 to elicit reflection on, and cultivate support around, two simple questions: “How are you?” and “What makes you feel that way?” The AI product was to be rolled out in stages across student-facing information technology portals, capitalizing upon iterative care networks documented by university mental health providers. This research considers the following questions: What resistance strategies and imaginaries of refusal did Stella provoke? What obstacles have students encountered when trying to disconnect from Stella? The project draws on in-depth interviews with 25 students who are required to communicate with Stella by virtue of their enrollment at a university that does not allow them to disconnect from the AI chatbot. It is a collaborative undertaking insofar as one of the co-authors of this paper began her empirical journey as a student who had been opted into encounters with Stella. She contributes an autoethnographic quest to disconnect from and deactivate the AI chatbot in an institutional environment that initially refused to grant her the ability to detach. A final source of data includes corporate documentation regarding Stella’s promised affordances. Though this work centers on contestation within a single university, it points to broader concerns accompanying the adoption of AI chatbots across institutions of higher education.

Algorithmic Fairness in Crisis Communication: How AI Chatbots Shape Public Trust and Engagement
Northern Illinois University, United States of America; University of Wisconsin-Madison, United States of America; Peking University, China

As AI-driven chatbots become integral to crisis communication, understanding their impact on public trust, fairness, and engagement is crucial. While chatbots provide real-time, scalable crisis response, concerns about transparency, legitimacy, and algorithmic bias persist. This study examines how AI chatbots influence perceptions of procedural and distributive justice in emergency messaging and whether justice-enhancing chatbot prompts improve public trust and compliance. Through two online experiments (N = 415, Study 1; N = 400, Study 2), we assess (1) whether AI chatbots, comment sections, or static crisis information pages affect fairness perceptions and crisis decision-making, and (2) how chatbot messaging strategies shape engagement and risk behavior. Findings from Study 1 reveal that chatbots enhance procedural justice but reduce distributive justice, influencing trust and evacuation willingness. Study 2 optimizes chatbot design, demonstrating that procedural justice-enhancing prompts improve fairness perceptions without diminishing information usefulness, thereby increasing public trust and compliance. This research advances algorithmic governance and digital crisis communication by highlighting the trade-offs in AI-mediated public safety interventions. The findings provide actionable insights for emergency managers, platform designers, and policymakers seeking to develop transparent, trustworthy AI-driven crisis response tools that enhance public confidence and engagement in high-stakes situations.

"Just Asking Questions": Doing Our Own Research on Conspiratorial Ideation by Generative AI Chatbots
Digital Media Research Centre, Queensland University of Technology, Australia

Interactive chat systems built on generative artificial intelligence frameworks, such as ChatGPT or Copilot, are increasingly embedded into search engines, web browsers, and operating systems, or available as standalone sites and apps. In a communication ecosystem where information disorder is a persistent threat (Wardle & Derakhshan, 2017), users may utilise these chat systems to seek information about conspiracy theories and false claims. Conducting a systematic review of seven AI-powered chat systems, this study examines how these leading products respond to questions related to conspiracy theories. The nine theories chosen for analysis range from historical cases, such as the JFK assassination conspiracy theories, which have long been debated and debunked, to false claims about more recent events, such as the idea that Hurricane Milton was geoengineered by Democrats. The chat systems were presented with preset questions that adopted a "casually curious" persona, requesting further information about the chosen conspiracy theories. Our findings to date suggest that AI chat systems are less likely to implement strict safety guardrails around historical conspiracy theories, such as the JFK assassination theories. By contrast, the chat systems were more sensitive to conspiracy theories involving certain minority groups.
AI chat systems were also less likely to engage with conspiracy theories related to developing stories and breaking news. In this study, we consider how these patterns affect the role of AI in the information and media ecosystem and explore how AI chat systems may better respond during periods of political transition or division.