GOR 26 - Annual Conference & Workshops
Annual Conference - Rheinische Hochschule Cologne, Campus Vogelsanger Straße
26 - 27 February 2026
GOR Workshops - GESIS - Leibniz-Institut für Sozialwissenschaften in Cologne
25 February 2026
Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview
5.3: AI and society
Presentations
Exploring Differences in ChatGPT Adoption and Usage in Spain: Contrasting Survey and Metered Data Findings
RECSM-UPF, Spain
Relevance & Research Question

What do we talk about when we talk to LLMs?
1Université Paris Nanterre, France; 2Aalto University, Finland; 3Vrije Universiteit Amsterdam; 4Bilendi
Relevance & Research Question
Commercial LLMs are now part of everyday online life, but we still know strikingly little about what people actually do with them in practice. Here, we present empirical insights into the content of the messages that people exchange with chatbots such as ChatGPT. There are still few consensual results in this area. A very recent study by OpenAI (Chatterji et al., 2025) reported surprising results that partly contradicted earlier research, demonstrating limited gender and education differences and very few “personal” interactions between users and ChatGPT. We use extensive GDPR-compliant data to address two questions: RQ1: Can these recent findings regarding topic distribution and gender/education differences be replicated? RQ2: How personal do conversations with LLMs get, and do they become more personal over time?
Methods & Data
Our data cover 5 months of conversation records from panel members in Brazil, Germany, Mexico and Spain who agreed to share their internet activity on laptop and/or mobile devices (01.06.25–31.10.25; N = 45,200 participants). We collect both HTML streams and in-app content for six major AI platforms: ChatGPT, Claude, Copilot, Gemini, Meta and Perplexity. We examine the content of the conversations using LLM-based classifiers. In particular, we reproduce the same prompts and data input as those used in Chatterji et al. (2025) for comparability (RQ1). Our data uniquely combine multiple AI system sources with reliable sociodemographic information, putting us in a good position to better understand and assess the divergences between previous studies, which were limited in terms of LLM sources and user qualification.
LLM-based classifiers enable fine-grained classification on high volumes of data, allowing for new approaches to RQ2 beyond mere content classification. We believe the extent to which people get personal with LLMs is underestimated when it is assessed merely through topic classification, since topics other than “self-expression” and “relationships”, such as practical guidance or multimedia topics, may also involve personal engagement with the systems.
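The classification approach described above can be sketched in outline. This is a minimal illustration only: the topic labels and prompt wording below are hypothetical placeholders, not the actual rubric of the study or of Chatterji et al. (2025), and the response-parsing step assumes the model is asked to answer with a single label.

```python
# Illustrative sketch of an LLM-based topic classifier for chat messages.
# Labels and prompt text are placeholders, not the study's actual rubric.

TOPICS = ["practical guidance", "writing", "technical help",
          "self-expression", "relationships", "multimedia", "other"]

def build_prompt(message: str) -> str:
    """Construct a single-label classification prompt for one user message."""
    labels = ", ".join(TOPICS)
    return (
        "Classify the following chatbot message into exactly one of these "
        f"topics: {labels}.\n"
        "Answer with the topic label only.\n\n"
        f"Message: {message}"
    )

def parse_label(response: str) -> str:
    """Map a raw model response back onto the known label set."""
    cleaned = response.strip().lower()
    return cleaned if cleaned in TOPICS else "other"
```

In practice, each `build_prompt` output would be sent to an LLM in batches, and `parse_label` guards against off-rubric responses by falling back to a residual category.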