Information-Seeking in the Age of Generative AI: Factors That Influence the Behavioural Intention of Media Students to Use ChatGPT
Mohammad Mafizul Islam
Hochschule Darmstadt, Germany
Relevance & Research Question: The widespread use of generative AI tools such as ChatGPT is reshaping how students search for, evaluate, and apply information. This shift is especially critical for media students, who will become future communicators, journalists, and researchers. However, little is known about the specific psychological, social, and technical factors that influence their adoption of generative AI as an academic tool. This study investigates the behavioural intention of media students to use ChatGPT for academic information-seeking. It focuses on understanding how constructs such as performance expectancy, trust, perceived humanness, and availability influence their decision-making.
Methods & Data: The study employed a concurrent mixed-methods design. Quantitative data were collected via an online survey (n = 103) distributed among media students at a German university of applied sciences. Constructs from the UTAUT2 model were adapted and extended to include trust, perceived humanness, and availability. The data were analysed using PLS-SEM. In parallel, six semi-structured interviews were conducted and analysed using concept-driven coding in MAXQDA 24. The integration of qualitative and quantitative data provides both generalisability and depth.
Results: Quantitative findings reveal that performance expectancy and perceived humanness significantly predict behavioural intention to use ChatGPT, while effort expectancy, hedonic motivation, and trust (measured as privacy-related trust) do not. Availability was found to be a key practical driver but did not moderate use behaviour as expected. Qualitative results provide a nuanced picture: students appreciate ChatGPT’s speed and interactive design but consistently question its factual accuracy. Epistemic trust, not privacy, emerges as the decisive factor in usage patterns, leading to widespread verification practices. Human-like responses increase engagement but do not guarantee trust.
Added Value: This research offers one of the first empirically grounded models of generative AI adoption among media students using an extended UTAUT2 framework. It introduces and validates the novel constructs of perceived humanness and availability in technology adoption theory. Practically, the study provides educators, developers, and policymakers with insights into how students use and critically assess AI tools in academic contexts. It highlights the urgent need for AI literacy initiatives that emphasise epistemic caution alongside technical competence.
Exploring Differences in ChatGPT Adoption and Usage in Spain: Contrasting Survey and Metered Data Findings
Melanie Revilla, Carlos Ochoa
RECSM-UPF, Spain
Relevance & Research Question
Metered data—a type of digital trace data collected through a tracking application (a “meter”) installed by participants on their browsing devices (PCs, smartphones, or tablets), which records at least the URLs of visited web pages—have attracted growing interest due to their granularity and continuity. However, metered data are also subject to errors, which may differ from those found in survey data. Thus, the main goal of this study is to examine the extent to which results obtained from an online survey differ from those collected via a meter, providing new empirical evidence on a timely and relevant topic with a significant longitudinal dimension: ChatGPT adoption and engagement. Specifically, we aim to advance existing research by investigating the factors associated with variation in the discrepancies between survey-based and metered data—that is, to assess when such differences are likely to be larger or smaller (e.g., we expect larger discrepancies for behaviors that occurred further in the past).
Methods & Data
To achieve this, we use data from the Netquest opt-in online panel (www.netquest.com) in Spain, comparing various indicators of ChatGPT adoption and engagement between February 2023 and April 2025 across two independent samples of 2,100 panellists each. One sample responded to an online survey, while the other provided metered data, which were used to construct variables comparable to those in the survey. Profiling variables were available for both samples, including socio-demographics, technology ownership at home, and devices used. We employ both descriptive analyses and regression models to compare the two samples and examine whether different factors (e.g., the time elapsed since an event) influence the observed differences.
Results
Preliminary results suggest that differences between the survey and metered samples can sometimes be substantial, although the magnitude of these differences varies depending on the specific concept being measured. Different factors, such as the time elapsed since the event, also play a role. Final results are still forthcoming.
Added Value
This research advances understanding of how different data collection methods influence findings, particularly in longitudinal studies, while also offering new insights into ChatGPT’s use in Spanish society.
What do we talk about when we talk to LLMs?
Denis Bonnay1, Juhi Kulshrestha2, Marcos Oliveira3, Orkan Dolay4
1Université Paris Nanterre, France; 2Aalto University, Finland; 3Vrije Universiteit Amsterdam; 4Bilendi
Relevance & Research Question
Commercial LLMs are now part of everyday online life, but we still know strikingly little about what people actually do with them in practice. Here, we present empirical insights into the content of the messages that people exchange with chatbots such as ChatGPT.
There are still few consensus findings in this area. A very recent study by OpenAI (Chatterji et al., 2025) reported surprising results that partly contradicted earlier research, finding limited gender and education differences and very few “personal” interactions between users and ChatGPT.
We use extensive GDPR-compliant data to address two questions:
RQ1: Can these recent findings regarding topic distribution and gender/education differences be replicated?
RQ2: How personal do conversations with LLMs get, and do they become more personal over time?
Methods & Data
Our data cover 5 months of conversation records from panel members in Brazil, Germany, Mexico and Spain who agreed to share their internet activity on laptop and/or mobile devices (01.06.25–31.10.25; N = 45,200 participants). We collect both HTML streams and in-app content for six major AI platforms: ChatGPT, Claude, Copilot, Gemini, Meta and Perplexity. We examine the content of the conversations using LLM-based classifiers. In particular, we reproduce the same prompts and data inputs as those used in Chatterji et al. (2025) for comparability (RQ1).
Results
Available mid-January 2026 – we are aware this is rather late, but we thought the data was exciting enough to try and share at GOR!
Added Value
Our data uniquely combine multiple AI system sources with reliable sociodemographic information, putting us in a good position to understand and assess the divergences in previous studies, which were limited in terms of LLM sources and user profiling.
LLM-based classifiers enable fine-grained classification of high volumes of data, allowing for new approaches to RQ2 beyond mere content classification. We believe the extent to which people get personal with LLMs is underestimated when it is assessed merely through topic classification, since topics other than “self-expression” and “relationships”, such as practical guidance or multimedia, may also involve personal engagement with the systems.