Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session 5.3: LLMs and Synthetic Survey Data
Time: Tuesday, 01/Apr/2025, 3:45pm - 4:45pm

Session Chair: Johanna Hölzl, University of Mannheim, Germany
Location: Hörsaal C


Presentations

Synthetic Respondents, Real Bias? Investigating AI-Generated Survey Responses

Charlotte Pauline Müller1, Bella Struminskaya2, Peter Lugtig2

1Lund University, Sweden; 2Utrecht University, The Netherlands

Relevance & Research Question

Simulating survey respondents with LLMs has recently been discussed as a promising data collection tool in academia and market research. However, previous research has shown that LLMs are likely to reproduce the human biases and stereotypes present in their training data. We therefore investigate the potential benefits and challenges of creating synthetic response datasets, following two major aims: (1) to investigate whether AI tools can replace real survey respondents and, if so, for which questions and topics, and (2) to explore whether intentional prompts reveal underlying biases in AI predictions.

Methods & Data

We compare existing survey data from the German General Social Survey (Allbus) 2021 to AI-generated synthetic data produced with the OpenAI model GPT-4. For this, we took a random sample of 100 respondents from the Allbus dataset and created a so-called AI agent for each of them. Each agent was calibrated with general instructions and individual background information (14 variables). We chose to predict three different types of outcomes (a numerical, a binary, and an open-text/string format), each carrying the potential to provoke certain biases, such as social desirability and gender and age stereotypes. Furthermore, each item was tested across different contextual factors, such as AI model calibration and language settings.
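To make the setup concrete, the following minimal Python sketch shows how such a persona-calibrated AI agent could be queried for one synthetic answer via the OpenAI API. The prompt wording, variable names, background values, and example question are illustrative assumptions, not the authors' actual materials.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def synthetic_answer(background: dict, question: str) -> str:
    # Calibrate the agent with the respondent's background variables
    # (14 variables in the study; this dictionary is a stand-in).
    persona = "; ".join(f"{key}: {value}" for key, value in background.items())
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are a survey respondent with the following background: "
                         f"{persona}. Answer every question as this person would.")},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Hypothetical example call; the background values are not taken from Allbus.
answer = synthetic_answer(
    {"age": 54, "gender": "female", "region": "Bavaria", "education": "vocational degree"},
    "On a scale from 0 to 10, how much do you trust the federal government?",
)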

Results

We found a severe lack of accuracy in the simulation of survey data for both numerical (r = -0.07, p = 0.6) and binary outcomes (χ²(1) = 0.61, p = 0.43, V = 0.1), while the explanatory power of the background variables for the predicted outcome was high for both the former (R² = 0.4) and the latter (R² = 0.25). Furthermore, we found no difference in prediction accuracy between different input languages and AI model calibrations. When predicting open-text answers, individual background information was generally well reflected by the AI tool. However, several potential biases became apparent, such as age, gender, and regional biases.
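The reported figures correspond to standard association measures; the sketch below shows how they could be computed with SciPy on paired observed and synthetic answers. The arrays are random stand-ins, so the printed numbers will not match the results above.

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
observed_num = rng.integers(0, 11, size=100)   # observed numerical answers (toy data)
synthetic_num = rng.integers(0, 11, size=100)  # GPT-generated counterparts (toy data)
observed_bin = rng.integers(0, 2, size=100)    # observed binary answers (toy data)
synthetic_bin = rng.integers(0, 2, size=100)   # GPT-generated counterparts (toy data)

# Pearson correlation between observed and synthetic numerical answers
r, p_r = stats.pearsonr(observed_num, synthetic_num)

# Chi-square test and Cramér's V for the binary item
table = pd.crosstab(observed_bin, synthetic_bin)
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
cramers_v = np.sqrt(chi2 / (table.to_numpy().sum() * (min(table.shape) - 1)))

print(f"r = {r:.2f} (p = {p_r:.2f}); chi²({dof}) = {chi2:.2f}, p = {p_chi:.2f}, V = {cramers_v:.2f}")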

Added Value

Our research contributes to a more ethically responsible application of AI tools in data simulation, highlighting an urgent need for more caution in the already ongoing use of AI-generated datasets.



Talk, talk, talk - unlocking AI for conversational research

Barbara von Corvin, Francesca Biscione

Human8 Europe, Belgium

Relevance & Research Questions

AI moderation plugins can now also be used for conversational research, and opinions on conversational AI and AI-moderated interviews abound. These tools are often assumed to bring more speed and efficiency, or are seen as a way to conduct qualitative research with samples as large as in quantitative research. Human8 has used AI moderation technology and examined where and how it can provide benefits and where its limitations lie.

Methods & Data

In 2024, we implemented AI-moderated interviews into our research. The participant reads or is read a question and can then answer it using their voice; research participants record their responses instead of typing them. The AI automatically transcribes the feedback and goes one step further by probing intelligently, asking relevant follow-up questions that take into account both the project objectives we shared and the participant's feedback. The AI also helps with data processing. In an A/B experiment with n = 30, we compared traditional feedback in insight communities, where participants typically engage asynchronously with typed responses, to AI-moderated interviews, where participants respond vocally and receive dynamic, real-time follow-up questions from the AI.
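As an illustration of the probing step described above (not Human8's actual pipeline), the following hedged Python sketch asks an LLM for one follow-up question given the project objectives and a transcribed answer; the model id and prompt wording are assumptions.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def follow_up_question(objectives: str, transcript: str) -> str:
    # Generate one relevant probe that considers both the study objectives
    # and the participant's transcribed answer.
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": ("You are a qualitative research moderator. "
                         f"The project objectives are: {objectives}")},
            {"role": "user",
             "content": (f"The participant answered: \"{transcript}\" "
                         "Ask exactly one relevant follow-up question.")},
        ],
    )
    return completion.choices[0].message.content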

Results

We found that AI tools for conversational research are an enrichment, linked to a variety of benefits and opening up new options for qualitative research. We captured twice as much data. Talking freely allowed participants to avoid over-rationalizing or filtering their emotions, resulting in feedback that was richer, more emotional, and more contextual. AI now enables us to use voice at scale, and participants were highly satisfied: they liked using the tool. The combination of voice, AI-driven probing and processing of voice data, and human analysis allowed us to unlock actionable insights, giving our client the depth they needed to fuel their activation strategy.

Added Value

Our results illustrate use cases of AI moderation technology and show what to consider when using these tools. We will share our learnings on how to unlock the potential of these tools and open the discussion with the audience.



 