Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
B2: AI Tools for Survey Research 1
Time:
Thursday, 22/Feb/2024:
12:00pm - 1:15pm

Session Chair: Timo Lenzner, GESIS - Leibniz Institute for the Social Sciences, Germany
Location: Seminar 3 (Room 1.03/1.04)

Rheinische Fachhochschule Köln, Campus Vogelsanger Straße, Vogelsanger Str. 295, 50825 Cologne, Germany

Presentations

In Search of the Truth. Are synthetic, AI-generated data the future of market research?

Barbara von Corvin, Annelies Verhaeghe

Human8 Europe, Belgium

Relevance & Research Question

Generative AI is the most talked-about topic among insight professionals, with 93% of researchers seeing it as an opportunity for the industry. With the rise of generative AI came the rise of synthetic data: data that is artificially generated through machine learning techniques rather than observed and collected from real-world sources. Are we at the start of a new era? What if synthetic data takes over, being faster, cheaper, and better (also in terms of privacy)?

To understand the potential of generative AI systems, Human8 has been on a journey to conduct what we like to call ‘research on research’. Our primary focus has been on understanding how generative AI impacts qualitative research.

Methods & Data

By developing a personal AI research assistant built on ChatGPT, we have been able to experiment with AI even on confidential research data and put generative AI to the test.

After we had conducted research with an online community of n=86 human participants, we created a synthetic counterpart. With the help of AI and internet data, we developed an online community of 86 synthetic participants that were statistically and structurally identical to the original sample. We asked them the same research questions we had previously asked the human participants. In this way we created an artificial dataset with the same characteristics as the real-world dataset but without including any real-world data, and we compared the outcomes.
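The abstract does not disclose how Human8 built its synthetic participants, but the idea of a roster that is "statistically and structurally identical" to a human sample can be illustrated with a toy sketch. The sketch below (all variable names and values are hypothetical) preserves each demographic variable's marginal distribution exactly by resampling columns independently; a real pipeline would additionally use an LLM to generate the participants' answers.

```python
import random
from collections import Counter

def synthetic_roster(real_sample, variables, seed=0):
    """Build a synthetic roster whose marginal distribution on each
    variable matches the real sample exactly: shuffle each column
    independently, which keeps the marginals but breaks the link to
    any individual real participant."""
    rng = random.Random(seed)
    columns = {v: [p[v] for p in real_sample] for v in variables}
    for values in columns.values():
        rng.shuffle(values)
    n = len(real_sample)
    return [{v: columns[v][i] for v in variables} for i in range(n)]

# Toy example: 86 participants with two (hypothetical) demographics.
real = [{"age_group": a, "gender": g}
        for a, g in zip(["18-29"] * 30 + ["30-49"] * 36 + ["50+"] * 20,
                        ["f", "m"] * 43)]
synthetic = synthetic_roster(real, ["age_group", "gender"])
print(len(synthetic))                                  # 86
print(Counter(p["age_group"] for p in synthetic)
      == Counter(p["age_group"] for p in real))        # True
```

The shuffle deliberately destroys joint distributions between variables; a method that must also preserve correlations (as a truly "statistically identical" twin would) needs a joint model rather than per-column resampling.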

Results

The results show very clearly how the conditions under which ChatGPT is used influence study results. We will share the findings of our experiments and the lessons learned for using synthetic data moving forward.

Added Value

Our presentation will help attendees better understand the differences between data collected from human beings and synthetic, AI-generated data. We will explain the reasons why, and provide guidance on the cases in which the use of synthetic, AI-generated data can be beneficial and those where the use of AI involves a risk.



ChatGPT as a data analyst: focus on the benefits and risks

Daniela Wetzelhütter1, Dimitri Prandner2

1University of Applied Sciences Upper Austria, Austria; 2Johannes Kepler University Linz, Austria

Relevance & Research Question: Simple descriptive results can now be generated with ChatGPT in a relatively resource-efficient way, e.g. by generating syntax code at the push of a button and then supporting the user in interpreting the output. The skills required to formulate the prompts and validate the generated descriptive results are still rather "manageable" compared to those needed for more complex analysis procedures (e.g. classification analysis). Errors range from the use of inappropriate analysis methods to incorrect syntax and incorrect interpretation of results. This leads to the question: what are the benefits, and what are the risks of generating (and possibly using) incorrect results, of the new possibilities offered by AI-based data analysis?

Methods & Data: Based on this, the article focuses on the following aspects in the course of replicating already published studies (using replication datasets with available, tested syntax code):

- Errors in the generated syntax code (e.g. omitting important steps, suggesting inappropriate statistical tests)

- Number of trials required (until an acceptable result is obtained or it is determined that no 'acceptable' result can be obtained)

- Usefulness of the results (e.g., clarity of interpretation, compactness).
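The authors do not publish their checking procedure; as one concrete illustration of the first bullet (catching errors in generated syntax before running it), here is a minimal, hypothetical pre-flight check. It verifies that a generated Python snippet parses and that it actually calls the statistical function the analysis plan requires, guarding against an inappropriate test being silently substituted.

```python
import ast

def check_generated_code(code: str, required_calls: set) -> list:
    """Cheap pre-flight checks for LLM-generated analysis code:
    (1) does it parse at all, (2) does it call the statistical
    functions the analysis plan expects."""
    try:
        tree = ast.parse(code)
    except SyntaxError as e:
        return [f"syntax error: {e}"]
    # Collect method-style calls such as st.ttest_ind(...).
    called = {node.func.attr for node in ast.walk(tree)
              if isinstance(node, ast.Call)
              and isinstance(node.func, ast.Attribute)}
    missing = required_calls - called
    return [f"expected calls not found: {sorted(missing)}"] if missing else []

# A generated snippet that uses a t-test where the plan called for chi-square:
snippet = "import scipy.stats as st\nres = st.ttest_ind(a, b)"
print(check_generated_code(snippet, {"chi2_contingency"}))  # flags the missing call
```

A check like this catches only structural mistakes; validating the interpretation of the output, the third bullet above, still requires human statistical expertise.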

Results: The findings can be summarised as follows. The use of tools such as ChatGPT

(1) is convincing for generating a decision basis for the choice of analysis method.

(2) can support simple descriptive data analysis, result description and interpretation in a resource-efficient way.

(3) is advisable for generating syntax for complex procedures (e.g. MDS) only when the user has the expertise to check and adapt the various specifics, as it is very error-prone.

Added Value: The presentation focuses on the application of AI-supported data analysis in 'everyday research', which can be amateurish in nature, and emphasises the need to have the necessary skills to ensure the required quality of results. The aim of the resulting research is to develop a specific strategy for efficient 'scientific use'.



Chatbot Design as an Alternative to a Mobile First Design in Web Surveys: Data Quality and Respondent Experience

Ceyda Çavuşoğlu Deveci, Marek Fuchs, Anke Metzler

Technical University of Darmstadt, Germany

Relevance & Research Question

The increasing use of smartphones in Web surveys requires questionnaire designs optimized for smartphones (Dillman et al., 2014). Since instant messaging is an integral part of everyday communication, this study investigates whether an instant-messaging interface (chatbot design) can be used to improve respondents’ experience with Web surveys and whether data quality is comparable to that of Web surveys using a mobile-first design.

Methods & Data

In 2020, a survey on “Implications of COVID-19 on the Student’s life” was administered to a sample of 280 university students in Germany. Participants were randomly assigned to either a chatbot design or a mobile-first design. About half of the respondents in each design were invited to use their smartphone to answer the survey, while the other half were instructed to use a large-screen device.

Results
Results concerning the respondents’ experience with the chatbot design indicate that even though the perceived level of difficulty of the chatbot design was rated significantly higher compared to the mobile-first design, the chatbot design was rated significantly more inventive and entertaining. Results concerning data quality were incoherent: Overall, item-missing rates of the two Web survey designs were on equal levels. In terms of number of characters of answers to narrative open-ended questions, there was no significant difference between the two designs. By contrast, the chatbot design yielded a lower degree of differentiation in one of two grid questions. Finally, results of the overall survey duration suggest that in the chatbot design group smartphone respondents exhibited marginally shorter response times than respondents using large screen devices.
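The data-quality indicators named here (item-missing rate, differentiation in grid questions) are standard and easy to compute. The sketch below, with hypothetical toy data, shows one common way to operationalise them; treating the number of distinct categories used across a grid's items as the differentiation score (1 = straightlining) is an assumption, as the abstract does not specify the exact measure.

```python
def item_missing_rate(responses, items):
    """Share of respondent-by-item cells left unanswered (None)."""
    cells = [r.get(item) for r in responses for item in items]
    return sum(v is None for v in cells) / len(cells)

def grid_differentiation(row):
    """Number of distinct answer categories used across a grid's
    items; 1 means the respondent straightlined the whole grid."""
    answered = [v for v in row if v is not None]
    return len(set(answered))

responses = [{"q1": 4, "q2": 4, "q3": None},   # one missing cell
             {"q1": 2, "q2": 5, "q3": 3}]
print(item_missing_rate(responses, ["q1", "q2", "q3"]))  # 1 of 6 cells missing
print(grid_differentiation([4, 4, 4, 4]))                # 1 (straightlining)
print(grid_differentiation([1, 3, 5, 2]))                # 4
```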

Added Value

According to this study, the use of a chatbot design improves respondents’ experience with Web surveys, even though the new chatbot design is still rated more difficult than the more traditional mobile-first design. A chatbot design also has the potential to reduce the burden (response time) of smartphone respondents. Results concerning data quality show a so far inconsistent picture of the chatbot design that calls for a more comprehensive assessment.



 
Contact and Legal Notice · Privacy Statement
Conference: GOR 24
Conference Software: ConfTool Pro 2.8.101
© 2001–2024 by Dr. H. Weinreich, Hamburg, Germany