Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session
A4.1: Innovation in Interviewing & Coding
Time:
Thursday, 22/Feb/2024:
5:00pm - 6:00pm

Session Chair: Jessica Donzowa, Max Planck Institute for Demographic Research, Germany
Location: Seminar 1 (Room 1.01)

Rheinische Fachhochschule Köln, Campus Vogelsanger Straße, Vogelsanger Str. 295, 50825 Cologne, Germany

Presentations

Exploring effects of life-like virtual interviewers on respondents’ answers in a smartphone survey

Jan Karem Höhne1,2, Frederick G. Conrad3, Cornelia Neuert4, Joshua Claassen1

1German Center for Higher Education Research and Science Studies (DZHW); 2Leibniz University Hannover; 3University of Michigan; 4GESIS - Leibniz Institute for the Social Sciences

Relevance & Research Question
Inexpensive and time-efficient web surveys have increasingly replaced survey interviews, especially those conducted in person. Even well-known social surveys, such as the European Social Survey, follow this trend. However, web surveys suffer from low response rates and frequently struggle to ensure high data quality. New advances in communication technology and artificial intelligence make it possible to introduce new approaches to web survey data collection. Building on these advances, we investigate web surveys in which questions are read aloud by life-like virtual interviewers and in which respondents answer by selecting options from rating scales, incorporating features of in-person interviews into self-administered web surveys. This has great potential to improve data quality by creating rapport and engagement. We address the following research question: Can we improve data quality in web surveys by employing life-like virtual interviewers that read questions aloud to respondents?
Methods & Data
For this purpose, we are currently conducting a smartphone survey (N ~ 2,000) in Germany in which respondents are randomly assigned to virtual interviewers that vary in gender (male or female) and clothing (casual or business casual) or to a text-based control interface (without a virtual interviewer). We employ three questions on women’s role in the workplace and several questions evaluating respondents’ experience with the virtual interviewers.
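The abstract does not describe how the random assignment is implemented. As a rough, hypothetical Python sketch (the condition labels and the seeding-by-respondent-ID convention are assumptions, not details from the study), equal-probability assignment to the four interviewer variants and the text-based control could look like this:

import random

# Hypothetical condition labels: 2 (gender) x 2 (clothing) interviewer
# variants plus a text-based control interface without a virtual interviewer.
CONDITIONS = [
    "interviewer_male_casual",
    "interviewer_male_business_casual",
    "interviewer_female_casual",
    "interviewer_female_business_casual",
    "text_control",
]

def assign_condition(respondent_id: int) -> str:
    # Seeding on the respondent ID keeps the assignment stable if the
    # respondent reloads the survey (an assumed convention).
    rng = random.Random(respondent_id)
    return rng.choice(CONDITIONS)

# Example: conditions for the first five respondents.
for rid in range(1, 6):
    print(rid, assign_condition(rid))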
Results
We will examine satisficing behavior (e.g., primacy effects and speeding) and compare respondents’ evaluations of the different virtual interviewers. We will also examine the extent to which data quality may be harmed by socially desirable responding when the respondents’ gender and clothing preference match those of the virtual interviewer.
Added Value
By employing life-like virtual interviewers, researchers may be able to deploy web surveys that combine the best of interviewer-administered and self-administered surveys. Thus, our study provides new impetus for improving data quality in web surveys.



API vs. human coder: Comparing the performance of speech-to-text transcription using voice answers from a smartphone survey

Jan Karem Höhne1,2, Timo Lenzner3

1German Center for Higher Education Research and Science Studies (DZHW); 2Leibniz University Hannover; 3GESIS - Leibniz Institute for the Social Sciences

Relevance & Research Question
New advances in information and communication technology, coupled with a steady increase in web survey participation through smartphones, provide new avenues for collecting answers from respondents. Specifically, the built-in microphones of smartphones allow survey researchers and practitioners to collect voice instead of text answers to open-ended questions. The emergence of automatic speech-to-text APIs that transcribe voice answers into text poses a promising and efficient way to make voice answers accessible to text-as-data methods. Even though various studies indicate a high transcription performance of speech-to-text APIs, these studies usually do not consider voice answers from smartphone surveys. We address the following research question: How do transcription APIs perform compared to human coders?
Methods & Data
In this study, we compare the performance of the Google Cloud Speech API and a human coder. We conducted a smartphone survey (N = 501) in the Forsa Omninet Panel in Germany in November 2021 that included two open-ended questions with requests for voice answers. These two open-ended questions were implemented to probe two questions from the modules “National Identity” and “Citizenship” of the German questionnaires of the International Social Survey Programme (ISSP) 2013/2014.
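The abstract names the Google Cloud Speech API but not the transcription pipeline. A minimal sketch using the google-cloud-speech Python client is given below; the audio encoding, the sampling rate, and the assumption that each voice answer is transcribed from an in-memory byte string are illustrative choices, not details from the study.

from google.cloud import speech

def transcribe_voice_answer(audio_bytes: bytes) -> str:
    # Transcribe a single German voice answer with the Google Cloud Speech API.
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,  # assumed audio format
        sample_rate_hertz=16000,                                   # assumed sampling rate
        language_code="de-DE",                                     # German voice answers
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)
    # Join the best alternative of each recognized segment into one transcript.
    return " ".join(result.alternatives[0].transcript for result in response.results)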
Results
The preliminary results indicate that the human coder provides more accurate transcriptions than the Google Cloud Speech API. However, the API is much more cost- and time-efficient than the human coder. In what follows, we determine the error rate of the API transcriptions and distinguish between no errors, errors that do not affect the interpretability of the transcriptions (minor errors), and errors that do (major errors). We also analyze the data with respect to error types, such as misspellings, word separation errors, and word transcription errors. Finally, we investigate the association between these transcription error forms and respondent characteristics, such as education and gender.
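The abstract does not define its error rate. A common way to quantify transcription accuracy against a human reference is the word error rate (WER), sketched below; this is an illustrative metric, not necessarily the minor/major error coding scheme used in the study.

def word_error_rate(reference: str, hypothesis: str) -> float:
    # WER = (substitutions + insertions + deletions) / number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for word-level edit distance.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substituted word out of four reference words gives WER = 0.25.
print(word_error_rate("das ist ein Test", "das ist kein Test"))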
Added Value
Our study helps to evaluate the usefulness and usability of automatic speech-to-text transcription in the context of smartphone surveys and provides empirically driven guidelines for survey researchers and practitioners.



Can life-like virtual interviewers increase the response quality of open-ended questions?

Cornelia Neuert1, Jan Höhne2, Joshua Claaßen2

1GESIS Leibniz Institute for the Social Sciences, Germany; 2DZHW; Leibniz University Hannover

Relevance & Research Question

Open-ended questions in web surveys suffer from lower data quality compared to in-person interviews, resulting in the risk of not obtaining sufficient information to answer the research question. Emerging innovations in technology and artificial intelligence (AI) make it possible to enhance the survey experience for respondents and to get closer to face-to-face interactions in web surveys. Building on these innovations, we explore the use of life-like virtual interviewers as a design aspect in web surveys that might motivate respondents and thereby improve the quality of the responses.

We investigate the question of whether a virtual interviewer can help to increase the response quality of open-ended questions.

Methods & Data

In a between-subjects design, we randomly assign respondents to four virtual interviewers and a control group without an interviewer. The interviewers vary with regard to gender and visual appearance (smart casual vs. business casual). We compare respondents’ answers to two open-ended questions embedded in a smartphone web survey fielded among participants of an online access panel in Germany (n = 2,000).

Results

The web survey will run in November 2023. After data collection, we will analyze responses to the open-ended questions based on various response quality indicators (i.e., probe nonresponse, number of words, number of topics, and response times).
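As a purely illustrative sketch (the indicator definitions, e.g. what counts as probe nonresponse, are assumptions rather than the study's coding rules), simple per-answer versions of some of these indicators could be computed as follows; counting topics would additionally require topic coding or modeling and is omitted here.

def response_quality_indicators(answer: str, response_time_s: float) -> dict:
    # Simple quality indicators for one open-ended (probe) answer.
    text = answer.strip()
    return {
        "probe_nonresponse": len(text) == 0,  # empty answer (assumed definition)
        "n_words": len(text.split()),
        "response_time_s": response_time_s,
    }

# Example answer in German with a hypothetical response time in seconds.
print(response_quality_indicators("Ich finde die Frage wichtig.", 23.4))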

Added Value
The study provides information on the value of implementing virtual interviewers in web surveys to improve respondents’ experience and data quality, particularly for open-ended questions.



 