Providing extra incentives for open voice answers in web surveys
Jan Karem Höhne1, Melanie Revilla2, Mick P. Couper3, Joshua Claassen1
1DZHW; Leibniz University Hannover, Germany; 2RECSM-University Pompeu Fabra, Spain; 3University of Michigan, USA
Relevance & Research Question To reduce respondent burden and increase data quality, survey researchers have started to employ open questions that request voice instead of text answers. For example, respondents can record these answers through the microphone of their survey device by pressing a recording button, which facilitates narration. Research indicates that voice answers, compared to text answers, are longer, cover more topics, and result in higher validity. However, the high item-nonresponse rate associated with voice answers remains a key challenge. Item nonresponse reduces the generalizability of results because it is frequently associated with respondent characteristics, such as the use of voice input in daily life. This study investigates the following research question: Can we increase the voice answer rate and improve data quality by introducing extra incentives when asking open-ended questions in web surveys?
Methods & Data We conducted a web survey (N = 2,271) in the Netquest opt-in online panel in Spain, categorizing respondents into three groups based on their likelihood of providing voice answers: low, medium, and high. This grouping was done on the fly based on respondents’ answers to two questions about 1) their use of voice input in daily life and 2) their trust in the confidential treatment of their answers. Within each group, respondents were randomly assigned to either receive an extra five-point incentive for providing voice answers to two open questions (extra incentive condition) or not (no extra incentive condition).
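A minimal Python sketch of this kind of on-the-fly grouping and within-group randomization is shown below. The item codings, cut-offs, and function names are illustrative assumptions, not the rules used in the study.

```python
import random

def likelihood_group(voice_use: int, trust: int) -> str:
    """Assign a voice-answer likelihood group from two screener items.
    Both items are assumed to be coded 1 (low) to 5 (high); the cut-offs
    below are hypothetical, not taken from the study."""
    score = voice_use + trust
    if score <= 4:
        return "low"
    elif score <= 7:
        return "medium"
    return "high"

def assign_condition(respondent: dict) -> tuple[str, str]:
    """Randomize to the extra-incentive condition within each likelihood group."""
    group = likelihood_group(respondent["voice_use"], respondent["trust"])
    condition = random.choice(["extra_incentive", "no_extra_incentive"])
    return group, condition

# Example: a respondent who often uses voice input but reports moderate trust
print(assign_condition({"voice_use": 5, "trust": 3}))
```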
Results As expected, respondents categorized in the high likelihood group were most likely to provide voice answers, and the extra incentive did not affect the voice answer rate in the low and high likelihood groups. However, contrary to our expectation, it reduced the voice answer rate in the medium likelihood group. The extra incentive was partially associated with indicators of higher answer quality, such as the number of topics covered.
Added Value Our study provides new insights into the effectiveness of incentives in the context of additional web survey tasks, such as providing voice answers. By focusing on extra incentives and distinguishing between three likelihood groups of respondents with respect to voice answers, it stands out from previous studies.
Alexa, Start the Interview! Respondents’ Experience with Smart Speaker Interviews Compared to Web Surveys
Anke Metzler1, Ceyda Çavuşoğlu Deveci2, Marek Fuchs1
1Technical University of Darmstadt, Germany; 2Former Postdoc at Technical University of Darmstadt, Germany
Relevance & Research Question In recent decades, there has been a shift from interviewer-administered surveys to self-administered modes. While this transition improves efficiency and reduces costs, it also raises concerns about response burden. Text-based web surveys are often perceived as tedious and burdensome due to the lack of social interaction. The growing presence of the Internet of Things (IoT) introduces new opportunities to address these limitations. Voice assistants, in particular, enable an oral, conversational mode of data collection that may enhance social engagement while maintaining moderate costs. However, little is known about how respondents perceive and experience interviews conducted through voice assistants. This study uses a smart speaker to administer survey questions and compares respondents’ experiences in the smart speaker interview with those in a web survey.
Methods & Data A laboratory experiment was conducted in summer 2025 with 245 participants recruited in the city center of Darmstadt. Using a within-subjects design, each participant completed both a web survey and a smart speaker interview containing the same questions. Immediately afterwards, each mode was evaluated separately. Paradata (e.g., response times, interruptions) and self-reported evaluation measures (e.g., flow, ease of use, and user experience) were collected to assess participants’ experiences.
Results Preliminary results indicate that respondents’ general satisfaction is significantly lower in the smart speaker interview than in the web survey. Paradata analyses show that responses take longer and interruptions are more likely in the smart speaker interview. These differences explain part of the variation in general satisfaction and also affect self-reports on the evaluation items. Longer response times and a higher likelihood of interruptions are negatively associated with perceived flow, ease of use, and user experience. Multilevel analyses suggest that the lower satisfaction with the smart speaker interview can largely be explained by these mediating factors.
Added Value At this point, smart speaker interviews are still in their infancy. This study provides initial insights into respondents’ perceptions of smart speaker interviews and identifies key aspects that require improvement to advance their development and successful implementation in the future.
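The following sketch illustrates one way such a multilevel mediation check could be set up in Python with statsmodels: a random intercept per participant, the mode effect on satisfaction estimated with and without the paradata mediators. The data frame, variable names, and toy values are assumptions for illustration only, not the authors’ analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: two rows per participant (web, smart speaker),
# with a satisfaction rating and paradata per mode (toy values).
df = pd.DataFrame({
    "participant":   [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "mode":          ["web", "speaker"] * 6,
    "satisfaction":  [8, 6, 7, 7, 9, 5, 8, 4, 6, 6, 9, 7],
    "response_time": [4.2, 9.8, 5.1, 8.7, 3.9, 10.4, 4.8, 11.0, 5.5, 9.1, 4.0, 8.3],
    "interruptions": [0, 2, 0, 1, 0, 3, 0, 2, 1, 1, 0, 2],
})

# Total effect of mode on satisfaction (random intercept per participant)
total = smf.mixedlm("satisfaction ~ mode", df, groups=df["participant"]).fit()

# Direct effect after adding the paradata mediators; a shrinking mode
# coefficient is consistent with mediation by response times and interruptions.
direct = smf.mixedlm("satisfaction ~ mode + response_time + interruptions",
                     df, groups=df["participant"]).fit()

print(total.params["mode[T.web]"], direct.params["mode[T.web]"])
```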
Do respondents show higher activity and engagement in app-based diaries compared to web-based diaries? A case study using Statistics Netherlands’ Household Budget Diary.
Danielle Remmerswaal1, Bella Struminskaya1, Barry Schouten2
1Utrecht University, The Netherlands; 2Statistics Netherlands
Relevance & Research Question
Smartphones offer opportunities for official statistics, promising improved user experience, reduced response burden, and higher data quality. We investigate whether respondents show higher activity and engagement in app-based diaries compared to traditional web-based diaries.
Methods & Data
We use Statistics Netherlands’ Household Budget Survey (HBS) as a case study. The HBS is a diary survey conducted every five years to capture household expenditure on goods and services. In 2020, Statistics Netherlands conducted a 4-week web-based diary survey (N ≈ 3,000). In 2021, they conducted a 2-week app-based diary survey with a smaller sample (N ≈ 700). We compare the participation and response behavior of respondents in the two modes. Indicators include the amount and spread of reporting.
Results
First results show that initial dropout is higher in the web diary. During the first two weeks, dropout is gradual (~1% per day) and very similar across modes. The percentage of registered respondents who submit at least one purchase and/or validate at least one day is higher among app respondents. More results on objective burden (time spent in the study) and reporting patterns will follow.
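A short Python sketch of how such an engagement indicator could be computed from respondent-day diary records is given below. The data layout and column names are hypothetical and only illustrate the comparison between modes.

```python
import pandas as pd

# Hypothetical diary paradata: one row per respondent-day, with the number of
# purchases entered and whether the day was validated (toy values).
diary = pd.DataFrame({
    "respondent":  [1, 1, 2, 2, 3, 3],
    "mode":        ["app", "app", "web", "web", "web", "web"],
    "day":         [1, 2, 1, 2, 1, 2],
    "n_purchases": [3, 0, 1, 2, 0, 0],
    "validated":   [True, True, True, False, False, False],
})

# Aggregate to one row per respondent: total purchases and any validated day
per_resp = diary.groupby(["mode", "respondent"]).agg(
    total_purchases=("n_purchases", "sum"),
    any_validation=("validated", "any"),
)

# Share of registered respondents per mode who submit at least one purchase
# and/or validate at least one day (one of the reported engagement indicators)
engaged = (per_resp["total_purchases"] > 0) | per_resp["any_validation"]
print(engaged.groupby(level="mode").mean())
```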
Added Value
App-based surveys are currently transitioning from the pilot phase to full implementation in panel surveys and official statistics. Our goal is to evaluate the expectation that app-based data collection enhances activity and user engagement in diary studies.