Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session: A1: Survey Methods Interventions 1
Time: Thursday, 22/Feb/2024, 10:45am - 11:45am
Session Chair: Almuth Lietz, Deutsches Zentrum für Integrations- und Migrationsforschung (DeZIM), Germany
Location: Seminar 1 (Room 1.01)
Rheinische Fachhochschule Köln, Campus Vogelsanger Straße, Vogelsanger Str. 295, 50825 Cologne, Germany

Presentations

Providing Appreciative Feedback to Optimizing Respondents – Is Positive Feedback in Web Surveys Effective in Preventing Non-differentiation and Speeding?

Marek Fuchs, Anke Metzler

Technical University of Darmstadt, Germany

Relevance & Research Question

Interactive feedback to non-differentiating or speeding respondents has proven effective in reducing satisficing behavior in Web surveys (Couper et al. 2017; Kunz & Fuchs 2019). In this study, we tested the effectiveness of appreciative dynamic feedback to respondents who already provide well-differentiated answers and take sufficient time on a grid question. This feedback was expected to elevate overall response quality by motivating optimizing respondents to maintain their high response quality.

Methods & Data

About N=1,900 respondents from an online access panel in Germany participated in a general population survey on “Democracy and Politics in Germany”. Two 12-item grid questions were selected for randomized field experiments. Respondents were assigned to a control group with no feedback; to experimental group 1, which received feedback when providing non-differentiated answers (experiment 1) or fast answers (experiment 2); or to experimental group 2, which received appreciative feedback when providing well-differentiated answers (experiment 1) or when taking sufficient time to answer (experiment 2). Interventions were implemented as dynamic feedback: embedded text bubbles that appeared on the question page up to four times and disappeared automatically.
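The abstract does not describe the trigger logic itself; the sketch below is a hypothetical Python illustration of how such dynamic feedback could be triggered, collapsing both experiments into a single function. All function names and thresholds are invented for the example and are not taken from the study.

```python
from statistics import pstdev

# Hypothetical trigger logic for dynamic feedback on a 12-item grid question.
# Thresholds are illustrative, not those used in the study.

DIFFERENTIATION_THRESHOLD = 0.5  # minimum std. dev. across grid answers
MIN_SECONDS_PER_ITEM = 2.0       # faster than this counts as speeding
MAX_FEEDBACK_DISPLAYS = 4        # bubble shown at most four times

def is_nondifferentiated(answers: list[int]) -> bool:
    """Flag straightlining: near-identical ratings across all grid items."""
    return pstdev(answers) < DIFFERENTIATION_THRESHOLD

def is_speeding(elapsed: float, n_items: int) -> bool:
    """Flag respondents answering faster than a per-item minimum."""
    return elapsed < MIN_SECONDS_PER_ITEM * n_items

def feedback_message(answers, elapsed, group, shown_so_far):
    """Return the text-bubble message to display, or None."""
    if shown_so_far >= MAX_FEEDBACK_DISPLAYS:
        return None
    if group == "corrective":  # experimental group 1
        if is_nondifferentiated(answers) or is_speeding(elapsed, len(answers)):
            return "Please take your time and consider each item individually."
    if group == "appreciative":  # experimental group 2
        if not is_nondifferentiated(answers) and not is_speeding(elapsed, len(answers)):
            return "Thank you for answering so thoroughly!"
    return None
```

In a live survey, checks of this kind would typically run client-side so that the bubble can appear and disappear while the respondent is still on the question page.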

Results
Results concerning non-differentiation confirm previous findings according to which dynamic feedback leads to overall higher degrees of differentiation. By contrast, appreciative feedback to well-differentiating respondents seems effective in maintaining the degree of differentiation only for respondents with particularly long response times. Dynamic feedback to speeders seems to reduce the percentage of speeders and increase the percentage of respondents exhibiting moderate response times. By contrast, appreciative feedback to slow respondents has a counterintuitive effect: it results in significantly fewer respondents with long response times and yields shorter overall response times.
Added Value

Results suggest that appreciative feedback to optimizing respondents has only limited positive effects on response quality. By contrast, we see indications of detrimental effects when praising optimizing respondents for their efforts. We speculate that appreciative feedback is perceived by optimizing respondents as an indication that they are processing the question more carefully than necessary.



Comparing various types of attention checks in web-based questionnaires: Experimental evidence from the German Internet Panel and the Swedish Citizen Panel

Joss Roßmann1, Sebastian Lundmark2, Henning Silber1, Tobias Gummer1

1GESIS - Leibniz Institute for the Social Sciences, Germany; 2SOM Institute, University of Gothenburg, Sweden

Relevance & Research Question
Survey research relies on respondents’ cooperation during interviews. Consequently, researchers have begun measuring respondents’ attentiveness to control for attention levels in their analyses (e.g., Berinsky et al., 2016). While various attentiveness measures have been suggested, there is limited experimental evidence comparing different types of attention checks with regard to their failure rates. A second issue that has received little attention is false positives when implementing attentiveness checks (Curran & Hauser, 2019): some respondents are aware that their attentiveness is being measured and deliberately choose not to comply with the instructions, leading them to be incorrectly identified as inattentive.
Methods & Data
To address these research gaps, we randomly assigned respondents to different types of attentiveness measures within the German Internet Panel (GIP), a probability-based online panel survey (N=2,900), and the non-probability online part of the Swedish Citizen Panel (SCP; N=3,800). Data were collected in the summer and winter of 2022. The attentiveness measures included instructional manipulation checks (IMC), instructed response items (IRI), bogus items, numeric counting tasks, and seriousness checks, which varied in difficulty and the effort required to pass the task. In the GIP study, respondents were randomly assigned to one of four attention measures and then reported whether they had purposefully complied with the instructions or not. The SCP study replicated and extended the GIP study in that respondents were randomly assigned to one early and one late attentiveness measure. The SCP study also featured questions about attitudes toward and comprehension of attentiveness measures.
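As a rough illustration of how pass/fail status might be scored for two of the listed check types, consider the following Python sketch; the item wordings, option codes, and pass rules are invented for the example and are not the GIP or SCP items.

```python
# Illustrative scoring of two attention-check types named in the study design.
# Option codes and pass rules are hypothetical examples.

def passes_iri(selected: int, instructed: int = 4) -> bool:
    """Instructed response item: e.g., 'Please select "agree" (option 4)'."""
    return selected == instructed

def passes_bogus_item(selected: int, bogus_option: int = 5) -> bool:
    """Bogus item: endorsing an impossible statement counts as a failure."""
    return selected != bogus_option

def failure_rate(flags: list[bool]) -> float:
    """Share of respondents failing a given check."""
    return 1 - sum(flags) / len(flags)

# Example: three respondents on an IRI instructing option 4.
print(failure_rate([passes_iri(s) for s in (4, 2, 4)]))  # ~0.33
```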
Results
Preliminary results show that failure rates varied strongly across the different attentiveness measures, and that failure rates were similar in both the GIP and SCP. Low failure rates for most types of attention checks suggest that respondents were generally attentive. The comparatively high failure rates for IMC/IRI type attention checks can be attributed to their high difficulty, serious issues with their design, and purposeful non-compliance with the instructions.
Added Value
We conclude by critically evaluating the potential of different types of attentiveness measures to improve response quality of web-based questionnaires and pointing out directions for their further development.



Evaluating methods to prevent and detect inattentive respondents in web surveys

Lukas Olbrich1,2, Joseph W. Sakshaug1,2,3, Eric Lewandowski4

1Institute for Employment Research (IAB), Germany; 2LMU Munich; 3University of Mannheim; 4NYU

Relevance & Research Question

Inattentive respondents pose a substantial threat to data quality in web surveys. In this study, we evaluate methods for preventing and detecting inattentive responding and investigate its impacts on substantive research.
Methods & Data

We use data from two large-scale non-probability surveys fielded in the US. Our analysis consists of four parts: First, we experimentally test the effect of asking respondents to commit to providing high-quality responses at the beginning of the survey on various data quality measures (attention checks, item nonresponse, break-offs, straightlining, speeding). Second, we conduct an additional experiment comparing the proportion of flagged respondents for two versions of an attention check item (instructing them to select a specific response vs. leaving the item blank). Third, we propose a timestamp-based cluster analysis approach that identifies clusters of respondents with different speeding behaviors, in particular likely inattentive respondents. Fourth, we investigate the impact of inattentive respondents on univariate, regression, and experimental analyses.
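The abstract does not spell out the clustering procedure. A minimal sketch of the general idea, assuming NumPy and scikit-learn and using k-means on log-transformed page completion times, might look like the following; the simulated data, the log transform, and k=3 are assumptions for illustration, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal sketch of timestamp-based clustering of respondents.

rng = np.random.default_rng(0)
# page_seconds: one row per respondent, one column per questionnaire page
page_seconds = rng.lognormal(mean=2.5, sigma=0.6, size=(500, 20))

# Log-transform so a few very slow pages do not dominate the distances.
features = np.log(page_seconds)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# The cluster with the lowest mean log page time is the speeding candidate,
# i.e., the group most likely to contain inattentive respondents.
cluster_means = [features[kmeans.labels_ == k].mean() for k in range(3)]
speeder_cluster = int(np.argmin(cluster_means))
print("likely speeders:", (kmeans.labels_ == speeder_cluster).sum())
```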
Results

First, our findings show that the commitment pledge had no effect on the data quality measures. As indicated by the timestamp data, many respondents likely did not even read the commitment pledge text. Second, instructing respondents to leave the item blank instead of providing a specific response significantly increased the rate of flagged respondents (by 16.8 percentage points). Third, the timestamp-based clustering approach efficiently identified clusters of likely inattentive respondents and outperformed a related method, while providing additional insights on speeding behavior throughout the questionnaire. Fourth, we show that inattentive respondents can have substantial impacts on substantive analyses.

Added Value

The results of our study may guide researchers who want to prevent or detect inattentive responding in their data. Our findings show that attention checks should be used with caution. We show that paradata-based detection techniques provide a viable alternative while putting no additional burden on respondents.



 