Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
A5.2: Detecting Undesirable Response Behavior
Time:
Friday, 23/Feb/2024:
11:45am - 12:45pm

Session Chair: Jan-Lucas Schanze, GESIS - Leibniz-Institut für Sozialwissenschaften, Germany
Location: Seminar 3 (Room 1.03/1.04)

Rheinische Fachhochschule Köln, Campus Vogelsanger Straße, Vogelsanger Str. 295, 50825 Cologne, Germany

Presentations

Who is going back and why? Using survey navigation paradata to differentiate between potential satisficers and optimizers in web surveys

Daniil Lebedev1, Peter Lugtig2, Bella Struminskaya2

1GESIS – Leibniz-Institut für Sozialwissenschaften in Mannheim, Germany; 2Utrecht University, Netherlands

Relevance & Research Question:

Survey navigation paradata presents a unique opportunity to delve into the web survey completion behavior of respondents, particularly actions like revisiting questions and potentially altering answers. Such behavior could be indicative of motivated misreporting, especially when respondents revisit filter or looping questions to modify answers and circumvent subsequent inquiries — a manifestation of satisficing behavior. Conversely, altering answers upon revisiting may also signify optimizing behavior, where respondents strive for utmost accuracy.

This study focuses on the revisiting behavior of web survey respondents, aiming to quantify its frequency, identify associated respondent characteristics, and ascertain who shortens their questionnaire through revisiting.

Methods & Data:

Using paradata from the probability-based, online-administered Generations and Gender Programme (GGP) survey in Estonia (N = 8,916), we analyze how often respondents revisit questions, the characteristics of these questions, and the actions that follow. We model the association between revisiting behavior and respondent characteristics with a zero-inflated Poisson regression and examine which respondent characteristics are associated with a higher proportion of questionnaire shortening as a result of revisiting.

Results:

We find a discernible pattern of revisiting during the survey, most prevalent immediately after filter questions (which can change the routing of the questionnaire), where almost half of respondents go back.
Contrary to our expectations, the regression analysis did not conclusively support revisiting as a sole indicator of satisficing behavior. Questionnaire size emerged as the most influential factor: larger questionnaires may burden respondents and potentially lead to motivated misreporting, a form of strong satisficing.
The observed revisiting may reflect respondents' strategies to optimize responses or to alleviate survey burden. Questionnaire complexity, together with respondent motivation and cognitive ability, plays a pivotal role in shaping revisiting behavior, particularly for immediate filters, where revisiting may shorten the questionnaire.

Added Value:

This study contributes a nuanced understanding of respondents' behavior during web survey self-completion. Utilizing paradata deepens insight into respondents' survey completion patterns and behavioral types, informing survey design and data quality management.



Socially Desirable Responding in Panel Studies – Does Repeated Interviewing Affect Answers to Sensitive Behavioral Questions?

Fabienne Kraemer

GESIS - Leibniz Institute for the Social Sciences

Relevance and Research Question:

Social desirability (SD-) bias (i.e., the tendency to report socially desirable opinions and behaviors instead of revealing true ones) is a widely known threat to response quality and the validity of self-reports. Previous studies investigating socially desirable responding in a longitudinal context provide mixed evidence on whether SD-bias increases or decreases with repeated interviewing and how these changes affect response quality in later waves. However, most studies were non-experimental and only suggestive of the underlying mechanisms of observed changes in SD-bias over time.

Methods and Data:

This study investigates socially desirable responding in panel studies using a longitudinal survey experiment comprising six panel waves. The experiment manipulated the frequency of receiving identical sensitive questions (target questions) and assigned respondents to one of three groups: one group received the target questions in each wave (fully conditioned), the second group received the target questions in the last three waves (medium conditioned), and the control group received the target questions only in the last wave of the study (unconditioned). The experiment was conducted within a German non-probability (n = 1,946) and a probability-based panel study (n = 4,660), resulting in 2x3 experimental groups in total. The analysis focuses on between-group and within-group comparisons of different sensitive behavioral measures. It further includes measures of the questions' degree of sensitivity as a moderating variable. These measures stem from an additional survey (n = 237) in which respondents rated the sensitivity of multiple attitudinal and behavioral questions. To further examine the underlying mechanisms of change, I use a measure of respondents' trust towards the survey (sponsor) and the scores of an established SD-scale.

Results:

Results will be presented at the conference in February.

Added Value:

Altogether, this study provides experimental evidence on the impact of repeated interviewing on changes in social desirability bias. It further contributes to the understanding of what causes these changes by examining different levels of exposure to identical sensitive questions and by including measures of respondents' trust towards the survey (sponsor) and their scores on an SD-scale.



Distinguishing satisficing and optimising web survey respondents using paradata

Daniil Lebedev

GESIS – Leibniz-Institut für Sozialwissenschaften in Mannheim, Germany

Relevance & Research Question
Web surveys encounter a critical challenge related to measurement error and diminishing data quality, primarily stemming from respondents' engagement in satisficing behavior. Satisficing reflects suboptimal execution of cognitive steps in the answering process. Paradata, encompassing completion time, mouse movements, and revisiting survey sections, among other metrics, serve to assess respondents' cognitive effort, multitasking tendencies, and motivated misreporting. Despite their individual usage, a comprehensive examination combining various paradata types to discern patterns of satisficing and optimizing behavior has been lacking.

This study seeks to investigate the interplay between different paradata types and data quality indicators derived from survey data, aiming to identify distinct patterns characterizing respondents' satisficing and optimizing behaviors.

Methods & Data

In a laboratory two-wave crossover experiment involving 93 students, participants were randomly assigned to either a satisficing or an optimizing condition in the first wave, with conditions reversed in the second. Participants were asked to complete a web survey in either a satisficing or an optimizing manner, and manipulation checks ensured participants' compliance with the assigned condition. The survey encompassed open-ended, factual, and matrix questions, coupled with reliable scales gauging trust, values, and other sociological and psychological measures. Paradata such as completion time, mouse movements, browser focus, reactions to warnings, scrolling, and resizing were collected using the One Click Survey (1ka.si) online software.
Results
The results revealed that respondents in the optimizing condition exhibited higher data quality compared to those in the satisficing condition, as evidenced by test-retest reliability, completion time, straightlining, and subjective cognitive load. Exploratory factor analysis was employed to scrutinize patterns of advanced paradata values in tandem, shedding light on disparities in survey completion strategies between optimizing and satisficing conditions. The study elucidates the connections between satisficing or optimizing behavior and data quality indicators derived from paradata and survey responses.
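The exploratory factor analysis of combined paradata indicators described above can be sketched roughly as follows. This is a hypothetical illustration, not the author's analysis: the indicator names and simulated values are assumptions, and only the general technique (standardize, then extract rotated factors) follows the abstract.

```python
# Sketch of an exploratory factor analysis over several paradata
# indicators at once (illustrative; not the author's code).
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 93  # sample size matching the experiment described above
# Hypothetical per-respondent paradata indicators
paradata = pd.DataFrame({
    "completion_time": rng.normal(600, 120, n),
    "mouse_distance":  rng.normal(5000, 800, n),
    "focus_losses":    rng.poisson(2, n).astype(float),
    "scroll_events":   rng.poisson(30, n).astype(float),
    "warnings_seen":   rng.poisson(1, n).astype(float),
})

# Standardize so indicators on different scales contribute equally
Z = StandardScaler().fit_transform(paradata)
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
scores = fa.fit_transform(Z)  # per-respondent factor scores
loadings = pd.DataFrame(fa.components_.T,
                        index=paradata.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))
```

Inspecting which indicators load on which rotated factor is what allows completion strategies (e.g., hurried versus attentive) to be contrasted between conditions.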

Added Value
This research advances the understanding of satisficing behavior in web surveys by analyzing diverse paradata types and uncovering distinctive patterns in respondents' behavior. The findings emphasize the potential of combined paradata to yield nuanced insights into the survey completion process, thereby enhancing overall data quality.



Conference: GOR 24
Conference Software: ConfTool Pro 2.8.101