GOR 26 - Annual Conference & Workshops
Annual Conference - Rheinische Hochschule Cologne, Campus Vogelsanger Straße
26 - 27 February 2026
GOR Workshops - GESIS - Leibniz-Institut für Sozialwissenschaften in Cologne
25 February 2026
Conference Agenda
Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview
7.2: New insights on satisficing
Presentations
Is ‘don’t know’ good enough? Maximizing vs. satisficing decision-making tendency as a predictor of survey satisficing
Technical University Darmstadt, Germany

Relevance & Research Question
Respondents who do not go through the question–answer process optimally and instead exhibit satisficing behavior are a longstanding problem for survey researchers. A growing body of studies examines stable, time-invariant predictors of survey satisficing, such as personality traits. Such predictors make it possible to measure the potential to satisfice before actual satisficing behavior occurs. This study adds to that research by introducing a potentially stable and reliable predictor of survey satisficing: building on the notion that the question–answer process is a decision-making process and that satisficing behavior results from low-aspiration decisions, the effect of a decision-making tendency to maximize (as opposed to satisfice) on survey satisficing behavior is modelled.

Methods & Data
Data were gathered in October 2024 from 2,911 respondents in the Bilendi non-probability online access panel. Because of its short length, the generalizability of its items, and its reduced dimensionality, the modified maximizing scale by Lai (2010) is used to measure maximizing, with satisficing as its opposite pole. As dependent variables, “don’t know” responding and midpoint choosing in four single-choice questions were measured. To explain these two outcomes, two multilevel models (one per outcome) with questions at level 1 and persons at level 2 were estimated, capturing all four questions in a single model. Langer’s (2020) extension of the McKelvey & Zavoina (1975) Pseudo R² for multilevel logistic regression models was computed for the baseline models.

Results
Respondents who score medium to high on the maximizing scale exhibit significantly less “don’t know” responding and midpoint choosing than those scoring lower. However, the magnitude of the effect is small: the maximizing scale explains only a small share of the between-person variance in the examined satisficing behaviors.

Added Value
Because it affects satisficing behavior, maximizing is not merely a source of bias in survey data. The short, compact scale can be integrated into surveys to measure a respondent’s potential to satisfice before the behavior occurs, and with the help of LLMs, subsequent survey questions could be tailored to that potential.
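As a hedged illustration of the measure the abstract relies on: on the latent-variable scale of a two-level logit, the McKelvey & Zavoina Pseudo R² (in the multilevel extension attributed to Langer, 2020) relates the variance of the fixed-effects linear predictor to the total latent variance, i.e., the fixed part plus the level-2 intercept variance plus the logistic residual variance π²/3. The sketch below assumes this decomposition; the function name and inputs are illustrative and not the study's actual code.

```python
import numpy as np

LOGISTIC_RESIDUAL_VAR = np.pi ** 2 / 3  # variance of the latent logistic error term


def mz_pseudo_r2_multilevel(eta_fixed, tau00):
    """McKelvey & Zavoina Pseudo R² for a two-level logistic model with a
    random intercept, in the spirit of the multilevel extension cited above.

    eta_fixed : fixed-effects linear predictions (X @ beta_hat), one per observation
    tau00     : estimated between-person (level-2) intercept variance
    """
    var_fixed = np.var(np.asarray(eta_fixed, dtype=float))
    return var_fixed / (var_fixed + tau00 + LOGISTIC_RESIDUAL_VAR)


# Hypothetical usage: fixed-part predictions from a fitted model and an
# assumed level-2 variance estimate (both values invented for illustration).
eta = np.array([-0.8, -0.2, 0.1, 0.4, 1.3])
print(mz_pseudo_r2_multilevel(eta, tau00=0.35))  # share of latent variance explained
```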
Measuring Response Effort and Satisficing with Paradata: A Process-Based Approach in the Czech GGS II
Masaryk University, Czechia

Relevance & Research Question

A Data-Driven Approach for Detecting Speeding Behavior in Online Surveys
Robert Koch Institute, Germany

Relevance & Research Question
Online surveys give researchers a great opportunity to collect response times alongside participants’ answers. Extremely short response times, known as speeding, can indicate careless responding. Previous studies often identified speeding behavior using fixed cutoffs, for example thresholds derived from average reading speeds reported in the literature. Although empirically motivated, such thresholds overlook other important cognitive demands of the questions as well as differences between respondents. This study introduces a probabilistic mixture modeling approach to identify speeding behavior in the “Health in Germany” probability-based panel of the Robert Koch Institute (RKI). We validate this approach by comparing its classifications against established methods and by analyzing correlations with other indicators of data quality.

Methods & Data
Response times from the CAWI participants of the 2024 regular panel wave (N ≈ 30,000) were analyzed using a shifted lognormal–uniform mixture model. As is standard practice in response-time analysis, the lognormal component represents regular, attention-based responses, while the uniform component captures implausibly short response times (“speeding”). The shift parameter models the minimal realistic answering time. Hierarchical model specifications allow for variation across survey items and respondents, and fixed effects make it possible to estimate how model features such as the speeding likelihood and the minimal attention-based answering time vary as a function of participant characteristics (e.g., age) and item characteristics (e.g., number of words).

Results
Preliminary results show that the model accurately reproduces the empirical distribution of response times and allows speeding probabilities to be calculated per response, participant, and question. Speeding probabilities vary substantially across participants, suggesting that individual differences are the dominant source of speeding behavior, while item-level differences are smaller but still substantial. Further analyses, to be completed before the conference, will examine how participant and item characteristics correlate with the model parameters, how speeding behavior correlates with other indicators of data quality, and how the model performs in out-of-sample prediction.

Added Value
This study demonstrates the practical value of response-time modeling for improving data-quality diagnostics in online panels. By quantifying speeding probabilities instead of applying fixed cutoffs, the method supports more data-driven cleaning and a better understanding of response behavior.
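A minimal, non-hierarchical sketch of the shifted lognormal–uniform mixture described above, assuming the speeding component is uniform on (0, t_max] and the shift parameter is constrained below the fastest observed response. Function and parameter names are illustrative, and the study's hierarchical structure (variation across items and respondents) is deliberately omitted.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm


def fit_speeding_mixture(t):
    """Fit  p(t) = w * Uniform(0, t_max) + (1 - w) * ShiftedLogNormal(mu, sigma, delta)
    by maximum likelihood and return per-response speeding probabilities."""
    t = np.asarray(t, dtype=float)
    t_max, t_min = t.max(), t.min()

    def unpack(theta):
        a, mu, b, c = theta
        w = 1.0 / (1.0 + np.exp(-a))        # mixture weight, kept in (0, 1)
        sigma = np.exp(b)                   # lognormal spread, kept positive
        delta = t_min / (1.0 + np.exp(-c))  # shift, kept below the fastest response
        return w, mu, sigma, delta

    def nll(theta):
        w, mu, sigma, delta = unpack(theta)
        f_ln = lognorm.pdf(t, s=sigma, loc=delta, scale=np.exp(mu))  # attentive part
        return -np.sum(np.log(w / t_max + (1.0 - w) * f_ln))

    res = minimize(nll, x0=[-2.0, np.log(np.median(t)), 0.0, 0.0], method="Nelder-Mead")
    w, mu, sigma, delta = unpack(res.x)
    f_ln = lognorm.pdf(t, s=sigma, loc=delta, scale=np.exp(mu))
    p_speed = (w / t_max) / (w / t_max + (1.0 - w) * f_ln)  # posterior P(speeding | t)
    return {"w": w, "mu": mu, "sigma": sigma, "delta": delta}, p_speed


# Hypothetical usage with simulated response times (in seconds):
rng = np.random.default_rng(0)
attentive = 1.5 + rng.lognormal(mean=1.2, sigma=0.5, size=950)  # shifted lognormal
speeders = rng.uniform(0.2, 1.5, size=50)                       # implausibly fast
params, p_speed = fit_speeding_mixture(np.concatenate([attentive, speeders]))
```

Responses with a high posterior speeding probability could then be flagged for data-quality checks; a hierarchical version would additionally let mu and delta vary by item and respondent, as the abstract describes.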