GOR 26 - Annual Conference & Workshops
Annual Conference - Rheinische Hochschule Cologne, Campus Vogelsanger Straße
26 - 27 February 2026
GOR Workshops - GESIS - Leibniz-Institut für Sozialwissenschaften in Cologne
25 February 2026
Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview

Session 5.1: Data quality and measurement error I

Presentations
I misbehave, but only once in a while: How face-saving strategies can reduce socially desirable responding in online survey research
University of Groningen, The Netherlands

Relevance & Research Question:

'And Yet…': The Effectiveness of Probing Questions in Reducing Item Nonresponse to Financial Questions
NRU HSE, Russian Federation

Relevance & Research Question: High rates of item nonresponse to questions on income and expenditures compromise data quality, leading to sampling bias and limiting the generalizability of findings. This study investigates the effectiveness of follow-up (probing) questions in reducing nonresponse to financial questions within a Russian context and identifies the profiles of non-respondents.

Methods & Data: Using data from the 6th wave of HSE University's "Economic Behavior of Households" survey (N = 6,000), the analysis employs a two-stage approach combining the Random Forest method with the Boruta algorithm and logistic regressions.

Results: Probing questions converted 36-41% of initial nonresponses into substantive answers, decreasing the overall nonresponse rate from 6-17% to 4-10%. Key predictors of nonresponse were unawareness of household expenditures (a 3- to 5-fold increase in odds), poor health, lack of savings, and residence in small towns. The technique proved most effective for respondents with lower education levels, no savings, and those from small towns.

Added Value: The findings demonstrate that while probing is a valuable tool, its primary mechanism is reducing cognitive complexity rather than mitigating question sensitivity. Based on this evidence, the paper offers practical recommendations for improving the design of surveys that include financial questions.
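The two-stage analysis described in the abstract above (Boruta-style feature selection with a random forest, followed by logistic regression on the confirmed predictors) can be sketched roughly as follows. This is a minimal illustration on simulated data, not the authors' code: the variable setup, effect sizes, and use of scikit-learn are all assumptions, and Boruta's shadow-feature decision rule is reduced to a single comparison for brevity.

```python
# Illustrative two-stage screening: (1) Boruta-style selection that keeps
# only features whose random-forest importance beats the best shuffled
# "shadow" copy, (2) logistic regression on the retained predictors.
# All data and names here are simulated/hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 6))                    # 6 candidate predictors
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1]          # only 2 are informative
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Stage 1: append shuffled "shadow" features and compare importances.
shadows = rng.permuted(X, axis=0)              # break feature-outcome link
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(np.hstack([X, shadows]), y)
real_imp = rf.feature_importances_[:6]
shadow_max = rf.feature_importances_[6:].max()
keep = np.where(real_imp > shadow_max)[0]      # Boruta's core decision rule

# Stage 2: logistic regression on confirmed predictors yields odds ratios,
# the kind of "k-fold increase in odds" statement reported in the abstract.
lr = LogisticRegression().fit(X[:, keep], y)
odds_ratios = np.exp(lr.coef_[0])
print("kept features:", keep.tolist())
print("odds ratios:", np.round(odds_ratios, 2))
```

In the full Boruta algorithm this shadow comparison is repeated over many iterations with a statistical test; the single pass above only conveys the idea.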
Having ACES Up Your Sleeve: Developing and Validating Attention Checks Embedded Subtly (ACES) to Improve Identification of Inattentive Participants
Institute of Philosophy and Sociology of the Polish Academy of Sciences, Poland

Relevance & Research Question: Careless or insufficient-effort responding (C/IER) is a major threat to data quality in online surveys. Existing detection approaches face substantial limitations: indicator-based methods (e.g., straightlining indices) require subjective threshold-setting, while model-based approaches rely on strong and often unrealistic assumptions and are difficult to implement in applied research. Attention checks offer an alternative that is objective to score and easy to use. However, traditional attention checks, such as instructed-response items ("Please select strongly agree") or bogus items ("Orange is a fruit"), suffer from limited validity and high respondent reactivity (Daikeler et al., 2024; Gummer et al., 2021; Silber et al., 2022). This project addresses the lack of attention checks that mimic ordinary questionnaire items, limiting reactivity while reliably identifying inattentive respondents. The central research question is: How can we design attention checks that outperform existing approaches in validity and non-reactivity?

Methods & Data: First, 460 candidate ACES were developed across diverse domains (e.g., personality, technology, political attitudes), drawing on the concept of frequency/infrequency items (Kay & Saucier, 2023), and tested on 1,498 respondents. Items were evaluated using distributional properties and socio-demographic invariance. The selected ACES were validated in a between-subjects experiment (N = 880) comparing four conditions: (1) ACES, (2) IRIs, (3) bogus items, and (4) a no-check control group. The questionnaire included measures of perceived clarity and seriousness, as well as open-ended feedback on attention checks. The findings were replicated in another survey (N = 1,113) targeting less experienced online panel members. All studies used non-probability quota samples (age, gender, education) from Polish online research panels.

Results: ACES showed stronger associations with independent indicators of inattentiveness (e.g., response times, straightlining indices) and demonstrated higher classification accuracy (careless vs. not careless) than IRIs and bogus items. Respondents generally did not recognize ACES as attention checks, despite being highly familiar with traditional checks.

Added Value: This project delivers the first validated ACES set and provides empirical evidence that ACES improve detection of C/IER compared with other attention checks while minimizing respondent reactivity. This enables the use of ACES in probability samples or interviewer-based surveys, where traditional attention checks have not been used due to their coarseness and potential for reactivity.
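One of the independent inattentiveness indicators mentioned above, a straightlining index, can be computed in several ways; a minimal sketch of one common variant (the share of identical consecutive answers within a grid of Likert items) follows. The function name and the thresholds implied are illustrative, not taken from the study.

```python
# Minimal sketch of a straightlining index for a grid of Likert items:
# the fraction of adjacent item pairs with identical answers (0..1).
# A value near 1 suggests the respondent selected the same option
# throughout, one behavioral signal of careless responding.
def straightlining_index(responses):
    """Return the share of adjacent identical answers in `responses`."""
    if len(responses) < 2:
        return 0.0
    same = sum(a == b for a, b in zip(responses, responses[1:]))
    return same / (len(responses) - 1)

print(straightlining_index([4, 4, 4, 4, 4]))  # 1.0: pure straightlining
print(straightlining_index([1, 5, 2, 4, 3]))  # 0.0: no repeated neighbors
```

In practice such indices are combined with response times and attention-check failures rather than used alone, since some attentive respondents legitimately give uniform answers.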