Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).
Session Overview
6.1: Data quality and measurement error II
Presentations
Assessing Trends in Turnout Bias in Social Science Surveys: Evidence from the European Social Survey and German Survey Programs
1GESIS - Leibniz Institute for the Social Sciences, Germany; 2University of Mannheim

Relevance & Research Question
Social science surveys frequently overestimate voter turnout due to measurement and nonresponse errors, which undermine the validity of research on the causes and consequences of political disengagement. As turnout bias may differ across countries and over time, both cross-national and longitudinal comparisons are challenged. Despite these concerns, there is no comprehensive longitudinal and cross-national comparison of turnout bias. Consequently, it remains unclear to what extent turnout bias is shaped by contextual factors or by survey design. To close this gap, we examine (1) the prevalence and development, (2) the contextual factors, and (3) the survey design features associated with turnout bias in European social science surveys since 2000.

Methods & Data
We analyze data from the European Social Survey (ESS) and a unique data set of German Survey Programs (GSP) conducted between 2000 and 2023. First, we run separate OLS regression models using either absolute or relative turnout bias as the dependent variable and the year of data collection as the independent variable for each country in the ESS and for each survey program in the GSP. Second, we estimate fixed effects and mixed effects models using absolute or relative turnout bias in the ESS and GSP as the dependent variable, with contextual factors as independent variables. Third, we add variables capturing variations in survey design as independent variables (see the sketch following this abstract).

Results
Our findings reveal that the extent of turnout bias varies between countries and has increased over time, posing a significant challenge for both cross-national and longitudinal research. We identify several survey design features that could mitigate turnout bias, in line with previous literature. Moreover, we discuss methodological innovations aimed at reducing turnout bias by targeting nonvoters before or during data collection through tailored survey designs.

Added Value
The persistence of measurement and nonresponse errors, along with the lack of validated turnout data, remains a constraint for social science surveys. This study offers the first comprehensive longitudinal and cross-national comparison of turnout bias. Our results underscore the urgent need for methodological innovations to ensure the validity and comparability of data on political disengagement.
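The modeling strategy described in the Methods & Data paragraph above can be illustrated with a minimal sketch: per-country OLS trend regressions of turnout bias on the year of data collection, followed by a pooled mixed effects model with a random intercept per country. The column names, the synthetic data, and the stand-in covariates are assumptions for illustration, not the authors' actual variables.

```python
# Illustrative sketch of the turnout-bias modeling strategy (synthetic data).
# Assumed columns: country, year, survey_turnout, official_turnout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
rows = []
for country in ["DE", "FR", "PL", "SE"]:
    for year in range(2000, 2024):
        official = rng.uniform(0.55, 0.85)
        # Surveys tend to overestimate turnout; let the gap drift upward over time.
        survey = official + 0.05 + 0.002 * (year - 2000) + rng.normal(0, 0.02)
        rows.append({"country": country, "year": year,
                     "survey_turnout": survey, "official_turnout": official})
df = pd.DataFrame(rows)

# Absolute and relative turnout bias as alternative dependent variables.
df["abs_bias"] = df["survey_turnout"] - df["official_turnout"]
df["rel_bias"] = df["abs_bias"] / df["official_turnout"]

# Step 1: separate OLS trend per country (bias regressed on the year of data collection).
for country, grp in df.groupby("country"):
    trend = smf.ols("abs_bias ~ year", data=grp).fit()
    print(country, round(trend.params["year"], 4))

# Step 2: pooled mixed effects model with a random intercept per country;
# contextual and survey design covariates would be added to the right-hand side here.
mixed = smf.mixedlm("abs_bias ~ year", data=df, groups=df["country"]).fit()
print(mixed.summary())

# A fixed effects variant would replace the mixed model with country dummies:
# smf.ols("abs_bias ~ year + C(country)", data=df).fit()
```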
Validating a 6-Item Scale for Measuring Perceived Response Burden in Establishment Surveys
1IAB, Germany; 2IAB, Germany; University of Munich, Germany

Relevance & Research Question
Response burden is a significant challenge in establishment surveys, threatening data quality and survey participation. However, the field lacks validated instruments to measure perceived response burden. This study addresses this gap by developing and validating a 6-item, binary (Yes/No) response burden scale. Our central research question is whether this scale achieves measurement equivalence across different levels of objective burden (questionnaire length) and stability over time (longitudinally).

Methods & Data
We utilize data from an experiment embedded within three quarterly follow-up waves (2023-2024) of the IAB Job Vacancy Survey (IAB-JVS), a large-scale German establishment survey. Establishments (n=3,888) were randomly assigned to receive either a short (2-page) or a longer (4-page) follow-up questionnaire. We test for measurement invariance using multi-group confirmatory factor analysis (CFA) adapted for binary indicators (WLSMV estimator), following Wu and Estabrook (2016). We assess both cross-sectional invariance (between experimental groups) and longitudinal invariance (across the three waves); the ΔCFI decision rule used for these comparisons is sketched after this abstract.

Results
The scale demonstrates strong construct validity: respondents in the 4-page condition reported significantly higher perceived burden across all items (e.g., "High number of questions"). The analysis confirms full scalar invariance across the 2-page and 4-page experimental groups in each wave (e.g., Q1: ΔCFI < 0.01). This indicates that the scale measures the same latent construct equivalently regardless of objective burden. Furthermore, the scale achieved full longitudinal scalar invariance across the three waves, demonstrating its temporal stability even as quarterly questionnaire content changed.

Added Value
This study provides practitioners with a validated, concise instrument to monitor perceived burden in establishment surveys. Based on the confirmation of cross-sectional and longitudinal scalar invariance, researchers can now confidently use this scale to track burden trends over time and accurately evaluate the impact of questionnaire design interventions. We hope our work provides a reliable tool for comparative analysis, supporting efforts to improve data quality and respondent engagement.
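The ΔCFI criterion mentioned above can be illustrated with a short sketch. The multi-group CFA with a WLSMV estimator itself would be fit in dedicated SEM software; the snippet below only shows how the comparative fit index is computed from chi-square statistics and how the drop in CFI between a less constrained model and a scalar-invariance model is judged. All fit statistics are placeholders, not values from the IAB-JVS analysis.

```python
# Comparative fit index (CFI) and the delta-CFI invariance check (placeholder values).
def cfi(chi2_model: float, df_model: float, chi2_baseline: float, df_baseline: float) -> float:
    """CFI = 1 - max(chi2_M - df_M, 0) / max(chi2_M - df_M, chi2_B - df_B, 0)."""
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_model - df_model, chi2_baseline - df_baseline, 0.0)
    return 1.0 if den == 0.0 else 1.0 - num / den

# Placeholder fit statistics: independence (null) model, a configural model with
# parameters free across the 2-page and 4-page groups, and a scalar model with
# loadings and thresholds constrained to equality across groups.
baseline = (850.0, 30)
configural = (38.0, 16)
scalar = (46.0, 22)

cfi_configural = cfi(*configural, *baseline)
cfi_scalar = cfi(*scalar, *baseline)
delta_cfi = cfi_configural - cfi_scalar

# Common rule of thumb: a drop in CFI of less than 0.01 supports the constrained model.
print(f"CFI configural = {cfi_configural:.3f}, CFI scalar = {cfi_scalar:.3f}")
verdict = "supported" if delta_cfi < 0.01 else "not supported"
print(f"delta CFI = {delta_cfi:.3f} -> scalar invariance {verdict}")
```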
The effects of panel conditioning on response behavior across different cohorts: Bias in the Core Discussion Network
University of Mannheim, Germany

Research Question
Panel conditioning, that is, changes in response behavior caused by repeated survey participation, is a central methodological concern in online panels. Research has identified both positive and negative conditioning effects, but little is known about how these processes unfold in egocentric social network surveys, where name-generator items create opportunities for satisficing. In this study I ask: (1) How does repeated participation affect the likelihood of motivated misreporting in these filter questions? (2) To what extent is this relationship mediated by respondents' reported network size, that is, the number of alters named in the generator?

Methods
I use data from the 12th wave of the online probability-based LISS panel, drawing on the Core Discussion Network module, which includes a name generator and follow-up questions on alter characteristics. Panel experience serves as the independent variable and motivated misreporting in filter questions as the dependent variable, while network size serves as the mediator. I estimate causal mediation models using Poisson and logistic regression with 5,000 bootstrap resamples and control for sociodemographic and survey-evaluation variables associated with panel attrition (see the simplified sketch following this abstract).

Results
The results reveal two opposing mechanisms: a direct and an indirect effect. Indirectly, respondents with greater survey experience report larger discussion networks, which increases their likelihood of misreporting in the filter questions in order to avoid the tie-strength assessments. Directly, however, more experienced participants are less likely to engage in motivated misreporting when network size is held constant, suggesting reduced satisficing due to increased familiarity with online survey tasks. Because these pathways counteract each other, the total effect of panel experience on misreporting is small and statistically nonsignificant.

Added Value
This study demonstrates that panel conditioning in online surveys operates through simultaneous, opposing mechanisms that remain hidden to conventional response-quality diagnostics. The findings highlight the need to consider how question order, task burden, and instrument structure interact with respondent experience in modules involving name generators. By applying mediation analysis, the study provides a framework for detecting hidden behavioral mechanisms and offers practical guidance for improving the design and interpretation of longitudinal online surveys.
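The causal mediation models described in the Methods paragraph above can be sketched in simplified form: a Poisson model for the mediator (network size), a logistic model for the outcome (motivated misreporting), and a bootstrap over both. The column names, the synthetic data, the low/high experience contrast, and the plug-in use of the predicted mediator mean (rather than its full distribution, as in standard causal mediation algorithms) are illustrative assumptions, not the study's implementation; 200 resamples are used here instead of the study's 5,000 to keep the sketch fast.

```python
# Simplified mediation sketch: Poisson mediator model, logistic outcome model,
# bootstrap confidence intervals. Synthetic data and column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
experience = rng.integers(1, 13, size=n)                  # waves completed so far
age = rng.normal(50, 15, size=n)                          # stand-in control variable
network_size = rng.poisson(np.exp(0.6 + 0.04 * experience))
p = 1 / (1 + np.exp(-(-2.5 - 0.06 * experience + 0.25 * network_size)))
misreport = rng.binomial(1, p)
df = pd.DataFrame({"panel_experience": experience, "age": age,
                   "network_size": network_size, "misreport": misreport})

LO, HI = 2, 10                                            # low vs. high panel experience

def direct_indirect(data):
    """Plug-in natural direct and indirect effects on the probability scale."""
    med = smf.poisson("network_size ~ panel_experience + age", data=data).fit(disp=0)
    out = smf.logit("misreport ~ panel_experience + network_size + age",
                    data=data).fit(disp=0)
    lo, hi = data.assign(panel_experience=LO), data.assign(panel_experience=HI)
    m_lo, m_hi = med.predict(lo), med.predict(hi)          # expected network size
    # Direct effect: change experience while holding the mediator at its LO level.
    nde = (out.predict(hi.assign(network_size=m_lo))
           - out.predict(lo.assign(network_size=m_lo))).mean()
    # Indirect effect: hold experience at HI, shift the mediator from LO to HI level.
    nie = (out.predict(hi.assign(network_size=m_hi))
           - out.predict(hi.assign(network_size=m_lo))).mean()
    return nde, nie

nde, nie = direct_indirect(df)
boot = np.array([direct_indirect(df.sample(frac=1, replace=True)) for _ in range(200)])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"direct effect   = {nde:+.3f} (95% CI {ci_lo[0]:+.3f}, {ci_hi[0]:+.3f})")
print(f"indirect effect = {nie:+.3f} (95% CI {ci_lo[1]:+.3f}, {ci_hi[1]:+.3f})")
print(f"total effect    = {nde + nie:+.3f}")
```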