GOR 26 - Annual Conference & Workshops
Annual Conference - Rheinische Hochschule Cologne, Campus Vogelsanger Straße
26 - 27 February 2026
GOR Workshops - GESIS - Leibniz-Institut für Sozialwissenschaften in Cologne
25 February 2026
Conference Agenda
Overview and details of the sessions of this conference.
Session Overview
4.3: Poster Session
Presentations
Different Measures, Different Conclusions? Evaluating Operationalizations of Non-Optimal Response Behavior
University of Mannheim, Germany; GESIS – Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question

Do Audio Buttons Matter? Evidence from a Web-Based Panel Recruitment Survey
GESIS – Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question
Ensuring accessibility and response quality across diverse respondent groups is crucial when recruiting participants for mixed-mode probability panels. Audio buttons (short spoken renditions of survey questions) are a potential tool to support respondents with low literacy skills, visual impairments, or other limitations that make reading survey questions difficult. Yet their effectiveness and actual uptake remain understudied. This paper examines the extent to which audio buttons are used in a web-based recruitment survey for a mixed-mode panel refreshment and assesses whether their provision influences respondent behavior, consent to panel participation, and sociodemographic differences in usage.

Methods & Data
We analyze experimental data from the FReDA Recruitment Survey 2024, in which web respondents were randomly assigned to a questionnaire with audio buttons available for all questions (n = 7,490) or to a version without audio buttons (n = 7,406). In the experimental group, the questionnaire contained 149 audio buttons. On average, respondents took 19.4 minutes to complete the survey.

Results
Overall uptake of audio buttons was low: 16.6% of respondents used an audio button at least once, and most activated them only once or twice. A smaller group (5.7%) used an audio button three or more times. Usage peaked at the first question (around 6%), while most other questions showed activation rates of 1–2%. The availability of audio buttons did not affect consent to join the panel. Initial analyses indicate that men, younger respondents, individuals born abroad or without German citizenship, those with lower educational attainment, and smartphone users were significantly more likely to use audio buttons. Respondents who activated audio buttons also spent more time completing the survey. The potential impact of audio-button use on response quality is currently being examined.

Added Value

Do respondents in a cross-sectional probability survey donate their Spotify or Google Search data? – Short answer: No
GESIS – Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question
Data donation offers valuable insights into digital trace data and online behavior, yet little is known about the willingness of respondents in probability-based, especially cross-sectional, surveys to participate. This study examines the feasibility of implementing data donation in a cross-sectional population survey and whether participation differs between requests for Google Search and Spotify data, assuming varying levels of perceived data sensitivity.

Methods & Data

Results
Donation flows show a substantial drop-off between initial interest and completed data donations. While willingness did not differ between the Google and Spotify groups (350 individuals accessed the study and 232 consented after learning which platform's data would be requested), actual donations were rare: 18 in the Spotify sample and 12 in the Google sample. Preliminary post-donation survey results indicate platform-specific sensitivity perceptions: streaming services like Netflix are seen as relatively non-sensitive, whereas social media data are viewed as more private. Telegram stands out as particularly sensitive, with very low stated willingness to donate. Linking these findings with ISSP survey data (forthcoming) may indicate whether such patterns relate to political or ideological differences. At GOR, we will present further analyses of willingness to participate by attitudes towards digital technology and privacy concerns.

Added Value
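To make the reported drop-off concrete, here is a minimal sketch that computes stage-to-stage conversion from the three counts given in the abstract (350 accessed, 232 consented, 18 + 12 = 30 completed donations); the stage labels are paraphrases, not the study's own terminology:

```python
# Conversion at each stage of the donation funnel, using only the
# counts reported in the abstract; stage labels are illustrative.
stages = [
    ("accessed the study", 350),
    ("consented after learning the platform", 232),
    ("completed a donation (18 Spotify + 12 Google)", 30),
]

label0, prev = stages[0]
print(f"{label0}: {prev}")
for label, n in stages[1:]:
    print(f"{label}: {n} ({n / prev:.1%} of previous stage)")
    prev = n

# Overall conversion from first access to a completed donation.
print(f"overall: {stages[-1][1] / stages[0][1]:.1%}")  # 30 / 350 ≈ 8.6%
```

The numbers illustrate the abstract's point: most attrition happens not at consent (about 66% consent once the platform is revealed) but between consenting and actually completing the donation (about 13%).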
How Processing Decisions Influence the Measurement of News Consumption with Web-Tracking Data
University of Mannheim, Germany

Relevance & Research Question
Web-tracking technology is increasingly adopted in social science research to study online behavior because of its potential to collect granular individual-level data in situ and unobtrusively. Although web-tracking is a promising approach for accurately measuring online behavior, recent research has highlighted processing error as an additional error source. This line of research demonstrated that plausible and defensible choices across various processing procedures can transform raw web-tracking data differently. However, previous research has examined the effects of these processing choices only in isolation. To date, no study has systematically examined how processing decisions may jointly transform web-tracking data. Addressing this research gap is important because a series of data-processing procedures is often required to convert raw web-tracking data into the measure of interest. Using online news consumption as an exemplary variable, I ask: Do different combinations of web-tracking data processing decisions lead to different distributions of online news consumption?

Methods & Data
To address this research question, I will conduct a multiverse analysis that systematically examines the distribution of online news consumption generated from a sequence of reasonable data-processing procedures. The analysis will be based on the PINCET dataset (Bach et al., 2023), which contains multi-wave survey data and web-tracking data from German adults collected between July and December 2021 on both PCs (N = 1,863) and mobile devices (N = 1,708).

Results
Building on Clemm von Hohenberg et al. (2024), I identify a five-step processing pipeline, along with the reasonable processing options at each decision point, to transform the raw web-tracking data into measures of online news consumption: 1) defining visit duration (5 options), 2) de-duplication (4 options), 3) classification of URLs (3 options), 4) handling missing data (3 options), and 5) operationalisation of news media consumption (2 options). This results in a multiverse of 5 × 4 × 3 × 3 × 2 = 360 datasets. Correlations between all online news consumption variables across the dataset multiverse will be computed.

Added Value
This paper aims to enhance the transparency of research using digital behavioral data by providing empirical evidence on how researcher degrees of freedom affect the measurement of web-tracking data.
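The 360-dataset multiverse in the preceding abstract follows mechanically from crossing the options at each of the five decision points. Here is a minimal sketch of how such a decision grid could be enumerated; the option labels are placeholders, since the abstract gives only the number of options per step, not their names:

```python
from itertools import product

# Decision points and option counts as reported in the abstract;
# the option labels themselves are illustrative placeholders.
pipeline = {
    "visit_duration":     [f"duration_rule_{i}" for i in range(1, 6)],  # 5 options
    "deduplication":      [f"dedup_rule_{i}" for i in range(1, 5)],     # 4 options
    "url_classification": [f"classifier_{i}" for i in range(1, 4)],     # 3 options
    "missing_data":       [f"missing_rule_{i}" for i in range(1, 4)],   # 3 options
    "operationalisation": [f"news_measure_{i}" for i in range(1, 3)],   # 2 options
}

# Every path through the pipeline defines one dataset in the multiverse.
universes = list(product(*pipeline.values()))
assert len(universes) == 5 * 4 * 3 * 3 * 2 == 360

# Each specification maps a decision point to the option chosen there.
specs = [dict(zip(pipeline, u)) for u in universes]
print(len(specs), specs[0])
```

In a multiverse analysis each of these 360 specifications would be applied to the raw tracking data, and the resulting news-consumption measures compared, e.g., via the correlation matrix the abstract announces.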
The Instagram Reality Check: Measuring the Accuracy of Self-Reported Social Media Behavior
MZES - University of Mannheim, Germany

Relevance & Research Question
Prior research investigating the accuracy of self-reported online behavior has focused primarily on general usage measures, such as the usage frequency of a social media platform, often neglecting the self-reported accuracy of specific activities (e.g., posting, commenting, or liking). Self-reporting accuracy may differ substantially between general usage and specific behaviors, yet this distinction remains unexplored in survey research. Filling this gap, we compare self-reports against actual behavior to examine the accuracy of self-reports for both general usage frequency and specific behaviors.

Methods & Data
We test how respondents' individual characteristics and Instagram usage influence misreporting, and how question framing influences reporting accuracy, through a 2×2 survey experiment that varies the reference period (last week vs. typical week) and the response scale (numeric vs. vague quantifier labels). Our pre-registered analysis draws on data from over 400 participants from a probability-based German online panel. Participants self-reported their Instagram use in a web survey and provided actual usage data through data donation.

Results
Data collection finished at the end of November 2025, and we will provide our results by January 2026. We hypothesize that respondents underreport their general platform usage frequency and that misreporting will also occur for specific behaviors (e.g., posting, commenting, or liking). We further hypothesize that the accuracy of self-reports will vary depending on platform usage frequency and engagement in specific behaviors. Finally, we predict that questions regarding a typical week will yield more accurate estimates than those regarding the last week, and that vague quantifier labels will perform at least as well as numeric labels.

Added Value
Our study replicates previous findings and extends them to specific online behavior. First, we help researchers assess the validity of self-reported online behaviors. Second, through our survey experiment testing different reference periods, we offer insights into how to accurately inquire about specific online behaviors. Third, we illustrate the potential of data donation to gather fine-grained data on individual behaviors that participants might be unable to report accurately.
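The accuracy comparison described above reduces to setting each self-report against the corresponding count derived from the donated data. A minimal sketch under that assumption; the column names and example values are invented for illustration and are not taken from the study:

```python
import pandas as pd

# Illustrative comparison of self-reports with donated Instagram data;
# respondents, counts, and column names are hypothetical.
df = pd.DataFrame({
    "respondent":     [1, 2, 3],
    "reported_posts": [2, 0, 5],  # self-reported posts last week
    "donated_posts":  [4, 0, 5],  # posts counted in the donated data
})

# Signed misreporting per respondent: negative = underreporting.
df["misreport"] = df["reported_posts"] - df["donated_posts"]
share_accurate = (df["misreport"] == 0).mean()

print(df)
print(f"exactly accurate: {share_accurate:.0%}")
```

The same signed-difference measure could be computed per activity type (posting, commenting, liking) and compared across the four experimental cells of the 2×2 design.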