Conference Agenda


 
 
Session Overview
Session 4.2: Poster Session
Time: Tuesday, 01/Apr/2025, 2:30pm - 3:30pm
Location: Foyer OG


Presentations

Everybody does it sometimes: reducing Social Desirability Bias in an online survey using face-saving strategies

Emma Zaal, Yfke Ongena, John Hoeks

University of Groningen, The Netherlands

Relevance & Research Question
Online surveys are indispensable for measuring human behaviors and cognitions. However, Social Desirability Bias (SDB) - the tendency to present oneself in a favorable light - poses challenges for survey research addressing sensitive topics. Attempting to reduce SDB, we conducted an online survey experiment in which we employed so-called face-saving strategies. Such strategies aim to ease the discomfort of admitting socially undesirable behaviors by reassuring respondents that deviating from normative behaviors can be unavoidable. Still, it is unclear what mechanisms exactly drive the effectiveness of face-saving strategies and under what circumstances the method is most effective. This leads to our research question: “To what extent can SDB be reduced in an online survey integrating face-saving strategies using various operationalizations and question formats?”

Methods & Data
This online experiment was conducted with participants (N = 529) recruited door-to-door using flyers with QR codes in two neighborhoods in Groningen, the Netherlands. The survey included questions on volunteering, cohesion, disturbances, safety, and poverty. Our experimental conditions included face-saving formulations in questions and/or answers and were compared to control conditions with conventional formulations. For instance, we offered a face-saving preamble that employed downtoning (weakening a social norm, e.g., “It can sometimes be challenging to [...] It’s also very normal to find this difficult.”). We also investigated face-saving answer options by incorporating degrees of truth (reflecting partial agreement, such as “occasionally”) and credentialing (justifying norm violations, such as “no, but I don’t have time”).
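To make the contrast between conditions concrete, the minimal Python sketch below encodes question variants of the kind described above and assigns respondents to one of them. The condition names, wordings, and the assign_condition helper are hypothetical paraphrases for illustration, not the authors' actual instrument or code.

import random

# Hypothetical encoding of the experimental conditions; wordings are
# illustrative paraphrases of the examples given in the abstract.
CONDITIONS = {
    "control": {
        "preamble": None,
        "answers": ["yes", "no"],
    },
    "face_saving_preamble": {
        # Downtoning: weakening the social norm before asking the question.
        "preamble": ("It can sometimes be challenging to [...] "
                     "It's also very normal to find this difficult."),
        "answers": ["yes", "no"],
    },
    "face_saving_answers": {
        "preamble": None,
        # Degrees of truth ("occasionally") and credentialing ("no, but ...").
        "answers": ["yes", "occasionally", "no, but I don't have time", "no"],
    },
}

def assign_condition(respondent_id: int) -> str:
    """Pseudo-randomly (but reproducibly) assign a respondent to a condition."""
    return random.Random(respondent_id).choice(sorted(CONDITIONS))

for rid in range(3):
    print(rid, assign_condition(rid))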

Results
Compared to a yes/no format, face-saving answer options significantly increased the likelihood of socially undesirable responding, suggesting a reduction in SDB. A face-saving preamble did not appear to affect SDB. The poster will also present results from the other conditions tested, providing insights into the effectiveness of face-saving strategies across different question formats and formulations.
Added Value
We examined novel operationalizations of face-saving strategies across various question formats, advancing our understanding of SDB reduction and effective survey design. By refining and expanding the application of face-saving strategies, researchers and practitioners can substantially enhance data quality. This, in turn, is essential for informing evidence-based policies and interventions aimed at addressing societal issues.



Decoding Straightlining: The Role of Question Characteristics in Satisficing Response Behavior

Cagla Ezgi Yildiz1, Henning Silber2, Jessica Daikeler1, Fabienne Kraemer1, Evgenia Kapousouz3

1GESIS - Leibniz Institute for the Social Sciences, Germany; 2University of Michigan; 3NORC at the University of Chicago

Relevance & Research Question

Satisficing response behavior, including straightlining, can threaten the reliability and validity of survey data. Straightlining refers to selecting identical (or nearly identical) response options across multiple items within a question, potentially compromising data quality. While straightlining has often been interpreted as a sign of low-quality responses, there is a need to distinguish between plausible and implausible straightlining (see Schonlau and Toepoel, 2015; Reuning and Plutzer, 2020). With this research, we introduce a model that classifies straightlining into plausible and implausible patterns, offering a more nuanced understanding of the conditions under which straightlining likely indicates optimized response behavior (plausible straightlining) versus satisficing response behavior (implausible straightlining). For instance, straightlining is plausible when answering attitudinal questions with items worded in the same direction, but it becomes implausible when items are reverse-worded. This study further examines how question characteristics, including grid size, design (e.g., matrix vs. single-item formats), and straightlining plausibility, influence straightlining behavior.

Methods & Data

For our analyses, we use the German GESIS Panel, a mixed-mode (mail and online), probability-based panel study, leveraging a change in the panel’s layout strategy in 2020 that shifted multi-item questions from matrix to single-item designs and offers a unique quasi-experimental setup. We conduct multilevel logistic regression analyses to assess the effects of question design and grid size on straightlining behavior. We further conduct difference-in-differences analyses to examine the format change's effect on plausible and implausible straightlining.
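As a rough illustration of this analysis strategy (not the authors' code), the sketch below fits logistic regressions of a straightlining indicator on question design and on a difference-in-differences style interaction between the layout change and straightlining plausibility, using statsmodels. All variable and file names (straightlined, matrix_design, grid_size, post_switch, plausible, respondent_id, gesis_panel_straightlining.csv) are hypothetical stand-ins, and respondent-level clustering is used here as a simpler substitute for a full multilevel (random-intercept) specification.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per respondent x grid question.
# Columns (all illustrative): straightlined (0/1), matrix_design (0/1),
# grid_size (number of items), post_switch (0 = before, 1 = after the 2020
# layout change), plausible (0/1 expert coding), respondent_id.
df = pd.read_csv("gesis_panel_straightlining.csv")  # placeholder file name

# (1) Effect of question design and grid size on straightlining,
# approximating the multilevel structure with respondent-clustered errors.
design_model = smf.logit(
    "straightlined ~ matrix_design + grid_size", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(design_model.summary())

# (2) Difference-in-differences style specification: does the format change
# affect plausible and implausible straightlining differently?
did_model = smf.logit(
    "straightlined ~ post_switch * plausible", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(did_model.summary())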

Results

Our initial multilevel regression analyses, using data from 3,514 respondents and 18 grid questions from the waves before and after the design switch, show that matrix designs are associated with higher levels of straightlining than single-item designs. Our preliminary analyses, based on coding by five survey methodology experts, classify 22.2% of these questions as exhibiting plausible straightlining, with the remainder showing implausible patterns. Further analyses investigate how these classifications correspond to conditions under which straightlining reflects optimized versus satisficing response behavior, offering deeper insights into the role of question characteristics.
Added Value

This research enhances questionnaire design and the accurate identification of low-quality responses, addressing gaps in linking question characteristics to straightlining plausibility.



Political Communication on Social Media: Analysis of Strategies in the Bavarian State Election Campaign 2023

Jakob Berg

University of Regensburg, Germany

Relevance & Research Question
This research explores political communication on Instagram during the 2023 Bavarian state election campaign, analyzing if and how candidates’ strategies influence reach and engagement. A model categorizes key strategies, including self-presentation, party representation, thematic content, negative campaigning, and direct calls to vote. The study investigates which strategies drive Bavarian audience interaction and how Instagram serves as a platform for political mobilization.
Methods & Data
The study is based on 46,897 Instagram posts from 619 candidates, analyzed using machine learning techniques. OCR was applied to extract text embedded in images, while BERTopic identified thematic patterns. Prompt-guided classification with GPT-4o-mini enabled a nuanced categorization of posts according to predefined communication strategies. Evaluation metrics showed high classification accuracy, supporting the scalability and reliability of the methodology.
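A minimal sketch of such a pipeline, assuming the pandas, pytesseract, bertopic, and openai packages, is shown below. The input file, column names, prompt wording, and strategy labels are hypothetical, and the code illustrates the general approach rather than reproducing the study's actual implementation.

import pandas as pd
from PIL import Image
import pytesseract                      # OCR for text embedded in images
from bertopic import BERTopic           # topic modeling of post texts
from openai import OpenAI               # prompt-guided classification

# Placeholder input: one row per post with a caption and a local image path.
posts = pd.read_csv("instagram_posts.csv")

# 1) OCR: extract text embedded in post images (assumes German language data).
posts["ocr_text"] = [
    pytesseract.image_to_string(Image.open(path), lang="deu")
    for path in posts["image_path"]
]
docs = (posts["caption"].fillna("") + "\n" + posts["ocr_text"]).tolist()

# 2) Identify thematic patterns with BERTopic.
topic_model = BERTopic(language="multilingual")
topics, probs = topic_model.fit_transform(docs)

# 3) Prompt-guided classification into predefined communication strategies.
STRATEGIES = ["self-presentation", "party representation", "thematic content",
              "negative campaigning", "call to vote", "other"]
client = OpenAI()  # expects OPENAI_API_KEY in the environment

def classify(post_text: str) -> str:
    """Ask the model to assign exactly one strategy label to a post."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Assign the Instagram post to exactly one category: "
                        + ", ".join(STRATEGIES) + ". Answer with the category only."},
            {"role": "user", "content": post_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

posts["strategy"] = [classify(doc) for doc in docs]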
Results

The research confirms Instagram’s importance as a platform for political outreach in Bavaria as well, with high interaction rates and active participation from candidates across the relevant parties. While all identified communication motives were present, their frequency and effectiveness varied. Posts focusing on personal self-presentation and on political content, such as policy issues and party-related topics, were dominant. In contrast, fundraising and internal communication were rare, reflecting their lower relevance in this electoral context.

Regression analysis revealed that personal self-presentation significantly boosted engagement, while negative campaigning also had a slight positive effect, capturing audience interest in the Bavarian election context. Posts focused on key policy issues and thematic content, although frequent, had little impact on engagement rates. These results support findings in social media research that personal and informal content resonates strongly with audiences, likely because it fosters a sense of connection and relatability.
Added Value

The findings suggest that the identified communication strategies alone are insufficient to explain the varying success of political posts in terms of engagement rates and reach. Other post characteristics - such as technical factors (e.g., video format or interactive elements), emotional triggers, or the offline popularity of candidates - appear to play a more significant role in driving engagement on Instagram.



 