Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
 
Session Overview
Location: Seminar 1 (Room 1.01)
Rheinische Fachhochschule Köln, Campus Vogelsanger Straße, Vogelsanger Str. 295, 50825 Cologne, Germany
Date: Thursday, 22/Feb/2024
10:45am - 11:45am A1: Survey Methods Interventions 1
Location: Seminar 1 (Room 1.01)
Session Chair: Almuth Lietz, Deutsches Zentrum für Integrations- und Migrationsforschung (DeZIM), Germany
 

Providing Appreciative Feedback to Optimizing Respondents – Is Positive Feedback in Web Surveys Effective in Preventing Non-differentiation and Speeding?

Marek Fuchs, Anke Metzler

Technical University of Darmstadt, Germany

Relevance & Research Question

Interactive feedback to non-differentiating or speeding respondents has proven effective in reducing satisficing behavior in web surveys (Couper et al. 2017; Kunz & Fuchs 2019). In this study, we tested the effectiveness of appreciative dynamic feedback to respondents who already provide well-differentiated answers and already take sufficient time on a grid question. This feedback was expected to raise overall response quality by motivating optimizing respondents to keep their response quality high.

Methods & Data

About N=1,900 respondents from an online access panel in Germany participated in a general population survey on “Democracy and Politics in Germany”. Two 12-item grid questions were selected for randomized field experiments. Respondents were assigned either to a control group with no feedback, to experimental group 1, which received feedback when providing non-differentiated (experiment 1) or fast answers (experiment 2), or to experimental group 2, which received appreciative feedback when providing well-differentiated answers (experiment 1) or when taking sufficient time to answer (experiment 2). The interventions were implemented as dynamic feedback appearing as embedded text bubbles on the question page up to four times and disappearing automatically.
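A minimal sketch of the kind of trigger logic behind such dynamic feedback is shown below. The function names, the minimum number of answered items, and the thresholds (a spread of 0.5 scale points, two seconds per item) are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: checks that could trigger feedback bubbles in a 12-item grid.
from statistics import pstdev

def needs_differentiation_feedback(answers, min_answered=6, min_sd=0.5):
    """Flag non-differentiation: near-identical ratings across the items answered so far."""
    given = [a for a in answers if a is not None]
    if len(given) < min_answered:          # wait until enough items are answered
        return False
    return pstdev(given) < min_sd          # low spread ~ straightlining

def needs_speeding_feedback(seconds_on_page, items_answered, min_sec_per_item=2.0):
    """Flag speeding: time spent so far falls below a per-item minimum."""
    return seconds_on_page < items_answered * min_sec_per_item

# Example: a respondent who gave the same rating to 8 items within 10 seconds
answers = [4, 4, 4, 4, 4, 4, 4, 4, None, None, None, None]
print(needs_differentiation_feedback(answers))           # True
print(needs_speeding_feedback(10.0, items_answered=8))   # True (10 s < 16 s)
```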

Results
Results concerning non-differentiation confirm previous findings according to which dynamic feedback leads to overall higher degrees of differentiation. By contrast, appreciative feedback to well-differentiating respondents seems to be effective in maintaining the degree of differentiation only for respondents with particularly long response times. Dynamic feedback to speeders seems to reduce the percentage of speeders and increase the percentage of respondents exhibiting moderate response times. By contrast, appreciative feedback to slow respondents exhibits a counterintuitive effect: it results in significantly fewer respondents with long response times and yields shorter overall response times.
Added Value

Results suggest that appreciative feedback to optimizing respondents has only limited positive effects on response quality. By contrast, we see indications of deteriorating effects when praising optimizing respondents for their efforts. We speculate that appreciative feedback to optimizing respondents is perceived as an indication that they process the question more carefully than necessary.



Comparing various types of attention checks in web-based questionnaires: Experimental evidence from the German Internet Panel and the Swedish Citizen Panel

Joss Roßmann1, Sebastian Lundmark2, Henning Silber1, Tobias Gummer1

1GESIS - Leibniz Institute for the Social Sciences, Germany; 2SOM Institute, University of Gothenburg, Sweden

Relevance & Research Question
Survey research relies on respondents’ cooperation during interviews. Consequently, researchers have begun measuring respondents’ attentiveness to control for attention levels in their analyses (e.g., Berinsky et al., 2016). While various attentiveness measures have been suggested, there is limited experimental evidence comparing different types of attention checks with regard to their failure rates. A second issue that has received little attention is false positives when implementing attentiveness checks (Curran & Hauser, 2019). Some respondents are aware that their attentiveness is being measured and decide not to comply with the instructions of the attention measurement, leading to incorrect identification of inattentiveness.
Methods & Data
To address these research gaps, we randomly assigned respondents to different types of attentiveness measures within the German Internet Panel (GIP), a probability-based online panel survey (N=2900), and the non-probability online part of the Swedish Citizen Panel (SCP; N=3800). Data were collected in the summer and winter of 2022. The attentiveness measures included instructional manipulation checks (IMC), instructed response items (IRI), bogus items, numeric counting tasks, and seriousness checks, which varied in difficulty and the effort required to pass the task. In the GIP study, respondents were randomly assigned to one of four attention measures and then reported whether they purposefully complied with the instructions or not. The SCP study replicated and extended the GIP study in that respondents were randomly assigned to one early and one late attentiveness measure. The SCP study also featured questions about attitudes toward and comprehension of attentiveness measures.
Results
Preliminary results show that failure rates varied strongly across the different attentiveness measures, and that failure rates were similar in both the GIP and SCP. Low failure rates for most types of attention checks suggest that respondents were generally attentive. The comparatively high failure rates for IMC/IRI type attention checks can be attributed to their high difficulty, serious issues with their design, and purposeful non-compliance with the instructions.
Added Value
We conclude by critically evaluating the potential of different types of attentiveness measures to improve response quality of web-based questionnaires and pointing out directions for their further development.



Evaluating methods to prevent and detect inattentive respondents in web surveys

Lukas Olbrich1,2, Joseph W. Sakshaug1,2,3, Eric Lewandowski4

1Institute for Employment Research (IAB), Germany; 2LMU Munich; 3University of Mannheim; 4NYU

Relevance & Research Question

Inattentive respondents pose a substantial threat to data quality in web surveys. In this study, we evaluate methods for preventing and detecting inattentive responding and investigate its impacts on substantive research.
Methods & Data

We use data from two large-scale non-probability surveys fielded in the US. Our analysis consists of four parts: First, we experimentally test the effect of asking respondents to commit to providing high-quality responses at the beginning of the survey on various data quality measures (attention checks, item nonresponse, break-offs, straightlining, speeding). Second, we conducted an additional experiment to compare the proportion of flagged respondents for two versions of an attention check item (instructing them to select a specific response vs. leaving the item blank). Third, we propose a timestamp-based cluster analysis approach that identifies clusters of respondents who exhibit different speeding behaviors, and in particular likely inattentive respondents. Fourth, we investigate the impact of inattentive respondents on univariate, regression, and experimental analyses.
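The general idea of clustering respondents by their timestamps can be illustrated with the simplified sketch below: per-page log response times are clustered and the fastest cluster is flagged. This is a generic k-means stand-in on simulated data, not the authors' exact procedure.

```python
# Illustrative sketch: flag a "fast" cluster of respondents from page-level timestamps.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# toy data: rows = respondents, columns = log seconds spent on each of 10 pages
attentive = rng.normal(loc=3.0, scale=0.3, size=(300, 10))
speeders = rng.normal(loc=1.5, scale=0.3, size=(40, 10))
log_times = np.vstack([attentive, speeders])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(log_times)
fast_cluster = np.argmin(km.cluster_centers_.mean(axis=1))   # cluster with lowest mean log time
flagged = np.where(km.labels_ == fast_cluster)[0]
print(f"{len(flagged)} respondents flagged as likely inattentive")
```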
Results

First, our findings show that the commitment pledge had no effect on the data quality measures. As indicated by the timestamp data, many respondents likely did not even read the commitment pledge text. Second, instructing respondents to leave the item blank instead of providing a specific response significantly increased the rate of flagged respondents (by 16.8 percentage points). Third, the timestamp-based clustering approach efficiently identified clusters of likely inattentive respondents and outperformed a related method, while providing additional insights on speeding behavior throughout the questionnaire. Fourth, we show that inattentive respondents can have substantial impacts on substantive analyses.

Added Value

The results of our study may guide researchers who want to prevent or detect inattentive responding in their data. Our findings show that attention checks should be used with caution. We show that paradata-based detection techniques provide a viable alternative while putting no additional burden on respondents.

 
12:00pm - 1:15pm A2: Mixing Survey Modes
Location: Seminar 1 (Room 1.01)
Session Chair: Jessica Daikeler, GESIS, Germany
 

Navigating the Digital Shift: Integrating Web in IAB (Panel) Surveys

Jan Mackeben

Institut für Arbeitsmarkt- und Berufsforschung, Germany

Relevance & Research Question

In the realm of social and labor market research, a noteworthy transformation has unfolded over the past few years, marking a departure from conventional survey methods. Traditionally, surveys were predominantly conducted through telephone interviews or face-to-face interactions. These methods, while effective, were time-consuming and resource-intensive. However, with the rapid advancement of technology, there has been a significant paradigm shift towards utilizing online modes for data collection.

The emergence of the web mode has revolutionized the landscape of surveys, offering a more efficient and cost-effective means of gathering information. Online surveys provide researchers with a broader reach, enabling them to engage with diverse populations across geographical boundaries. Moreover, the convenience and accessibility of web-based surveys have contributed to increased respondent participation.

As we navigate the digital age, the web mode has become increasingly integral in shaping the methodologies of social and labor market research. Its versatility, speed, and ability to cater to a global audience underscore its growing importance in ensuring the accuracy and comprehensiveness of data collection in these vital fields.

Methods & Data

In this paper, we focus on the largest panel surveys conducted by the Institute for Employment Research. These include the Panel Labor Market and Social Security (PASS), the IAB Establishment Panel (IAB-EP), the Linked Personnel Panel (LPP), consisting of both employer and employee surveys, and the IAB Job Vacancy Survey. Historically, all these surveys employed traditional data collection methods. However, in recent years, they all have undergone a transition by incorporating or testing the inclusion of the web mode.
Results
In the presentation, I will provide an update on each survey's current status, illustrating how the web mode has been integrated and examining its impact on response rates and sample composition.
Added Value

The incorporation of the web mode in key Institute for Employment Research panel surveys is crucial in the digital age. This transition enhances efficiency, reduces costs, and broadens participant diversity, ensuring studies remain methodologically robust and adaptable to the evolving digital landscape.



Effect of Incentives in a mixed-mode Survey of Movers

Manuela Schmidt

University of Bonn, Germany

Relevance & Research Question

The use of incentives to reduce unit nonresponse in surveys is an established and effective practice. Prepaid incentives have been shown to increase participation rates, especially in postal surveys. As surveys keep moving online and response rates keep dropping, the use of incentives and their differential effects across survey modes need to be investigated further.

In our experiment, we investigate the effects of both survey mode and incentives on participation rates in a postal/web mixed-mode survey. In particular, we aim to answer the following questions:

i) In which sociodemographic groups do incentives work (particularly well)?

ii) Is the effect of incentives affected by survey mode?

iii) How does data quality differ between incentivized and non-incentivized participants?

Methods & Data

Our data is based on a random sample of all residents who moved from two neighborhoods of Cologne, Germany, between 2018 and 2022. Addresses were provided by the city's Office for Urban Development and Statistics. We were also provided with the age and gender of all selected residents as reported on their official registration.

For the experiment, we randomly selected 3000 persons. Of those, 2000 received postal invitations to a web survey, while 1000 received a paper questionnaire with the option to participate online. In both groups, 500 participants were randomly selected to receive a prepaid incentive of 5 euros cash with the postal invitation.

Results

Our design yielded a good response rate of around 35% overall (47% with incentives and 26% without). Over 80% of respondents participated in the online mode. As we have information on the age and gender of the whole sample, including non-responders, we will present detailed analyses of the effectiveness of incentives and their possible effect on data quality (measured by the share of “non-substantive” answers, response styles, and the amount of information provided in open-ended questions).

Added Value

With this paper, we contribute to the literature on the effect of incentives, particularly on the comparison of survey modes. As our data is based on official registration and we have reliable information on non-responders, our results on the effects of incentives are of high quality.



Mode Matters Most, Or Does It? Investigating Mode Effects in Factorial Survey Experiments

Sophie Katharina Hensgen1, Alexander Patzina2, Joe Sakshaug1,3

1Institute for Employment Research, Germany; 2University of Bamberg, Germany; 3Ludwig-Maximilians University Munich, Germany

Relevance & Research Question

Factorial survey experiments (FSEs), such as vignettes, have increased in popularity as they have proven highly advantageous for collecting opinions on sensitive topics. Generally, FSEs are conducted via self-administered interviews in order to allow participants to fully understand and assess the given scenario. However, many establishment panels, such as the BeCovid establishment panel in Germany, rely on interviewer-administered data collection (e.g., telephone interviews) but could also benefit from using FSEs when interested in collecting opinions on more sensitive topics. Thus, the question emerges whether FSEs conducted via telephone yield similar results to web-based interviews. Furthermore, it would be of great interest to know whether these modes differ in answer behavior for FSEs, such as straightlining, extreme responding, or item nonresponse.

Methods & Data

To shed light on this issue, a mode experiment was conducted in the BeCovid panel in which a random subset of telephone respondents was assigned to complete a vignette module online (versus continuing in the telephone mode). Respondents were given a set of four vignettes varying in six dimensions, followed by two follow-up questions regarding the person’s success in the application process. In addition to various descriptive analyses, we run multilevel regressions (random intercept models) to take the multiple levels into account.
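A minimal sketch of a random-intercept model of this kind, written with statsmodels on simulated data, is shown below. The variable names (cati, dim_sensitive, rating) and the data-generating values are assumptions for illustration, not the study's actual specification.

```python
# Sketch: vignette ratings nested within respondents, random intercept per respondent.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_resp, n_vignettes = 200, 4
df = pd.DataFrame({
    "respondent": np.repeat(np.arange(n_resp), n_vignettes),
    "cati": np.repeat(rng.integers(0, 2, n_resp), n_vignettes),   # assigned mode (1 = telephone)
    "dim_sensitive": rng.integers(0, 2, n_resp * n_vignettes),    # one illustrative vignette dimension
})
intercepts = rng.normal(0, 1, n_resp)                             # respondent-level variation
df["rating"] = (5 + 0.3 * df["cati"] + 0.8 * df["dim_sensitive"]
                + intercepts[df["respondent"]] + rng.normal(0, 1, len(df)))

# Fixed effects for mode and the vignette dimension; random intercept by respondent.
model = smf.mixedlm("rating ~ cati + dim_sensitive", df, groups=df["respondent"])
print(model.fit().summary())
```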

Results

The analysis shows no overall difference in the results of the random intercept model when controlling for the mode. However, there are significant differences between the modes regarding specific dimensions of the vignette, which could be described as sensitive. Furthermore, CATI shows an increase in straightlining as well as extreme responding, but no influence on the probability of acquiescence bias or central tendency bias. Lastly, respondents interviewed via telephone produce more item nonresponse.

Added Value

This study shows that conducting FSEs through telephone interviews is feasible, but is associated with certain limitations. Depending on the subject matter, these interviews might fail to accurately capture genuine opinions, instead reflecting socially accepted responses. Additionally, they may result in diminished data quality due to satisficing and inattention.

 
3:45pm - 4:45pm A3.1: Solutions for Survey Nonresponse
Location: Seminar 1 (Room 1.01)
Session Chair: Oriol J. Bosch, University of Oxford, United Kingdom
 

Does detailed information on IT-literacy help to explain nonresponse and design nonresponse adjustment weights in a probability-based online panel?

Barbara Felderer1, Jessica Herzing2

1GESIS, Germany; 2University of Bern

Relevance & Research Question

The generalizability of inferences from online panels is still challenged by the digital divide. Newer research concludes that not only individuals without Internet access are under-represented in online panels but also those who do not feel IT-literate enough to participate, which potentially leads to nonresponse bias.

Weighting methods can be used to reduce bias from nonresponse if they include characteristics that are correlated with both nonresponse and the variable(s) of interest. In our study, we assess the potential of asking nonrespondents about their IT-literacy in a nonresponse follow-up questionnaire for improving nonresponse weighting and reducing bias; a sketch of the underlying weighting logic follows the research questions below. Our research questions are:

1.) Does including information on IT-literacy collected in the recruitment survey improve nonresponse models for online panel participation compared to standard nonresponse models including socio-demographics only?

2.) Does including IT-literacy improve nonresponse adjustment?
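As a brief illustration of the weighting logic referenced above, the sketch below models response propensity from socio-demographics plus IT-literacy and uses inverse predicted probabilities as nonresponse adjustment weights. All variable names and the simulated response process are assumptions, not the GIP data.

```python
# Sketch: propensity-based nonresponse adjustment weights including IT-literacy.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000
frame = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "female": rng.integers(0, 2, n),
    "it_literacy": rng.normal(0, 1, n),   # e.g., collected in the recruitment survey
})
# toy response process that depends on IT-literacy and age
p = 1 / (1 + np.exp(-(-0.5 + 1.2 * frame["it_literacy"] - 0.01 * frame["age"])))
frame["responded"] = rng.binomial(1, p)

fit = smf.logit("responded ~ age + female + it_literacy", data=frame).fit(disp=False)
frame["propensity"] = fit.predict(frame)
respondents = frame[frame["responded"] == 1].copy()
respondents["nr_weight"] = 1 / respondents["propensity"]   # inverse-propensity weight
print(respondents["nr_weight"].describe())
```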

Methods & Data

Data were collected in the 2018 recruitment of a refreshment sample of the probability-based German Internet Panel (GIP). Recruitment was conducted by sending invitation letters for the online panel by postal mail. Sampled individuals who were not willing or able to participate in the recruitment online were asked to fill in a paper-and-pencil questionnaire about their IT-literacy. The questionnaire was experimentally fielded with the first invitation or with reminder mailings. The control group did not receive a paper questionnaire.

Results

We find IT-literacy to explain nonresponse to the GIP over and above the standard socio-demographic variables frequently used in nonresponse modeling. Nonresponse weights including measures of IT-literacy are able to reduce bias for variables of interest that are related to IT-literacy.

Added Value

Online surveys bear the risk of severe bias for any variables of interest that are connected to IT-literacy. Fielding a paper-and-pencil nonresponse follow-up survey asking about IT-literacy can help to improve nonresponse weights and reduce nonresponse bias.



Youth Nonresponse in the Understanding Society Survey: Investigating the Impact of Life Events

Camilla Salvatore, Peter Lugtig, Bella Struminskaya

Utrecht University, The Netherlands

Relevance & Research Question

Survey response rates are declining worldwide, particularly among young individuals. This trend is evident in both cross-sectional and longitudinal surveys, such as Understanding Society, where young people exhibit a higher likelihood of either missing waves or dropping out entirely.

This paper aims to explore why young individuals exhibit lower participation rates in Understanding Society. Specifically, we investigate the hypothesis that young people experience more life events, such as a change in job, a change in relationship status, or a move of house, and that it is the occurrence of such life events that is associated with a higher likelihood of not participating in the survey.

Methods & Data

The data source is Understanding Society, a mixed-mode probability-based general population panel study in the UK. We analyze individuals aged 18-44 at Understanding Society's Wave 1 and follow them until Wave 12. We consider four age groups: 18-24 (youth), 25-31 (early adulthood), 32-38 (late adulthood), and 39-45 (middle age, the reference group for comparison). In order to study the effect of life events on attrition, we apply a discrete-time multinomial hazard model. In this model, time is entered as a covariate and the outcome variable is the survey participation indicator (interview, noncontact, refusal, or other). The outcome is modeled as a function of lagged covariates, including demographics, labor market participation, qualifications, household structure and characteristics, marital status and mobility, as well as binary indicators for life event-related status changes.
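A minimal sketch of how such a discrete-time multinomial hazard model can be estimated is shown below: the panel is expanded into person-period (person-wave) rows, and a multinomial logit of the participation outcome is fitted on time and lagged covariates. The simulated data and variable names are illustrative assumptions, not the Understanding Society data.

```python
# Sketch: discrete-time multinomial hazard model on person-period data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n, waves = 500, 6
pp = pd.DataFrame({
    "person": np.repeat(np.arange(n), waves),
    "wave": np.tile(np.arange(1, waves + 1), n),          # time enters as a covariate
    "age_group": np.repeat(rng.integers(0, 4, n), waves),  # 0 = 18-24 ... 3 = 39-45
    "moved_house": rng.integers(0, 2, n * waves),          # lagged life-event indicator
})
# outcome: 0 = interview, 1 = noncontact, 2 = refusal/other (toy generating process)
logits_nc = -2.5 + 0.4 * pp["moved_house"] - 0.2 * pp["age_group"]
logits_rf = -2.0 + 0.1 * pp["wave"]
denom = 1 + np.exp(logits_nc) + np.exp(logits_rf)
u = rng.random(len(pp))
pp["status"] = np.select(
    [u < np.exp(logits_nc) / denom, u < (np.exp(logits_nc) + np.exp(logits_rf)) / denom],
    [1, 2], default=0)

hazard = smf.mnlogit("status ~ wave + C(age_group) + moved_house", data=pp).fit(disp=False)
print(hazard.summary())
```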
Results

Consistent with existing literature, our findings reveal that younger respondents, as well as those with an immigration background, lower education, and unemployment status, are less likely to participate. We also demonstrate that changes in job status and relocation contribute particularly to attrition, with age remaining a significant factor.
Added Value

As many household surveys are moving online to save costs, the findings of this study will offer valuable insights for survey organizations. This paper enriches our understanding of youth nonresponse and presents practical strategies for retaining them. This project is funded by the Understanding Society Research Data Fellowship.



Exploring incentive preferences in survey participation: How do socio-demographic factors and personal variables influence the choice of incentive?

Almuth Lietz, Jonas Köhler

Deutsches Zentrum für Integrations- und Migrationsforschung (DeZIM), Germany

Relevance & Research Question
Incentives for survey participants are commonly used to tackle declining response rates. Cash incentives in particular have been shown to be effective in increasing response rates. However, the feasibility of cash incentives for publicly funded research institutions is not always guaranteed. As a result, other forms such as vouchers or bank transfers are often used. In our study, we aim to identify the extent to which socio-demographic and personal variables influence individuals' preference for either vouchers or bank transfers. In addition, we examine differences in preferences concerning specific vouchers from different providers.

Methods & Data
We draw on data from the DeZIM.panel, a randomly drawn, offline-recruited online access panel in Germany with an oversampling of specific immigrant cohorts. Since 2022, regular panel operation has taken place with four waves per year, supplemented by quick surveys on current topics. Nine regular waves have been carried out so far. Within the surveys, we offer compensation in the form of a €10 postpaid incentive. Respondents can choose between a voucher from Amazon, Zalando, Bücher.de, or a more sustainable provider called GoodBuy; alternatively, respondents can provide us with their bank account details and we transfer the money.

Results
Analysis reveals that over half of the respondents who redeemed their incentive chose an Amazon voucher and around 40 percent preferred to receive the money by bank transfer. Only a small proportion of 7 percent chose one of the other vouchers. This pattern can be seen across all waves. Initial results of logistic regressions show a significant preference for vouchers among those with higher net incomes. Additionally, we will examine participants who, despite not redeeming their incentive, continue to participate regularly in the survey.

Added Value
Understanding which incentives work best for which target group is of great relevance when planning surveys and finding an appropriate incentive strategy.

 
5:00pm - 6:00pm A4.1: Innovation in Interviewing & Coding
Location: Seminar 1 (Room 1.01)
Session Chair: Jessica Donzowa, Max Planck Institute for Demographic Research, Germany
 

Exploring effects of life-like virtual interviewers on respondents’ answers in a smartphone survey

Jan Karem Höhne1,2, Frederick G. Conrad3, Cornelia Neuert4, Joshua Claassen1

1German Center for Higher Education Research and Science Studies (DZHW); 2Leibniz University Hannover; 3University of Michigan; 4GESIS - Leibniz Institute for the Social Sciences

Relevance & Research Question
Inexpensive and time-efficient web surveys have increasingly replaced survey interviews, especially those conducted in person. Even well-known social surveys, such as the European Social Survey, follow this trend. However, web surveys suffer from low response rates and frequently struggle to ensure that the data are of high quality. New advances in communication technology and artificial intelligence make it possible to introduce new approaches to web survey data collection. Building on these advances, we investigate web surveys in which questions are read aloud by life-like virtual interviewers and respondents answer by selecting options from rating scales, incorporating features of in-person interviews into self-administered web surveys. This has great potential to improve data quality through the creation of rapport and engagement. We address the following research question: Can we improve data quality in web surveys by programming life-like virtual interviewers to read questions aloud to respondents?
Methods & Data
For this purpose, we are currently conducting a smartphone survey (N ~ 2,000) in Germany in which respondents are randomly assigned to virtual interviewers that vary in gender (male or female) and clothing (casual or business casual) or a text-based control interface (without a virtual interviewer). We employ three questions on women’s role in the workplace and several questions for evaluating respondents’ experience with the virtual interviewers.
Results
We will examine satisficing behavior (e.g., primacy effects and speeding) and compare respondents’ evaluations of the different virtual interviewers. We will also examine the extent to which data quality may be harmed by socially desirable responding when the respondents’ gender and clothing preference match those of the virtual interviewer.
Added Value
By employing life-like virtual interviewers, researchers may be able to deploy web surveys that include the best of interviewer- and self-administered surveys. Thus, our study provides new impulses for improving data quality in web surveys.



API vs. human coder: Comparing the performance of speech-to-text transcription using voice answers from a smartphone survey

Jan Karem Höhne1,2, Timo Lenzner3

1German Center for Higher Education Research and Science Studies (DZHW); 2Leibniz University Hannover; 3GESIS - Leibniz Institute for the Social Sciences

Relevance & Research Question
New advances in information and communication technology, coupled with a steady increase in web survey participation through smartphones, provide new avenues for collecting answers from respondents. Specifically, the built-in microphones of smartphones allow survey researchers and practitioners to collect voice instead of text answers to open-ended questions. The emergence of automatic speech-to-text APIs that transcribe voice answers into text poses a promising and efficient way to make voice answers accessible to text-as-data methods. Even though various studies indicate a high transcription performance of speech-to-text APIs, these studies usually do not consider voice answers from smartphone surveys. We address the following research question: How do transcription APIs perform compared to human transcribers?
Methods & Data
In this study, we compare the performance of the Google Cloud Speech API and a human coder. We conducted a smartphone survey (N = 501) in the Forsa Omninet Panel in Germany in November 2021 including two open-ended questions with requests for voice answers. These two open questions were implemented to probe two questions from the modules “National Identity” and “Citizenship” of the German questionnaires of the International Social Survey Programme (ISSP) 2013/2014.
Results
The preliminary results indicate that the human coder provides more accurate transcriptions than the Google Cloud Speech API. However, the API is much more cost- and time-efficient than the human coder. In what follows, we determine the error rate of the transcriptions for the API and distinguish between no errors, errors that do not affect the interpretability of the transcriptions (minor errors), and errors that do affect the interpretability of the transcriptions (major errors). We also analyze the data with respect to error types, such as misspellings, word separation errors, and word transcription errors. Finally, we investigate the association between these transcription error types and respondent characteristics, such as education and gender.
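One standard way to quantify transcription performance of this kind is the word error rate (WER) of the API transcript against the human transcript as reference; the sketch below implements it with a plain word-level edit distance. The example strings are invented, and the study's own error taxonomy (minor vs. major errors, error types) is more fine-grained than this single metric.

```python
# Sketch: word error rate of a hypothesis transcript against a reference transcript.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dynamic programming over word-level edits (substitutions, insertions, deletions)
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

human = "being born in germany is very important to me"   # invented reference
api = "being born in germany is important for me"         # invented API output
print(f"WER: {word_error_rate(human, api):.2f}")           # 2 edits / 9 words = 0.22
```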
Added Value
Our study helps to evaluate the usefulness and usability of automatic speech-to-text transcription in the framework of smartphone surveys and provides empirically driven guidelines for survey researchers and practitioners.



Can life-like virtual interviewers increase the response quality of open-ended questions?

Cornelia Neuert1, Jan Höhne2, Joshua Claaßen2

1GESIS Leibniz Institute for the Social Sciences, Germany; 2DZHW; Leibniz University Hannover

Relevance & Research Question

Open-ended questions in web surveys suffer from lower data quality compared to in-person interviews, resulting in the risk of not obtaining sufficient information to answer the research question. Emerging innovations in technology and artificial intelligence (AI) make it possible to enhance the survey experience for respondents and to get closer to face-to-face interactions in web surveys. Building on these innovations, we explore the use of life-like virtual interviewers as a design aspect in web surveys that might motivate respondents and thereby improve the quality of the responses.

We investigate the question of whether a virtual interviewer can help to increase the response quality of open-ended questions.

Methods & Data

In a between-subjects design, we randomly assign respondents to four virtual interviewers and a control group without an interviewer. The interviewers vary with regard to gender and visual appearance (smart casual vs. business casual). We compare respondents’ answers to two open-ended questions embedded in a smartphone web survey with participants of an online access panel in Germany (n=2,000).

Results

The web survey will run in November 2023. After data collection, we analyze responses to the open-ended questions based on various response quality indicators (i.e., probe nonresponse, number of words, number of topics, response times).

Added Value
The study provides information on the value of implementing virtual interviewers in web surveys to improve respondents’ experience and data quality, particularly for open-ended questions.

 
Date: Friday, 23/Feb/2024
11:45am - 12:45pm A5.1: Recruiting Survey Participants
Location: Seminar 1 (Room 1.01)
Session Chair: Olga Maslovskaya, University of Southampton, United Kingdom
 

Recruiting an online panel through face-to-face and push-to-web surveys

Blanka Szeitl, Vera Messing, Ádám Stefkovics, Bence Ságvári

HUN-REN Centre for Social Sciences, Hungary

Relevance & Research Question: This presentation focuses on the difficulties and solutions related to recruiting web panels through probability-based face-to-face and push-to-web surveys. It also compares the panel composition when using two different survey modes for recruitment.

Methods & Data: As part of the ESS SUSTAIN-2 project, a web panel was recruited in 2021/22 through the face-to-face survey of ESS R10 in 12 countries. Unfortunately, the recruitment rate was low and the sample size achieved in Hungary was inadequate for further analysis. To increase the size of the web panel (CRONOS-2), the Hungarian team initiated a probability-based mixed-mode self-completion survey (push-to-web design). Respondents were sent a postal invitation to go online or complete a paper questionnaire, which was identical to the interviewer-assisted ESS R10 survey.

Results: We will present our findings on how the type of survey affects recruitment to a web panel through probability sampling. We will begin by introducing the design of the two surveys, then discuss the challenges encountered in setting up the panel, and finally compare the composition of the panel recruited through the two surveys (interviewer-assisted ESS R10 and push-to-web survey with self-completion). Our research provides valuable insight into how the type of survey and social and political environment affect recruitment to a web panel.

Added Value: This analysis focuses on the mode effect on the recruitment of participants for a scientific research panel. Our findings highlight the effect of the social and political environment, which could be used as a source of inspiration for other local studies.



Initiating Chain-Referral for Virtual Respondent-Driven Sampling – A Pilot Study with Experiments

Carina Cornesse1,2, Mariel McKone Leonard3, Julia Witton1, Julian Axenfeld1, Jean-Yves Gerlitz2, Olaf Groh-Samberg2, Sabine Zinn1

1German Institute for Economic Research; 2University of Bremen; 3German Center for Integration and Migration

Relevance & Research Question

Respondent-driven sampling (RDS) is a network sampling technique for surveying complex populations in the absence of sampling frames. The idea is simple: identify some people (“seeds”) who belong or have access to the target population, encourage them to start a survey invitation chain-referral process in their community, and ensure that every respondent can be traced back along the referral chain. But who will recruit? And whom? And which strategies help initiate the referral process?

Methods & Data

We conducted a pilot study in 2023 where we invited 5,000 panel study members to a multi-topic online survey. During the survey, we asked respondents whether they would be willing to recruit up to three of their network members. If they agreed, we asked them about their relationship with those network members as well as these people’s ages, gender, and education and provided unique survey invitation links to be shared virtually. As part of the study, we experimentally varied the RDS consent wording, information layout, and survey link sharing options. We also applied a dual incentive scheme, rewarding seeds as well as recruits.
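The referral bookkeeping described above, with unique shareable invitation links that keep every recruit traceable to a seed, can be sketched as follows. The identifiers, function names, and single-use token scheme are hypothetical illustrations, not the study's actual system.

```python
# Sketch: issue unique invitation links and trace each recruit back to the seed.
import secrets

invitations: dict[str, str] = {}   # token -> recruiter id
parents: dict[str, str] = {}       # respondent id -> recruiter id

def issue_tokens(recruiter_id: str, n: int = 3) -> list[str]:
    """Create up to three single-use survey links for a willing recruiter."""
    tokens = [secrets.token_urlsafe(8) for _ in range(n)]
    for t in tokens:
        invitations[t] = recruiter_id
    return tokens

def register_recruit(token: str, respondent_id: str) -> None:
    parents[respondent_id] = invitations.pop(token)   # single use; records the referral edge

def trace_to_seed(respondent_id: str) -> list[str]:
    """Walk the referral chain from a recruit back to the original seed."""
    chain = [respondent_id]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    return chain

tok = issue_tokens("seed_001")[0]
register_recruit(tok, "recruit_A")
print(trace_to_seed("recruit_A"))   # ['recruit_A', 'seed_001']
```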

Results

Overall, 624 initial respondents (27%) were willing to invite network members. They recruited 782 people (i.e., on average 1.25 people per seed). Recruits were mostly invited via email (46%) or WhatsApp (43%) and belonged to the seeds’ family (53%) and friends (38%). Only 20% of recruits are in contact with the seed less than once a week, suggesting recruitment mostly among close ties. We find an adequate gender balance (52% female) and representation of people with a migration background (22%) in our data, but a high share of people with college or university degrees (52%) and a high median age (52 years). The impact of the experimental design on recruitment success is negligible.

Added Value

While RDS is a promising procedure in theory, it often fails in practice. Among other challenges, this is commonly because seeds do not, or only insufficiently, start the chain-referral process. Our project shows in which target groups initiating RDS may work and to what extent UX enhancements may increase RDS success.

 
2:00pm - 3:00pm A6.1: Questionnaire Design Choices
Location: Seminar 1 (Room 1.01)
Session Chair: Julian B. Axenfeld, German Institute for Economic Research (DIW Berlin), Germany
 

Grid design in mixed device surveys: an experiment comparing four grid designs in a general Dutch population survey.

Deirdre Giesen, Maaike Kompier, Jan van den Brakel

Statistics Netherlands, The Netherlands

Relevance & Research Question
Nowadays, designing online surveys means designing for mixed device surveys. One of the challenges in designing mixed device surveys is the presentation of grid questions. In this experiment we compare various design options for grid questions. Our main research questions are: 1) To what extent do these different grid designs differ with respect to response quality and respondent satisfaction? 2) Does this differ for respondents on PCs and respondents on smartphones?
Methods & Data
In 2023, an experiment was conducted with a sample of 12,060 persons of the general Dutch population aged 16 and older. Sample units were randomly assigned to an online survey in either the standard stylesheet as currently used by Statistics Netherlands (n=2,824, 40% of the sample) or an experimental stylesheet (n=7,236, 60% of the sample).

Within the current stylesheet, half of the sample units were randomly assigned to the standard grid design as currently used (a table format for large screens and a stem-fixed, vertically scrollable format for small screens) and the other half to a general stem-fixed grid design (stem-fixed design for both large and small screens). Within the experimental stylesheet, one third of the sample was randomly assigned to either the general stem-fixed grid design, a carousel grid design (in which only one item is displayed at a time and, after an item is answered, the next item automatically ‘flies in’), or an accordion grid design (all items are presented vertically on one page, and answer options are automatically closed and unfolded after an item is answered).

Various indicators are used to assess response quality, e.g., break-off, item nonresponse, straightlining, and mid-point reporting. Respondent satisfaction is assessed with a set of evaluation questions at the end of the questionnaire.

Results
Data are currently being analyzed.

Added Value
This experiment with a general population sample adds to the knowledge from previous studies on grids, which have mainly been conducted with (access) panels.




Towards a mobile web questionnaire for the Vacation Survey: UX design challenges

Vivian Meertens, Maaike Kompier

Statistics Netherlands, Netherlands, The


Relevance & Research Question

Despite the fact that online surveys are not always fit for small screens and mobile device navigation, the number of respondents who start online surveys on mobile devices instead of a PC or laptop is still growing. Statistics Netherlands (CBS) has responded to this trend by developing and designing mixed device surveys. This study focuses on the redesign of the Vacation Survey, applying a smartphone-first approach.

The Vacation Survey is a web-only panel survey that could previously only be completed on a PC or laptop. The layered design with a master-detail approach was formatted in such a way that a large screen was needed to complete the questionnaire. Despite a warning in the invitation letter that a PC or laptop should be used to complete the questionnaire, 14.5% of first-time logins in 2023 were via smartphones, prompting a redesign with a smartphone-first approach. The study examines the applicability and understandability of the Vacation Survey’s layered design, specifically its master-detail approach, from a user experience (UX) design perspective.

Results
This study shares key findings of the qualitative UX test conducted at the CBS Userlab. It will explore how visual design aspects influence respondent behaviour on mobile devices, stressing the importance of observing human interaction when filling in a questionnaire on a mobile phone. The results emphasize the need for thoughtful UX design in mobile web questionnaires to enhance user engagement and response accuracy.

Added Value
The study provides valuable insights into the challenges and implications of transitioning social surveys to mobile devices. By discussing the necessary adaptations for a functional, user-friendly mobile questionnaire, this research contributes to the broader field of survey methodology, offering guidance for future survey designs that accommodate the growing trend of mobile device usage.



Optimising recall-based travel diaries: Lessons from the design of the Wales National Travel Survey

Eva Aizpurua, Peter Cornick, Shane Howe

National Centre for Social Research, United Kingdom

Relevance & Research Question: Recall-based travel diaries require respondents to report their travel behaviour over a period ranging from one to seven days. During this period, they are asked to indicate the start and end times and locations, modes of transport, distances, and the number of people on each trip. Depending on the mode, additional questions are asked to gather information on ticket types and costs or fuel types. Due to the specificity of the requested information and its non-centrality for most respondents, travel diaries pose a substantial burden, increasing the risk of satisficing behaviours and trip underreporting.

Methods & Data: In this presentation, we describe key decisions made during the design of the Wales National Travel Survey. This push-to-web project includes a questionnaire and a 2-day travel diary programmed into the survey.

Results: Critical aspects of these decisions include the focus of the recall (trip, activity, or location based) and the sequence of follow-up questions (interleaved vs. roster approach). Recent literature suggests that location-based diaries align better with respondents’ cognitive processes than trip-based diaries and help reduce underreporting. Therefore, a location-based travel diary was proposed with an auto-complete field to match inputs with known addresses or postcodes. Interactive maps were also proposed for user testing. While they can be particularly useful when respondents have difficulty describing locations or when places lack formal addresses, previous research warns that advanced diary features can increase drop-off rates. Regarding the follow-up sequence, due to mixed findings in the literature and limited information on the performance of these approaches in web-based travel diaries, experimentation is planned to understand how each approach performs in terms of the accuracy of the filter questions and the follow-up questions. Additionally, this presentation discusses the challenges and options for gathering distance data in recall-based travel diaries, along with learnings from the early phases of diary testing based on the application of a Questionnaire Appraisal System and cognitive/usability interviews.

Added Value: These findings offer valuable insights into the design of complex web-based surveys with multiple loops and non-standard features, extending beyond travel diaries.

 
3:15pm - 4:15pm A7.1: Survey Methods Interventions 2
Location: Seminar 1 (Room 1.01)
Session Chair: Joss Roßmann, GESIS - Leibniz Institute for the Social Sciences, Germany
 

Pushing older target persons to the web: Do we still need a paper questionnaire?

Jan-Lucas Schanze, Caroline Hahn, Oshrat Hochman

GESIS - Leibniz-Institut für Sozialwissenschaften, Germany

Relevance & Research Question
While a sequential, push-to-web mode sequence is very well established in survey research and commonly used in survey practice, many large-scale social surveys still prefer to contact older target persons with a concurrent design, offering a paper questionnaire alongside a web-based questionnaire from the first letter onwards. In this presentation, we compare the performance of a sequential design with a concurrent design for target persons older than 60 years. We analyse response rates and compare the sample compositions and distributions of key items within the resulting net samples. Ultimately, we aim to investigate whether we can push older respondents to the web and whether a paper questionnaire is still required for this age group.

Methods & Data
The data stem from the 10th round of the European Social Survey (ESS), carried out in self-completion modes (CAWI/PAPI) in 2021. In Germany, a mode-choice sequence experiment was implemented for all target persons older than 60 years. 50% of this group was invited with a push-to-web approach, with a paper questionnaire offered only in the third mailing. The control group was invited with a concurrent mode sequence, offering both modes from the beginning.

Results
Results show similar response rates for the concurrent design and the sequential design (AAPOR RR2: 38.4% vs. 37.3%). This difference is not statistically significant. In the concurrent group, 21% of the respondents answered the questionnaire online, while in the sequential group this was the case for 50% of all respondents. The resulting net samples are very comparable: looking at various demographic, socio-economic, attitudinal, and behavioural items, no significant differences were found. In contrast, elderly respondents answering online are younger, more often male, much better educated, economically better off, more politically interested, and more liberal towards immigrants than their peers answering the paper questionnaire.
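For readers interested in how such a response-rate comparison can be checked, a two-proportion z-test on the reported rates is sketched below. The group sizes are placeholders, not the study's actual sample sizes, so the printed p-value is only illustrative.

```python
# Sketch: two-proportion z-test comparing the concurrent (38.4%) and sequential (37.3%) designs.
from statsmodels.stats.proportion import proportions_ztest

n_concurrent, n_sequential = 1500, 1500             # assumed (placeholder) group sizes
respondents = [round(0.384 * n_concurrent), round(0.373 * n_sequential)]
stat, pvalue = proportions_ztest(respondents, [n_concurrent, n_sequential])
print(f"z = {stat:.2f}, p = {pvalue:.3f}")           # p > 0.05 under these assumed n's
```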

Added Value
Online questionnaires are often considered not fully appropriate for surveying the older population. This research shows that a higher share of this group can be pushed to the web without negative effects on response rate or sample composition. However, a paper questionnaire is still required to improve the sample composition.



Clarification features in web surveys: Usage and impact of “on-demand” instructions

Patricia Hadler, Timo Lenzner, Ranjit K. Singh, Lukas Schick

GESIS - Leibniz Institute for the Social Sciences, Germany

Relevance & Research Question
Web surveys offer the possibility to include additional clarifications for a survey question via info buttons that can be placed directly beside a word in the question text or next to the question. Previous research on the use of these clarifications and their impact on survey responses is scarce.
Methods & Data
Using the non-probability Bilendi panel, we randomly assigned 2,000 respondents to a condition in which they A) were presented clarifications as directly visible instructions under the question texts, B) could click/tap on clarifications via an info button next to the word the respective clarification pertained to, C) could click/tap on clarifications via an info button to the right of the respective question text, or D) received no clarifications at all. All questions used an open-ended numeric answer format, and respondents were likely to give a smaller number as a response if they read the clarification.
Results
Following the last survey question that contained a clarification, we asked respondents in conditions A) through C) whether they had clicked/tapped on or read the clarification. In addition, we measured the use of the on-demand clarifications using a client-side paradata script. Results showed that while 24% (B) and 15% (C) of respondents claimed to have clicked on the last-shown on-demand clarification, only 14% (B) and 6% (C) actually did so for at least one question with a clarification. Moreover, the responses to the survey questions did not differ significantly between the conditions with on-demand instructions (B and C) and the condition with no clarifications (D). Thus, the only way to ensure that respondents adhere to a clarification is to present it as an always visible instruction, as in condition A.
Added Value
The results demonstrate that presenting complex survey questions remains challenging. Even if additional clarification is needed by some respondents only, this clarification should be presented to all respondents, albeit with the potential disadvantage of increasing response burden. To learn more about how respondents process clarification features, we are currently carrying out a qualitative follow-up study applying cognitive interviewing.

 

 