General Online Research Conference 2024 (GOR 24)
Rheinische Fachhochschule Cologne - Campus Vogelsanger Straße
21 - 23 February 2024
Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads if available).
|
Session Overview | |
Location: Seminar 1 (Room 1.01) Rheinische Fachhochschule Köln Campus Vogelsanger Straße Vogelsanger Str. 295 50825 Cologne Germany |
Date: Thursday, 22/Feb/2024 | |
10:45am - 11:45am | A1: Survey Methods Interventions 1 Location: Seminar 1 (Room 1.01) Session Chair: Almuth Lietz, Deutsches Zentrum für Integrations- und Migrationsforschung (DeZIM), Germany |
|
Providing Appreciative Feedback to Optimizing Respondents – Is Positive Feedback in Web Surveys Effective in Preventing Non-differentiation and Speeding? Technical University of Darmstadt, Germany Relevance & Research Question Interactive feedback to non-differentiating or speeding respondents has proven effective in reducing satisficing behavior in Web surveys (Couper et al. 2017; Kunz & Fuchs 2019). In this study, we tested the effectiveness of appreciative dynamic feedback to respondents who already provide well-differentiated answers and who already take sufficient time in a grid question. This feedback was expected to elevate overall response quality by motivating optimizing respondents to keep response quality high. Methods & Data About N=1,900 respondents from an online access panel in Germany participated in a general population survey on “Democracy and Politics in Germany”. In this study, two 12-item grid questions were selected for randomized field experiments. Respondents were assigned either to a control group with no feedback, to experimental group 1 receiving corrective feedback when providing non-differentiated (experiment 1) or fast answers (experiment 2), or to experimental group 2 receiving appreciative feedback when providing well-differentiated answers (experiment 1) or when taking sufficient time to answer (experiment 2). Interventions were implemented as dynamic feedback appearing as embedded text bubbles on the question page up to four times and disappearing automatically. Results Results suggest that appreciative feedback to optimizing respondents has only limited positive effects on response quality. By contrast, we see indications of deteriorating effects when praising optimizing respondents for their efforts. We speculate that appreciative feedback is perceived by optimizing respondents as an indication that they process the question more carefully than necessary.
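The non-differentiation and speeding triggers behind such interventions can be sketched as simple per-grid checks. The function and thresholds below are illustrative assumptions, not the rules used in the study:

```python
from statistics import pstdev

def flags(grid_answers, response_time_sec, min_sd=0.5, min_time_sec=15):
    """Flag satisficing in a grid question: non-differentiation is measured
    as a low standard deviation across the item ratings, speeding as a
    total response time below a threshold. Thresholds are illustrative."""
    non_diff = pstdev(grid_answers) < min_sd
    speeding = response_time_sec < min_time_sec
    return {"non_differentiation": non_diff, "speeding": speeding}

# A straightliner answering 4 on all 12 items in 8 seconds triggers both flags:
print(flags([4] * 12, 8.0))
```

In a live survey, such a check would run when the respondent submits the grid page and decide whether the feedback bubble is shown.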
Comparing various types of attention checks in web-based questionnaires: Experimental evidence from the German Internet Panel and the Swedish Citizen Panel 1GESIS - Leibniz Institute for the Social Sciences, Germany; 2SOM Institute, University of Gothenburg, Sweden Relevance & Research Question Evaluating methods to prevent and detect inattentive respondents in web surveys 1Institute for Employment Research (IAB), Germany; 2LMU Munich; 3University of Mannheim; 4NYU Relevance & Research Question Inattentive respondents pose a substantial threat to data quality in web surveys. In this study, we evaluate methods for preventing and detecting inattentive responding and investigate its impact on substantive research. We use data from two large-scale non-probability surveys fielded in the US. Our analysis consists of four parts: First, we experimentally test the effect of asking respondents to commit to providing high-quality responses at the beginning of the survey on various data quality measures (attention checks, item nonresponse, break-offs, straightlining, speeding). Second, we conducted an additional experiment to compare the proportion of flagged respondents for two versions of an attention check item (instructing them to select a specific response vs. leaving the item blank). Third, we propose a timestamp-based cluster analysis approach that identifies clusters of respondents who exhibit different speeding behaviors, in particular likely inattentive respondents. Fourth, we investigate the impact of inattentive respondents on univariate, regression, and experimental analyses. Results First, our findings show that the commitment pledge had no effect on the data quality measures. As indicated by the timestamp data, many respondents likely did not even read the commitment pledge text. Second, instructing respondents to leave the item blank instead of selecting a specific response significantly increased the rate of flagged respondents (by 16.8 percentage points).
Third, the timestamp-based clustering approach efficiently identified clusters of likely inattentive respondents and outperformed a related method, while providing additional insights on speeding behavior throughout the questionnaire. Fourth, we show that inattentive respondents can have substantial impacts on substantive analyses. Added Value The results of our study may guide researchers who want to prevent or detect inattentive responding in their data. Our findings show that attention checks should be used with caution. We show that paradata-based detection techniques provide a viable alternative while putting no additional burden on respondents. |
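The timestamp-based clustering idea — grouping respondents by how fast they move through the questionnaire and treating the fastest cluster as likely inattentive — can be sketched with a minimal 1-D k-means. The feature (mean seconds per page), the deterministic initialization, and the data are illustrative assumptions, not the authors' implementation:

```python
def kmeans_1d(values, k=2, iters=50):
    """Minimal 1-D k-means: cluster respondents by a single speed score
    (e.g., mean seconds per questionnaire page). Returns (centers, labels)."""
    lo, hi = min(values), max(values)
    # Deterministic init: centers spread evenly over the observed range.
    centers = [lo + (hi - lo) * j / (k - 1) for j in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
        for j in range(k):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

# Mean seconds per page for ten respondents (made-up numbers): the
# low-time cluster is a candidate set of likely inattentive speeders.
times = [2.1, 2.4, 1.9, 2.0, 8.5, 9.1, 7.8, 8.0, 8.8, 2.2]
centers, labels = kmeans_1d(times, k=2)
fast = min(range(2), key=lambda j: centers[j])
speeders = [i for i, lab in enumerate(labels) if lab == fast]
print(speeders)  # → [0, 1, 2, 3, 9]
```

A production version would use per-page timestamps as a multivariate feature vector and choose k via a cluster-quality criterion rather than fixing k=2.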
12:00pm - 1:15pm | A2: Mixing Survey Modes Location: Seminar 1 (Room 1.01) Session Chair: Jessica Daikeler, GESIS, Germany |
|
Navigating the Digital Shift: Integrating Web in IAB (Panel) Surveys Institut für Arbeitsmarkt- und Berufsforschung, Germany Relevance & Research Question In social and labor market research, a noteworthy transformation has unfolded over the past few years, marking a departure from conventional survey methods. Traditionally, surveys were predominantly conducted through telephone interviews or face-to-face interactions. These methods, while effective, were time-consuming and resource-intensive. With the rapid advancement of technology, there has been a significant paradigm shift towards online modes of data collection. The web mode offers a more efficient and cost-effective means of gathering information: online surveys give researchers broader reach, enabling them to engage diverse populations across geographical boundaries, and their convenience and accessibility have contributed to increased respondent participation. The web mode's versatility, speed, and ability to cater to a global audience underscore its growing importance for the accuracy and comprehensiveness of data collection in these vital fields. Methods & Data In this paper, we focus on the largest panel surveys conducted by the Institute for Employment Research. These include the Panel Study Labour Market and Social Security (PASS), the IAB Establishment Panel (IAB-EP), the Linked Personnel Panel (LPP), consisting of both employer and employee surveys, and the IAB Job Vacancy Survey. Historically, all these surveys employed traditional data collection methods. In recent years, however, they have all undergone a transition, incorporating or testing the inclusion of the web mode.
Added Value The incorporation of the web mode in key Institute for Employment Research panel surveys is crucial in the digital age. This transition enhances efficiency, reduces costs, and broadens participant diversity, ensuring studies remain methodologically robust and adaptable to the evolving digital landscape. Effect of Incentives in a Mixed-Mode Survey of Movers University of Bonn, Germany Relevance & Research Question The use of incentives to reduce unit nonresponse in surveys is an established and effective practice. Prepaid incentives have been shown to increase participation rates, especially for postal surveys. As surveys keep moving online and response rates keep dropping, the use of incentives and their differential effects across survey modes need to be investigated further. In our experiment, we investigate the effects of both survey mode and incentives on participation rates in a postal/web mixed-mode survey. In particular, we aim to answer the following questions: i) In which sociodemographic groups do incentives work (particularly well)? ii) Is the effect of incentives affected by survey mode? iii) How does data quality differ between incentivized and non-incentivized participants? Methods & Data Our data is based on a random sample of all residents who moved from two neighborhoods of Cologne, Germany, between 2018 and 2022. Addresses were provided by the city's Office for Urban Development and Statistics. We were also provided with the age and gender of all selected residents as reported on their official registration. For the experiment, we randomly selected 3,000 persons. Of those, 2,000 received postal invitations to a web survey, while 1,000 received a paper questionnaire with the option to participate online. In both groups, 500 persons were randomly selected to receive a prepaid incentive of 5 euros in cash with the postal invitation. Results Our design yielded a good response rate of around 35% overall (47% with incentives and 26% without).
Over 80% participated in the online mode. As we have information on the age and gender of the whole sample, including non-responders, detailed analyses of the effectiveness of incentives and their possible effect on data quality (measured by the share of “non-substantive” answers, response styles, and the amount of information provided in open-ended questions) will be presented. Added Value With this paper, we contribute to the literature on the effect of incentives, particularly the comparison of survey modes. As our data is based on official registration records and we have reliable information on non-responders, our results on the effects of incentives are of high quality. Mode Matters Most, Or Does It? Investigating Mode Effects in Factorial Survey Experiments 1Institute for Employment Research, Germany; 2University of Bamberg, Germany; 3Ludwig-Maximilians University Munich, Germany Relevance & Research Question Factorial survey experiments (FSEs), such as vignettes, have increased in popularity as they have proven to be of great advantage when collecting opinions on sensitive topics. Generally, FSEs are conducted via self-administered interviews in order to allow participants to understand and assess the given scenario fully. However, many establishment panels, such as the BeCovid establishment panel in Germany, rely on interviewer-administered data collection (e.g., telephone interviews), but could also benefit from using FSEs when interested in collecting opinions on more sensitive topics. Thus, the question emerges whether FSEs conducted via telephone yield similar results compared to web-based interviews. Furthermore, it would be of great interest to know whether answer behavior in FSEs, such as straightlining, extreme responding, or item nonresponse, differs between these modes.
Methods & Data To shed light on this issue, a mode experiment was conducted in the BeCovid panel in which a random subset of telephone respondents was assigned to complete a vignette module online (versus continuing in the telephone mode). Respondents were given a set of four vignettes varying in six dimensions, each followed by two follow-up questions regarding the person’s success in the application process. In addition to various descriptive analyses, we estimate multilevel regressions (random intercept models) to account for the nesting of vignettes within respondents. Results The analysis shows no overall difference in the results of the random intercept model when controlling for mode. However, there are significant differences between the modes for specific dimensions of the vignette, which could be described as sensitive. Furthermore, CATI shows an increase in straightlining as well as extreme responding, but no influence on the probability of acquiescence bias or central tendency bias. Lastly, respondents interviewed via telephone produce more item nonresponse. Added Value This study shows that conducting FSEs through telephone interviews is feasible but is associated with certain limitations. Depending on the subject matter, these interviews might fail to accurately capture genuine opinions, instead reflecting socially accepted responses. Additionally, they may result in diminished data quality due to satisficing and inattention.
|
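A random intercept model of the kind used in the FSE mode experiment above can be sketched as follows; the notation is illustrative, not the authors' own:

```latex
% Rating y_{ij} of vignette j by respondent i: fixed effects for survey
% mode and the six vignette dimensions d_{kij}, a respondent-level
% random intercept u_i, and a vignette-level residual.
y_{ij} = \beta_0 + \beta_1\,\text{mode}_i
       + \sum_{k=1}^{6} \gamma_k\, d_{kij}
       + u_i + \varepsilon_{ij},
\qquad u_i \sim \mathcal{N}(0, \sigma_u^2),\quad
\varepsilon_{ij} \sim \mathcal{N}(0, \sigma_\varepsilon^2)
```

The random intercept $u_i$ absorbs respondent-level rating tendencies, so that the fixed effects are not distorted by the fact that each respondent rates several vignettes.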
3:45pm - 4:45pm | A3.1: Solutions for Survey Nonresponse Location: Seminar 1 (Room 1.01) Session Chair: Oriol J. Bosch, University of Oxford, United Kingdom |
|
Does detailed information on IT-literacy help to explain nonresponse and design nonresponse adjustment weights in a probability-based online panel? 1GESIS, Germany; 2University of Bern Relevance & Research Question The generalizability of inferences from online panels is still challenged by the digital divide. Newer research concludes that not only individuals without Internet access are under-represented in online panels, but also those who do not feel IT-literate enough to participate, potentially leading to nonresponse bias. Weighting methods can reduce nonresponse bias if they include characteristics that are correlated with both nonresponse and the variable(s) of interest. In our study, we assess the potential of asking nonrespondents about their IT-literacy in a nonresponse follow-up questionnaire for improving nonresponse weighting and reducing bias. Our research questions are: 1.) Does including information on IT-literacy collected in the recruitment survey improve nonresponse models for online panel participation compared to standard nonresponse models including socio-demographics only? 2.) Does including IT-literacy improve nonresponse adjustment? Methods & Data Data were collected in the 2018 recruitment of a refreshment sample of the probability-based German Internet Panel (GIP). Recruitment was conducted by sending invitation letters for the online panel by postal mail. Sampled individuals who were not willing or able to participate in the recruitment online were asked to fill in a paper-and-pencil questionnaire about their IT-literacy. The questionnaire was experimentally fielded in either the first invitation or the reminder mailings; the control group did not receive a paper questionnaire. Results We find that IT-literacy explains nonresponse to the GIP over and above the standard socio-demographic variables frequently used in nonresponse modeling.
Nonresponse weights including measures of IT-literacy are able to reduce bias for variables of interest that are related to IT-literacy. Added Value Online surveys bear the risk of severe bias for any variables of interest that are connected to IT-literacy. Fielding a paper-and-pencil nonresponse follow-up survey asking about IT-literacy can help to improve nonresponse weights and reduce nonresponse bias. Youth Nonresponse in the Understanding Society Survey: Investigating the Impact of Life Events Utrecht University, The Netherlands Relevance & Research Question Survey response rates are declining worldwide, particularly among young individuals. This trend is evident in both cross-sectional and longitudinal surveys, such as Understanding Society, where young people exhibit a higher likelihood of either missing waves or dropping out entirely. This paper explores why young individuals exhibit lower participation rates in Understanding Society. Specifically, we investigate the hypothesis that young people experience more life events, such as a change in job, a change in relationship status, or a move of house, and that it is the occurrence of such life events that is associated with a higher likelihood of not participating in the survey. Methods & Data The data source is Understanding Society, a mixed-mode probability-based general population panel study in the UK. We analyze individuals aged 18-44 at Understanding Society's Wave 1 and follow them until Wave 12. We consider four age groups: 18-24 (youth), 25-31 (early adulthood), 32-38 (late adulthood), and 39-45 (middle age, the reference group for comparison). To study the effect of life events on attrition, we applied a discrete-time multinomial hazard model, in which time is entered as a covariate and the outcome variable is the survey participation indicator (interview, noncontact, refusal, or other).
The outcome is modeled as a function of lagged covariates, including demographics, labor market participation, qualifications, household structure and characteristics, marital status and mobility, as well as binary indicators for life event-related status changes. Results Consistent with existing literature, our findings reveal that younger respondents, as well as those with an immigration background, lower education, and unemployment status, are less likely to participate. We also demonstrate that changes in job status and relocation contribute particularly to attrition, with age remaining a significant factor. Added Value As many household surveys are moving online to save costs, the findings of this study will offer valuable insights for survey organizations. This paper enriches our understanding of youth nonresponse and presents practical strategies for retaining young respondents. This project is funded by the Understanding Society Research Data Fellowship. Exploring incentive preferences in survey participation: How do socio-demographic factors and personal variables influence the choice of incentive? Deutsches Zentrum für Integrations- und Migrationsforschung (DeZIM), Germany Relevance & Research Question Methods & Data Results Added Value |
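A discrete-time multinomial hazard model of the kind described above can be sketched as follows; the notation is illustrative (outcome $m$ is one of noncontact, refusal, or other, with interview as the baseline, and $\mathbf{x}_{it}$ collects the lagged covariates):

```latex
% Probability that person i experiences outcome m at wave t,
% conditional on having participated up to wave t-1:
h_{im}(t) = \Pr(Y_{it} = m \mid Y_{i,t-1} = \text{interview})
= \frac{\exp\!\bigl(\alpha_{mt} + \mathbf{x}_{it}'\boldsymbol{\beta}_m\bigr)}
       {1 + \sum_{m'} \exp\!\bigl(\alpha_{m't} + \mathbf{x}_{it}'\boldsymbol{\beta}_{m'}\bigr)}
```

Here $\alpha_{mt}$ is the baseline hazard for outcome $m$ at wave $t$ (time entered as a covariate), and the model is estimated on person-period data, one row per respondent per wave at risk.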
5:00pm - 6:00pm | A4.1: Innovation in Interviewing & Coding Location: Seminar 1 (Room 1.01) Session Chair: Jessica Donzowa, Max Planck Institute für demographische Forschung, Germany |
|
Exploring effects of life-like virtual interviewers on respondents’ answers in a smartphone survey 1German Center for Higher Education Research and Science Studies (DZHW); 2Leibniz University Hannover; 3University of Michigan; 4GESIS - Leibniz Institute for the Social Sciences Relevance & Research Question API vs. human coder: Comparing the performance of speech-to-text transcription using voice answers from a smartphone survey 1German Center for Higher Education Research and Science Studies (DZHW); 2Leibniz University Hannover; 3GESIS - Leibniz Institute for the Social Sciences Relevance & Research Question Can life-like virtual interviewers increase the response quality of open-ended questions? 1GESIS Leibniz Institute for the Social Sciences, Germany; 2DZHW; Leibniz University Hannover Relevance & Research Question Open-ended questions in web surveys suffer from lower data quality compared to in-person interviews, resulting in the risk of not obtaining sufficient information to answer the research question. Emerging innovations in technology and artificial intelligence (AI) make it possible to enhance the survey experience for respondents and to get closer to face-to-face interactions in web surveys. Building on these innovations, we explore the use of life-like virtual interviewers as a design aspect in web surveys that might motivate respondents and thereby improve the quality of the responses. We investigate the question of whether a virtual interviewer can help to increase the response quality of open-ended questions. Methods & Data In a between-subjects design, we randomly assign respondents to four virtual interviewers and a control group without an interviewer. The interviewers vary with regard to gender and visual appearance (smart casual vs. business casual). We compare respondents’ answers to two open-ended questions embedded in a smartphone web survey with participants of an online access panel in Germany (n=2,000). 
Results The web survey will run in November 2023. After data collection, we will analyze responses to the open-ended questions based on various response quality indicators (i.e., probe nonresponse, number of words, number of topics, response times). Added Value The study provides information on the value of implementing virtual interviewers in web surveys to improve respondents' experience and data quality, particularly for open-ended questions. |
Date: Friday, 23/Feb/2024 | |
11:45am - 12:45pm | A5.1: Recruiting Survey Participants Location: Seminar 1 (Room 1.01) Session Chair: Olga Maslovskaya, University of Southampton, United Kingdom |
|
Recruiting an online panel through face-to-face and push-to-web surveys. HUN-REN Centre for Social Sciences, Hungary Relevance & Research Question: This presentation focuses on the difficulties and solutions related to recruiting web panels through probability-based face-to-face and push-to-web surveys. It also compares the panel composition when using two different survey modes for recruitment. Methods & Data: As part of the ESS SUSTAIN-2 project, a web panel was recruited in 2021/22 through a face-to-face survey of ESS R10 in 12 countries. Unfortunately, the recruitment rate was low, and the sample size achieved in Hungary was inadequate for further analysis. To increase the size of the web panel (CRONOS-2), the Hungarian team initiated a probability-based mixed-mode self-completion survey (push-to-web design). Respondents were sent a postal invitation to go online or complete a questionnaire that was identical to the interviewer-assisted ESS R10 survey. Results: We will present our findings on how the type of survey affects recruitment to a web panel through probability sampling. We will begin by introducing the design of the two surveys, then discuss the challenges encountered in setting up the panel, and finally compare the composition of the panel recruited through the two surveys (interviewer-assisted ESS R10 and push-to-web survey with self-completion). Our research provides valuable insight into how the type of survey and the social and political environment affect recruitment to a web panel. Added Value: This analysis focuses on the mode effect on the recruitment of participants for a scientific research panel. Our findings highlight the effect of the social and political environment, which could serve as a source of inspiration for other local studies.
Initiating Chain-Referral for Virtual Respondent-Driven Sampling – A Pilot Study with Experiments 1German Institute for Economic Research; 2University of Bremen; 3German Center for Integration and Migration Relevance & Research Question Respondent-driven sampling (RDS) is a network sampling technique for surveying complex populations in the absence of sampling frames. The idea is simple: identify some people (“seeds”) who belong or have access to the target population, encourage them to start a survey invitation chain-referral process in their community, and ensure that every respondent can be traced back along the referral chain. But who will recruit? And whom? And which strategies help initiate the referral process? Methods & Data We conducted a pilot study in 2023 in which we invited 5,000 panel study members to a multi-topic online survey. During the survey, we asked respondents whether they would be willing to recruit up to three of their network members. If they agreed, we asked them about their relationship with those network members as well as those people's age, gender, and education, and provided unique survey invitation links to be shared virtually. As part of the study, we experimentally varied the RDS consent wording, information layout, and survey link sharing options. We also applied a dual incentive scheme, rewarding seeds as well as recruits. Results Overall, 624 initial respondents (27%) were willing to invite network members. They recruited 782 people (i.e., on average 1.25 people per seed). Recruits were mostly invited via email (46%) or WhatsApp (43%) and belonged to the seeds' families (53%) and friends (38%). Only 20% of recruits are in contact with their seed less than once a week, suggesting recruitment mostly among close ties. We find an adequate gender balance (52% female) and representation of people with a migration background (22%) in our data, but a high share of people with college or university degrees (52%) and a high median age (52 years).
The impact of the experimental design on recruitment success was negligible. Added Value While RDS is a promising procedure in theory, it often fails in practice, commonly because seeds do not start the chain-referral process at all or do so only insufficiently. Our project shows in which target groups initiating RDS may work and to what extent UX enhancements may increase RDS success. |
2:00pm - 3:00pm | A6.1: Questionnaire Design Choices Location: Seminar 1 (Room 1.01) Session Chair: Julian B. Axenfeld, German Institute for Economic Research (DIW Berlin), Germany |
|
Grid design in mixed device surveys: an experiment comparing four grid designs in a general Dutch population survey. Statistics Netherlands, The Netherlands Relevance & Research Question Within the current stylesheet, half of the sample units were randomly assigned to the standard grid design as currently used (a table format for large screens and a stem-fixed vertical scrollable format for small screens) and the other half to a general stem-fixed grid design (stem-fixed design for both the large and the small screen). Within the experimental stylesheet, one third of the sample was randomly assigned to either the general stem-fixed grid design, a carousel grid design (in which only one item is displayed at a time and, after an item is answered, the next item automatically ‘flies in‘), or an accordion grid design (all items are presented vertically on one page, and answer options are automatically closed and unfolded after an item is answered). Various indicators are used to assess response quality, e.g., break-off, item nonresponse, straightlining, and mid-point reporting. Respondent satisfaction is assessed with a set of evaluation questions at the end of the questionnaire. Results Data are currently being analyzed.
Towards a mobile web questionnaire for the Vacation Survey: UX design challenges Statistics Netherlands, The Netherlands Vivian Meertens & Maaike Kompier Key words: Mobile Web Questionnaire Design, Smartphone First Design, Vacation Survey, Statistics Netherlands, UX testing, Qualitative Approach, Mixed Device Surveys Relevance & Research Question: Although online surveys are not always fit for small screens and mobile device navigation, the number of respondents who start online surveys on mobile devices instead of on a PC or laptop is still growing. Statistics Netherlands (CBS) has responded to this trend by developing and designing mixed device surveys. This study focuses on the redesign of the Vacation Survey, applying a smartphone-first approach. The Vacation Survey is a web-only panel survey that could previously only be completed on a PC or laptop. Its layered design with a master-detail approach was formatted in such a way that a large screen was needed to complete the questionnaire. Despite a warning in the invitation letter that a PC or laptop should be used, 14.5% of first-time logins in 2023 were via smartphones, prompting a redesign with a smartphone-first approach. The study examines the applicability and understandability of the Vacation Survey's layered design, specifically its master-detail approach, from a user experience (UX) design perspective. Results: Added Value: Optimising recall-based travel diaries: Lessons from the design of the Wales National Travel Survey National Centre for Social Research, United Kingdom Relevance & Research Question: Recall-based travel diaries require respondents to report their travel behaviour over a period ranging from one to seven days.
During this period, they are asked to indicate the start and end times and locations, modes of transport, distances, and the number of people on each trip. Depending on the mode, additional questions are asked to gather information on ticket types and costs or fuel types. Due to the specificity of the requested information and its non-centrality for most respondents, travel diaries pose a substantial burden, increasing the risk of satisficing behaviours and trip underreporting. Methods & Data: In this presentation, we describe key decisions made during the design of the Wales National Travel Survey. This push-to-web project includes a questionnaire and a 2-day travel diary programmed into the survey. Results: Critical aspects of these decisions include the focus of the recall (trip, activity, or location based) and the sequence of follow-up questions (interleaved vs. roster approach). Recent literature suggests that location-based diaries align better with respondents’ cognitive processes than trip-based diaries and help reduce underreporting. Therefore, a location-based travel diary was proposed with an auto-complete field to match inputs with known addresses or postcodes. Interactive maps were also proposed for user testing. While they can be particularly useful when respondents have difficulty describing locations or when places lack formal addresses, previous research warns that advanced diary features can increase drop-off rates. Regarding the follow-up sequence, due to mixed findings in the literature and limited information on the performance of these approaches in web-based travel diaries, experimentation is planned to understand how each approach performs in terms of the accuracy of the filter questions and the follow-up questions. 
Additionally, this presentation discusses the challenges and options for gathering distance data in recall-based travel diaries, along with learnings from the early phases of diary testing based on the application of a Questionnaire Appraisal System and cognitive/usability interviews. Added Value: These findings offer valuable insights into the design of complex web-based surveys with multiple loops and non-standard features, extending beyond travel diaries. |
3:15pm - 4:15pm | A7.1: Survey Methods Interventions 2 Location: Seminar 1 (Room 1.01) Session Chair: Joss Roßmann, GESIS - Leibniz Institute for the Social Sciences, Germany |
|
Pushing older target persons to the web: Do we still need a paper questionnaire? GESIS - Leibniz-Institut für Sozialwissenschaften, Germany Relevance & Research Question Methods & Data Results Added Value Clarification features in web surveys: Usage and impact of “on-demand” instructions GESIS - Leibniz Institute for the Social Sciences, Germany Relevance & Research Question |
Conference Software: ConfTool Pro 2.8.101 © 2001–2024 by Dr. H. Weinreich, Hamburg, Germany |