GOR 26 - Annual Conference & Workshops
Annual Conference - Rheinische Hochschule Cologne, Campus Vogelsanger Straße
26 - 27 February 2026
GOR Workshops - GESIS - Leibniz-Institut für Sozialwissenschaften in Cologne
25 February 2026
Conference Agenda
Overview and details of the sessions of this conference.
Session Overview

Session 7.3: Designing inclusive and engaging surveys

Presentations
Accessibility and inclusivity in self-completion surveys: An evidence review
University of Southampton, United Kingdom; City St George’s, University of London; Institute for Social and Economic Research, University of Essex, United Kingdom

Relevance & Research Question: Survey research aims to understand social issues and inform effective public policy. For results to be accurate and equitable, surveys must inclusively represent diverse population sub-groups. Excluding these groups can lead to biased data and policies that perpetuate inequalities. Consequently, inclusivity is now a core principle for major UK statistical bodies such as the UK Statistics Authority. This has led to a “respondent-centred design” approach, which argues that making surveys accessible for marginalised groups often benefits all respondents (Wilson and Dickinson 2021). However, achieving greater inclusivity involves practical trade-offs, as measures such as targeted procedures, alternative response modes, and questionnaire translations or adaptations can be resource intensive. Evidence on inclusivity practices implemented in probability-based self-administered surveys is scarce, and research is required to establish best-practice recommendations. This evidence review highlights measures that aim to increase participation among harder-to-survey population sub-groups in self-administered surveys, while maintaining the goal of obtaining high-quality, representative data. We focus on two main population sub-groups: (1) individuals with disabilities and impairments and (2) individuals with literacy and/or language limitations.

Results: This evidence review identifies general recommendations for recruitment practices that facilitate the inclusion of these frequently excluded sub-groups. It also highlights the cost trade-offs involved in implementing these methods, beyond the ethical imperative for inclusivity.

Added Value: This study addresses an under-researched area by providing evidence-based, practical recommendations for enhancing participation, accessibility and inclusivity in large-scale surveys.

Effectiveness of the knock-to-nudge approach for establishing contact with respondents: Evidence from the National Readership Survey (PAMCo) and National Survey for Wales (NSW) in the UK
University of Southampton, United Kingdom

Relevance & Research Question: Knock-to-nudge is an innovative method of household contact, first introduced during the COVID-19 pandemic when face-to-face interviewing was not possible. In this approach, interviewers visit households and encourage sampled units to participate in the survey through a remote mode (web or telephone) at a later date. Interviewers can also collect contact information, such as telephone numbers or email addresses, or conduct within-household selection of individuals on the doorstep if required. The approach has continued to be used post-pandemic in a number of surveys, but there remains a knowledge gap regarding its advantages and limitations. In particular, it is still unclear whether the knock-to-nudge approach improves sample composition and data quality.

Methods & Data: We analysed data from two UK surveys, the National Readership Survey (PAMCo) and the National Survey for Wales (NSW), each of which employed a different version of the knock-to-nudge approach. Our aim was to determine whether this method improves survey participation and sample composition, and to assess how incorporating participants recruited via knock-to-nudge affects data quality and responses to substantive questions. We investigate these effects using descriptive analyses, statistical tests, and logistic regression models (see the sketch after this abstract).

Results: Our findings show that knock-to-nudge is associated with (1) a significant increase in response rates, (2) improved sample composition, (3) higher item non-response, and (4) significant differences in responses to substantive survey questions.

Added Value: This study contributes to the under-researched area of knock-to-nudge methods. The results indicate that, when carefully designed and implemented, the approach can enhance recruitment efforts and improve the composition of the resulting samples. However, its viability as a universal solution for mixed-mode surveys depends on whether these methodological benefits outweigh the potential compromises in data quality and the additional implementation costs.
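The abstract names logistic regression as one of its analysis tools. A minimal sketch of what such a response-propensity model might look like in Python with statsmodels is given below; the file name and variable names (responded, knock_to_nudge, region, imd_quintile) are invented for illustration and are not the actual PAMCo or NSW variables.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical case-level data: one row per sampled address, with a 0/1
# response outcome, a 0/1 flag for whether the case received a
# knock-to-nudge visit, and illustrative sampling-frame covariates.
df = pd.read_csv("sample_cases.csv")

# Logistic regression of response propensity on the knock-to-nudge
# indicator, controlling for region and an area deprivation quintile.
model = smf.logit(
    "responded ~ knock_to_nudge + C(region) + C(imd_quintile)",
    data=df,
).fit()

# Report odds ratios and 95% confidence intervals for interpretation.
print(np.exp(model.params))
print(np.exp(model.conf_int()))

Under this setup, an odds ratio above 1 on the knock_to_nudge term would correspond to the higher response rates reported in the Results above.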
How do Respondents Evaluate a Chatbot-Like Survey Design? An Experimental Comparison With a Web Survey Design
Technical University of Darmstadt, Germany

Relevance & Research Question: Web surveys efficiently collect data on attitudes and behaviors, but often face challenges such as satisficing behavior. The increasing prevalence of respondents answering surveys on smartphones has brought additional design challenges. Using a messenger design as a web survey interface offers an opportunity to mitigate some of the drawbacks of a responsive web survey design. Recent studies have demonstrated that a chatbot-like survey design may provide higher-quality responses and greater engagement, albeit with longer response times. This study explores how respondents evaluate a messenger interface in a web survey setting.

Methods & Data: In 2025, a sample of 2,123 members of a non-probability online access panel in Germany answered a survey on the topic of “vacation”. The sample was cross-stratified by age and gender and limited to respondents aged 18-74. In a field experiment employing a between-subjects design, respondents were randomly assigned to either a web survey design or a chatbot design that mimics a messenger interface (see the sketch after this abstract). In addition to survey duration, we assessed the respondents’ evaluations of user experience, perceived social presence, perceived flow, ease of use, and general satisfaction.

Results: Overall, respondents in the chatbot condition were less satisfied with the survey and took longer to answer the questions. They also experienced lower levels of flow and ease of use. There was no significant difference in user experience, and survey-related social presence was lower only for respondents using a mobile device. Older respondents, women, and respondents with a higher education degree evaluated the chatbot design more favorably than younger respondents, men, and respondents with lower levels of education. However, the preference for the web survey design held across all respondent groups irrespective of age, education and gender, and for respondents using a desktop as well as a mobile device.

Added Value: This study contributes to the assessment of messenger interfaces for administering survey questions. We discuss the results in light of the recent trend towards chatbot-like interfaces in AI-supported surveys.
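As a minimal sketch of a between-subjects comparison like the one described above, the Python snippet below assumes hypothetical respondent-level data with a condition flag, a duration measure, and a satisfaction rating; the file name and column names are illustrative stand-ins, not the study's actual measures.

import pandas as pd
from scipy import stats

# Hypothetical respondent-level data from the experiment: 'condition'
# is "web" or "chatbot"; 'duration_sec' and 'satisfaction' (1-5 rating)
# stand in for the study's outcome measures.
df = pd.read_csv("experiment_responses.csv")

web = df.loc[df["condition"] == "web"]
bot = df.loc[df["condition"] == "chatbot"]

# Survey durations are typically right-skewed, so a Mann-Whitney U test
# is a safer choice than a t-test for the duration comparison.
u, p_dur = stats.mannwhitneyu(
    bot["duration_sec"], web["duration_sec"], alternative="two-sided"
)
print(f"Duration: U={u:.0f}, p={p_dur:.4f}")

# Satisfaction ratings: Welch's t-test, which does not assume equal
# variances across the two conditions.
t, p_sat = stats.ttest_ind(bot["satisfaction"], web["satisfaction"], equal_var=False)
print(f"Satisfaction: t={t:.2f}, p={p_sat:.4f}")

With random assignment, significant differences on these tests would correspond to the condition effects on duration and satisfaction reported in the Results.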