Large-scale social surveys without field interviewers in the UK: An evidence review
Cristian Domarchi1, Olga Maslovskaya1, Peter J. Lynn2, Rory Fitzgerald3, Nhlanhla Ndebele3, Ruxandra Comanaru3
1University of Southampton, United Kingdom; 2University of Essex, United Kingdom; 3City, University of London, United Kingdom
Relevance & Research Question
Data collection organisations are shifting toward new approaches, with social surveys undergoing significant design and implementation changes. Since the COVID-19 pandemic, agencies have increasingly moved to online data collection due to dwindling response rates and rising fieldwork costs. A key challenge for self-completion general population surveys is the absence of field interviewers to facilitate recruitment and guide respondents through the survey process. This research examines the UK survey landscape, aiming to identify recruitment methods for self-administered surveys that can produce more representative samples of the general population.
Methods & Data
We present findings from an information request sent to nine leading UK survey agencies. We collected information on surveys without field interviewers conducted between 2018 and early 2024, including publicly available technical and methodological reports and other survey materials, along with internal reports provided by the agencies. We processed and coded this information, building a spreadsheet containing 144 instances of 59 longitudinal and cross-sectional surveys, along with 227 communication materials.
Results
Across the surveys in our dataset, 57% of responses were collected online, 38% on paper, and 5% by telephone. Most surveys (84%) offer incentives to participants; 92% of these incentives are monetary and only 33% are given unconditionally. Response rates vary widely: household-based cross-sectional surveys tend to have lower response rates than individual-based ones (81% vs. 47% achieving response rates of 30% or lower), while longitudinal surveys generally have the highest response rates. While only 35% of reports assess sample representativeness, the general trend confirms that mixed-mode surveys yield more representative samples than single-mode surveys.
Added Value
To our knowledge, this review is the first coordinated effort to collate and summarise recruitment strategies for surveys without field interviewers in the UK. It covers sampling design, communication strategies and materials, incentivisation, fieldwork procedures, response rates, and report quality assessments. Our dataset provides insights into the current state of survey practice and helps identify practices that might contribute towards higher response rates and better sample composition.
Does web as first mode in a mixed-mode establishment survey affect the data quality?
Corinna König, Joe Sakshaug
Institute for Employment Research, Germany
Relevance & Research Question
Due to declining response rates and rising survey costs, establishment surveys are transitioning (or have already transitioned) from traditional interviewer-administered modes to online and mixed-mode data collection. Previous analyses have shown that mixed-mode designs maintain response rates at lower costs compared to face-to-face designs, but the question of how far introducing the online mode affects measurement quality has rarely been addressed in the establishment survey literature.
Methods & Data
The Establishment Panel of the Institute for Employment Research (IAB) was primarily a face-to-face survey until 2018. Since then, the IAB has experimented with administering a sequential mixed-mode design (web first, followed by face-to-face) versus the traditional face-to-face design. We address our research question by using these data and comparing the survey responses from the single- and mixed-mode experimental groups to corresponding administrative data from employer-level social security notifications. We assess the accuracy of survey responses in both mode designs and evaluate measurement equivalence. In particular, we draw on a large set of open-ended variables on the number of employees with certain characteristics, and we additionally report on differences in accuracy between the individual web and face-to-face modes. Furthermore, we consider differences in several alternative data quality indicators, including item nonresponse, socially desirable responding, and the use of filter questions. To account for selection and nonresponse bias, weights are used throughout the analysis; as sensitivity checks, the weights are estimated in several ways, applying random forests and extreme gradient boosting in addition to propensity-score models.
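The abstract does not include implementation details, but the general weighting logic can be illustrated with a minimal sketch. The following Python example, assuming hypothetical frame variables (establishment size and industry) and the scikit-learn library, shows how response propensities might be estimated with a logistic regression and re-estimated with a random forest as a sensitivity check; the IAB's actual models, covariates, and software are not described in the abstract, and extreme gradient boosting would follow the same pattern.

    # Illustrative sketch only (not the IAB's actual pipeline):
    # nonresponse weights as inverse predicted response propensities.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical sampling frame with placeholder covariates.
    rng = np.random.default_rng(0)
    frame = pd.DataFrame({
        "log_size": rng.normal(3.0, 1.0, 5000),   # log number of employees
        "industry": rng.integers(0, 10, 5000),    # industry code
        "responded": rng.integers(0, 2, 5000),    # 1 = took part in the survey
    })
    X = pd.get_dummies(frame[["log_size", "industry"]], columns=["industry"])
    y = frame["responded"]

    # Baseline: logistic-regression propensity model.
    logit = LogisticRegression(max_iter=1000).fit(X, y)
    p_logit = logit.predict_proba(X)[:, 1]

    # Sensitivity check: re-estimate the propensities with a random forest.
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    p_rf = rf.predict_proba(X)[:, 1]

    # Inverse-propensity weights for respondents, normalised to the
    # respondent sample size.
    resp = (frame["responded"] == 1).to_numpy()
    w = 1.0 / p_logit[resp]
    w *= resp.sum() / w.sum()

Comparing weighted survey estimates under the alternative propensity models (here w from p_logit versus the analogous weights from p_rf) is one way to check that substantive conclusions are not driven by the choice of weighting model.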
Results
Preliminary results show that measurement error bias in online interviews is sometimes larger than in face-to-face interviews, but the difference is no longer significant when the online mode is embedded in the mixed-mode design. For sensitive questions, we find no evidence that online respondents answer in a more socially desirable way. Further findings indicate slightly higher item nonresponse in the online mode, but no difference at the level of the sequential mixed-mode design.
Added Value
The study thus provides comprehensive insights into data quality in mixed-mode data collection for establishment surveys and informs survey practitioners about the implications of switching from single- to mixed-mode designs in large-scale establishment panels.
Examining Differences in Face-to-Face and Self-Administered Mixed-Mode Surveys: Insights from a General Social Survey
Alexandra Asimov
GESIS, Germany
Relevance & Research Question
General social surveys are traditionally conducted face-to-face, maintaining long time series for tracking public opinion trends. To ensure comparability, survey designs typically change minimally over time. However, face-to-face surveys have been experiencing declining response rates and rising costs. As a result, self-administered mixed-mode designs have gained popularity due to their ability to circumvent these challenges. Since switching modes is a major methodological change, investigating data comparability with the original mode is critical. Furthermore, self-administered mixed-mode designs can be implemented in two ways, concurrent or sequential, each resulting in different proportions of web and mail survey responses. This raises the question: does this difference in proportions affect comparability with face-to-face surveys?
Methods & Data
This study uses the German General Social Survey (ALLBUS) 2023, which surveys the general population aged 18 and older and is traditionally conducted face-to-face. In 2023, ALLBUS included three randomized experimental groups: (1) face-to-face, (2) concurrent self-administered mixed-mode (mail and web), and (3) sequential self-administered mixed-mode (mail and web). This study examines data comparability by evaluating differences in nonresponse bias, sample composition, and measurement between face-to-face and the two mixed-mode designs.
Results
Overall, both self-administered mixed-mode designs produce similar results, with each showing slight strengths. The sequential design is slightly more similar to the face-to-face design in terms of nonresponse bias and sample composition. In contrast, the concurrent design achieves slightly smaller measurement differences relative to the face-to-face design.
Added Value
This study offers valuable insights into the shift from face-to-face to self-administered mixed-mode designs by comparing concurrent and sequential approaches. It highlights their strengths in maintaining data comparability, with both designs producing similar results overall. Since the proportion of web and mail responses varies between the two designs yet results remain similar, this suggests that the web and mail modes themselves are comparable.