CAPI, or not CAPI – That Is the Question: Using Administrative Data to Assign the Optimal Mode for Maximizing Response Rates in a Household Panel
Patrick Lazarevic, Marc Plate
Statistik Austria, Austria
Relevance & Research Question: Selecting the appropriate mode(s) of data collection is a major consideration for every survey. Personal interviews (CAPI) – typically recognized as the gold standard of data collection in surveys – have been increasingly called into question as the be-all and end-all of data collection methods, with a growing shift towards self-administered modes, particularly web surveys (CAWI). Reasons for this range from representation concerns due to shifting mode preferences and the flexibility these modes offer to busy respondents, all the way to practical constraints like health and safety concerns, interviewer availability, and budgetary restrictions. Yet, recruitment using CAWI alone might result in biases due to, e.g., systematic differences in digital skills. Thus, many surveys employ mixed-mode designs, raising the question of how to determine which mode should be offered to whom.
Methods & Data: The Austrian Socio-Economic Panel (ASEP), a household panel of the Austrian population, experimentally tested a tailored mode-design that uses administrative data to assign half of the sample's households to their presumably preferred mode (CAPI/CAWI), while also offering the other mode after persistent nonresponse. The remaining households were randomly assigned to one of the mode-designs (CAPI-First/CAWI-First) as control groups. To evaluate the utility of the tailored mode-design, we employed a multi-faceted analytical approach, comparing a variety of indicators between the mode-designs, such as the rate of proxy interviews, the number of requested mode changes, the overall response rates, and the resulting nonresponse bias.
Results: The tailored mode-design yielded promising results. For example, response rates were consistently higher than in the other mode-designs, while proxy rates and nonresponse bias were considerably lower.
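To make the experimental allocation concrete, the following is a minimal sketch of such a tailored assignment. The household field, the tailoring rule, and the 50/50 split used here are illustrative assumptions only, not the actual ASEP procedure, which is not specified in the abstract.

```python
# Illustrative sketch of a tailored mode-design assignment (not the ASEP code).
# The predictor "youngest_adult_age" and the decision rule are hypothetical.
import random

def predict_preferred_mode(household):
    """Hypothetical tailoring rule based on administrative proxies."""
    return "CAWI" if household["youngest_adult_age"] < 60 else "CAPI"

def assign_mode_design(households, seed=42):
    rng = random.Random(seed)
    assignments = []
    for hh in households:
        if rng.random() < 0.5:
            # Experimental group: start with the presumably preferred mode,
            # offering the other mode only after persistent nonresponse.
            design = f"tailored-{predict_preferred_mode(hh)}-first"
        else:
            # Control group: randomly assigned CAPI-first or CAWI-first design.
            design = rng.choice(["CAPI-first", "CAWI-first"])
        assignments.append((hh["id"], design))
    return assignments

if __name__ == "__main__":
    sample = [{"id": 1, "youngest_adult_age": 34},
              {"id": 2, "youngest_adult_age": 71}]
    print(assign_mode_design(sample))
```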
Added Value: The tailored mode-design presents a promising alternative to established single- or mixed-mode designs. This novel approach could help decrease nonresponse bias and survey costs while maintaining data quality.
The Framework of Survey Behaviour: An Extension of the Framework for Web Survey Participation
Jeldrik Bakker, Jonas Klingwort, Vera Toepoel
Statistics Netherlands
Relevance & Research Question:
Why do people behave the way they do in surveys? The answer to this fundamental question in survey research can help increase survey participation, decrease break-off and improve data quality. Underneath this seemingly simple question is a complex interplay of factors influencing survey behaviour (i.e., the behaviour of (potential) respondents). While current frameworks, theories and models provide valuable insights into this behaviour, they all have limitations in understanding survey behaviour as a whole. Furthermore, none are generically applicable across survey behaviours for all modes, devices, and target populations (i.e., person, household, establishment).
Methods & Data:
We conducted an extensive literature review of both generic behavioural and survey-specific frameworks, theories, and models. Using the Framework for Web Survey Participation (Peytchev, 2009) as a starting point, we extended this framework into our generic Framework of Survey Behaviour.
Results:
The resulting framework provides a holistic view of the factors affecting the key survey decisions and the underlying behaviours that shape those decisions. The key survey decisions reflect the three main goals in survey research: getting people to start the survey, complete the survey, and provide high-quality responses. These decisions are affected by five groups of factors: environmental factors, respondent factors, interviewer factors, survey design factors, and questionnaire factors. The underlying survey behaviours that shape those decisions are diverse, ranging from (proxy) responding, satisficing, breaking off, and straightlining to speeding.
Added Value:
By centralising behaviour in the framework, we offer a comprehensive approach that considers all human, organisational, and environmental elements involved in the survey process. The framework guides researchers in designing surveys and collecting high-quality data across diverse contexts. Understanding and being able to influence survey behaviour for the better is key to improving respondent engagement and data quality. Practical recommendations are provided, and future research areas are identified.
References:
Peytchev, A. (2009). Survey breakoff. Public Opinion Quarterly, 73(1):74–97.
Survey design features that matter: A meta-analysis using official statistics surveys of the Netherlands
Jeldrik Bakker, Jonas Klingwort, Vera Toepoel
Statistics Netherlands
Relevance & Research Question:
More and more surveys are being conducted, but response rates are declining. The most effective way to address the missing data problem is to prevent nonresponse from arising in the first place. Understanding which factors influence response rates is therefore crucial for improving survey participation and reducing nonresponse bias. This study investigates which survey features significantly affect response rates and how they can be optimized to improve survey participation.
Methods & Data:
We conducted a multilevel meta-analysis using Statistics Netherlands’ data from 38 person population surveys with over 1,200 samples. The surveys were fielded over a seven-year period (2018–2024) and had a total sample size of over 7 million people. They range from one-time to recurring studies with frequencies from weekly to biennial. We used 72 factors that potentially affect response rates, such as respondent factors (e.g., age, gender, nationality, device use), survey design factors (e.g., year, month, mode, incentive, fieldwork period, number of contacts, topic), and page & question factors (e.g., duration, number of blocks, pages, and questions, number of questions per question type, number of introduction texts).
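As a rough illustration of the modelling approach, a multilevel model of sample-level response rates with a random intercept per survey could look like the sketch below. This is a hedged sketch only: the variable names, the synthetic data, and the specific statsmodels call are assumptions for illustration, not the actual CBS specification.

```python
# Hedged sketch of a multilevel model of response rates (samples nested in
# surveys). Variable names and synthetic data are assumptions for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "survey": rng.integers(0, 38, n),                     # grouping factor: survey
    "mode": rng.choice(["CAWI", "CAPI", "mixed"], n),     # survey design factor
    "incentive": rng.choice(["none", "conditional"], n),  # survey design factor
    "n_questions": rng.integers(10, 120, n),              # questionnaire factor
})
# Synthetic outcome: response rate per sample (demonstration only)
df["response_rate"] = (
    0.45
    + 0.05 * (df["incentive"] == "conditional")
    - 0.001 * df["n_questions"]
    + rng.normal(0, 0.05, n)
)

# Random intercept per survey captures between-survey heterogeneity;
# fixed effects estimate the contribution of each design feature.
model = smf.mixedlm("response_rate ~ mode + incentive + n_questions",
                    data=df, groups=df["survey"])
print(model.fit().summary())
```

In practice, further respondent and questionnaire factors would enter as additional fixed effects; the random intercept is what distinguishes this pooled multilevel approach from running separate per-survey regressions.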
Results:
Preliminary findings suggest that the data collection mode, the type of incentive, the survey topic, the number and types of questions, the device used by the respondent, and age and gender have significant effects on the response rate. Interestingly, the length of the fieldwork period, the number of reminders, and the periodicity of the survey show non-significant effects.
Added Value:
This study offers comprehensive insights into improving response rates for official statistics surveys, highlighting effective data collection strategies and identifying survey designs to avoid. Drawing on a wide range of design features, the study serves as a practical toolbox for national statistical agencies, survey agencies, and survey researchers alike. Notably, it represents the first practical application of the Framework of Survey Behaviour, which has also been submitted to this conference.