Session
A4.1: Data Quality in Online Surveys

Presentations
Data Quality Indicators: Some Practical Recommendations
GESIS, Germany

Fielding a long online survey: Evidence from the first Generations and Gender Survey (GGS) in the UK
University of Southampton, United Kingdom

Relevance & Research Question: Our team has collected the first Generations and Gender Survey (GGS) in the UK. The survey used a push-to-web design in which only the online mode was available to respondents. The expected completion time communicated to respondents was around 50 minutes. The length of online surveys has recently attracted considerable attention from survey methodologists, as many high-quality social surveys have moved to online data collection or mixed-mode designs in recent years. Until recently, the rule of thumb was that online surveys should not exceed 10-20 minutes. Recent experiments conducted by the European Social Survey (ESS) suggest that it is possible to conduct longer online surveys (35 or even 55 minutes) without a significant reduction in data quality. However, more evidence is needed to establish the optimal length of online social surveys.
Methods & Data: In this paper, we present evidence from fielding a long probability-based online survey. We reflect on the challenges and opportunities of conducting a long probability-based online data collection in the UK by reporting on nonresponse, break-off rates, and the quality of responses. We also examine debriefing questions in which respondents were able to reflect on how they felt about the survey. When the paradata become available in April 2023, we will compare the recorded completion times with respondents' perceptions of how long the survey was.
Results: Preliminary results suggest that, despite the UK GGS survey being long and complex, 82% of respondents found it "not at all difficult".
A high proportion of respondents felt that the survey was about as long as they expected (47%), with a further 19% feeling that it was shorter than expected. Another positive outcome was that 82% of participants gave consent to be recontacted for the second wave of the UK GGS survey.
Added Value: Our findings provide evidence on the optimal length of long and complex online social surveys and have important implications for survey practice.

Ability to identify fakers in online surveys: Comparison of BIDR.Short.24 and MCSD-SF
University of Iceland, Iceland

Relevance & Research Question: Socially desirable responding (SDR) is a common problem in self-report measures, as the tendency to present oneself favorably to others can influence the honesty of responses. One facet of SDR is faking, the intentional misrepresentation of oneself in self-reports. There are two kinds of faking: faking good and faking bad. Faking good involves deliberately presenting oneself favorably to others, whereas faking bad involves deliberately presenting oneself in an undesirable manner. There are several ways to detect faking, one of them being SDR scales. Two of the most widely used scales are the Marlowe-Crowne Social Desirability Scale (MCSDS) and the Balanced Inventory of Desirable Responding (BIDR). To evaluate which SDR measure is better suited to detect SDR, one can compare their ability to detect faking. A previous study comparing the ability of the MCSDS and the BIDR to detect faking found that the MCSDS outperformed the BIDR in identifying both types of faking. A limitation of the applicability of those results is that the comparison was between full-length versions of the measures, whereas short-form versions are usually more practical as they reduce response fatigue. For that reason, the current study compared the ability of two short forms to detect faking: the MCSD-SF (short-form version of the MCSDS) and the IM-Short.24 (short form of the IM subscale of the BIDR).
Methods & Data: Participants were recruited online through a probability-based panel. The final sample consisted of 106 men and 122 women; the remaining participants chose not to answer the gender question. Participants were randomly assigned to one of three groups: 1) standard instructions, 2) fake-good instructions, or 3) fake-bad instructions, and were then asked to complete both the MCSD-SF and the IM-Short.24.
Results: Discriminant function analyses and receiver operating characteristic (ROC) curve analyses showed that, overall, the MCSD-SF outperformed the IM-Short.24 in identifying faking good, while the IM-Short.24 outperformed the MCSD-SF in identifying faking bad.
Added Value: These findings show a clear preference for the MCSD-SF for identifying fake-good responses and the IM-Short.24 for identifying fake-bad responses, which should assist researchers in choosing an appropriate measure for their studies, as well as advance the use of SDR measures overall.
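The kind of ROC comparison described in the Results can be sketched in a few lines. This is a minimal illustration, not the study's analysis: the scale names, group labels, and all scores below are hypothetical, and the AUC is computed via the Mann-Whitney rank identity rather than any specific statistics package.

```python
def roc_auc(scores_neg, scores_pos):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen instructed faker scores above a randomly chosen honest
    respondent on the scale (ties count as 0.5)."""
    total = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                total += 1.0
            elif p == n:
                total += 0.5
    return total / (len(scores_pos) * len(scores_neg))

# Illustrative scale totals (hypothetical, not the study's data):
# standard-instruction group vs. fake-good group on two short-form scales.
honest_A = [8, 10, 11, 12, 13]   # "scale A", standard instructions
faking_A = [14, 15, 16, 17, 18]  # "scale A", fake-good instructions
honest_B = [9, 11, 12, 14, 16]   # "scale B", standard instructions
faking_B = [10, 13, 15, 17, 18]  # "scale B", fake-good instructions

auc_A = roc_auc(honest_A, faking_A)
auc_B = roc_auc(honest_B, faking_B)
print(f"scale A AUC: {auc_A:.2f}")  # -> 1.00 (perfect separation here)
print(f"scale B AUC: {auc_B:.2f}")  # -> 0.72 (weaker separation)
```

An AUC of 0.5 means the scale separates fakers from honest respondents no better than chance; the scale with the higher AUC under a given faking instruction is the better detector for that type of faking, which is the comparison logic the abstract reports.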