Session 4.1: Poster Session

Presentations
Mapping the Way Forward: Integrating APIs into Travel Surveys

National Centre for Social Research, United Kingdom

Relevance & Research Question: Technological advances are making it increasingly possible to collect information that augments traditional survey research. This study explores the performance of a survey integrated with an Application Programming Interface (API) that allows respondents to identify locations on a map. Specifically, we examine the accuracy and usability of this API integration, focusing on its potential to improve data quality in the context of travel diaries.

Methods & Data: This research is based on a large-scale pilot conducted in May and June 2024, in which 7,500 addresses in Wales were issued. The web survey was programmed in Blaise 5 and integrated with the Ordnance Survey Places and Names databases, enabling respondents to locate places on a map. Data were collected from 1,008 individuals, resulting in 2,743 reported journeys.

Results: Most respondents confirmed that the location information was correct (92% on day 1 and 95% on day 2), indicating a high rate of accurate matches for searched locations. The most commonly reported errors concerned the start or end points of journeys, followed by the inclusion of journeys that did not occur. In 278 instances (7.9% of all map entries), respondents selected the "I could not find my location" option and provided a free-text description instead. These descriptions varied widely in specificity, ranging from precise locations to vague place names that could correspond to multiple places. While the API integration performs well overall, some respondents encountered difficulties with location precision, suggesting that some degree of post-survey editing will be required.

Added Value: This study highlights the potential of API integration to enhance survey data collection by capturing detailed and specific geographic information.
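The search-and-fallback flow reported in the Results above can be illustrated with a minimal sketch. A hypothetical mini-gazetteer stands in for the Ordnance Survey Places and Names lookup; none of the names or places below are from the actual Blaise 5 instrument.

```python
# Illustrative sketch only (not the actual survey integration): a respondent
# searches for a place; on a match the instrument shows it on the map for
# confirmation, otherwise it falls back to a free-text description, mirroring
# the "I could not find my location" option described above.

# Hypothetical mini-gazetteer standing in for the Places/Names databases.
GAZETTEER = {
    "cardiff central station": (51.476, -3.179),
    "swansea marina": (51.615, -3.937),
}

def locate(query: str):
    """Return (status, payload): a coordinate match or a free-text fallback."""
    match = GAZETTEER.get(query.strip().lower())
    if match is not None:
        return "matched", match   # respondent confirms the point on the map
    return "free_text", query     # stored verbatim for post-survey editing

print(locate("Cardiff Central Station"))  # -> ('matched', (51.476, -3.179))
print(locate("the shop near my house"))   # -> ('free_text', 'the shop near my house')
```

The free-text branch is what produces the 278 entries requiring post-survey editing: the instrument keeps the respondent's description rather than forcing a possibly wrong map match.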
Our findings suggest that while the approach is largely effective, there are areas for improvement to ensure that respondents can accurately and easily identify locations on a map. This research contributes to the growing field of technological augmentation in survey methodologies, offering insights into the practical challenges and opportunities of API-enabled geographic data collection.

Uncovering Housing Discrimination: Lessons Learned from a Large-Scale Field Experiment via Immoscout24

1German Centre for Integration and Migration Research (DeZIM); 2Bielefeld University, Germany; 3Freie Universität Berlin, Germany

Relevance & Research Question: The experiment involved sending over 2,000 standardized rental applications via the online portal Immoscout24 to landlords in 10 major German cities. These applications systematically varied applicant names to signal different ethnic backgrounds. The poster emphasizes key aspects of the technical implementation:
Results:

Straightlining in CS versus AD Items, Completed by Phone versus PC

University of Groningen

Relevance & Research Question: Straightlining has been shown to be more prevalent in surveys completed on a PC than on smartphones. However, the combined effect of device use and item format on straightlining has not been researched extensively. Agree-disagree (AD) items are assumed to evoke more straightlining than construct-specific (CS) items (i.e., items with a different response scale for each item, depending on the response dimension being evaluated). To fill this gap, we aim to answer the following research question: What are the combined effects of AD versus CS items and device use on response patterns that are assumed to be indicators of satisficing, such as straightlining?

Methods & Data: Our survey was conducted in November 2024, with 3,500 flyers distributed across a neighborhood in a large Dutch city, followed by face-to-face recruitment by students. The flyers included a QR code and URL for survey access. The survey was filled out by 556 individuals who completed at least 50% of the questions (478 completed the full questionnaire), yielding a 13% response rate at the household level. A smartphone was used by 85% of participants, whereas 15% used a PC. Respondents were randomly assigned to four blocks of either five AD items or five CS items. Straightlining was defined in two ways: strictly, as providing exactly the same answer to all five items in a battery, and more loosely, by computing the within-respondent variance for each battery.

Results: Straightlining (both in the strict definition and in terms of lower variance) occurred more frequently in AD items than in CS items, with 18% of respondents in the AD condition showing straightlining, as opposed to 5% of respondents in the CS condition.
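The two straightlining measures defined in the Methods above can be sketched as follows. This is a minimal illustration with made-up responses; the variable names and example values are not from the study.

```python
# Sketch of the two straightlining definitions, assuming each battery is a
# list of five numeric responses from one respondent.
import statistics

def strict_straightlining(battery):
    """Strict definition: exactly the same answer to all items in the battery."""
    return len(set(battery)) == 1

def battery_variance(battery):
    """Looser measure: within-respondent variance across the battery's items."""
    return statistics.pvariance(battery)

ad_battery = [4, 4, 4, 4, 4]   # e.g. an agree-disagree battery, straightlined
cs_battery = [2, 5, 3, 4, 1]   # e.g. a construct-specific battery, varied

print(strict_straightlining(ad_battery))  # True
print(strict_straightlining(cs_battery))  # False
print(battery_variance(cs_battery))       # 2.0
```

Low within-respondent variance flags near-straightlining that the strict definition misses, e.g. a respondent alternating between two adjacent scale points.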
PC respondents were more likely than smartphone respondents to straightline in batteries phrased as AD items, but this effect was not found when items were phrased as CS items.

Added Value: Our study shows that using CS items might be more beneficial when the questionnaire is filled out on a computer than on a smartphone.

Bayesian Integration of Probability and Nonprobability Web Survey Data

1IAB, Germany; 2LMU-Munich; 3Utrecht University, the Netherlands; 4University of Manchester

Relevance & Research Question: The popularity of nonprobability-sample (NPS) web surveys is increasing due to their convenience and relatively low costs. In contrast, traditional probability-sample (PS) surveys are suffering from decreasing response rates, with a consequent increase in survey costs. Integrating the two types of samples to overcome their respective disadvantages is one of the current challenges. We propose an original methodology for combining probability and nonprobability online samples to improve analytic inference on binary model parameters.

Methods & Data: To combine the information from the two samples, we adopt a Bayesian framework in which inference is based on the PS and the available information from the NPS is supplied in a natural way through the prior. We focus on the logistic regression model and conduct a simulation study comparing the performance of several priors in terms of mean squared error (MSE) under different selection scenarios, selection probabilities, and sample sizes. Finally, we present a real-data analysis of an actual probability-based survey and several parallel nonprobability web surveys from different vendors, reflecting different selection scenarios.

Results:

Added Value: The method provides a means of integrating probability and nonprobability web survey data to address important trade-offs between costs and quality/error.
For survey practitioners, the method offers a systematic framework for leveraging information from nonprobability data sources in a cost-efficient manner to potentially improve upon probability-based data collections. This can be particularly fruitful for studies with modest budgets or small sample sizes, where the greatest gains in efficiency can be achieved.
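One way to read "supplying the NPS information through the prior" is the following minimal sketch: fit a logistic regression on the nonprobability sample, then use that estimate as the mean of a Gaussian prior when obtaining the MAP estimate on the probability sample. All data here are simulated, and the Gaussian-prior choice is an illustrative assumption, not the authors' implementation or their compared priors.

```python
# Minimal sketch of NPS-informed Bayesian logistic regression (illustrative
# assumptions throughout; not the paper's method).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
beta_true = np.array([-0.5, 1.0])  # intercept and slope of the simulated model

def simulate(n):
    """Draw n observations from a logistic regression model."""
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    p = 1 / (1 + np.exp(-X @ beta_true))
    return X, rng.binomial(1, p)

def neg_log_post(beta, X, y, prior_mean, prior_var):
    """Negative log-posterior: logistic likelihood + Gaussian prior."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0, eta))  # stable log(1 + e^eta)
    logprior = -0.5 * np.sum((beta - prior_mean) ** 2) / prior_var
    return -(loglik + logprior)

X_nps, y_nps = simulate(2000)   # large, cheap nonprobability sample
X_ps, y_ps = simulate(200)      # small probability sample

# Step 1: NPS estimate (near-flat prior, i.e. plain MLE) becomes the prior mean.
nps_fit = minimize(neg_log_post, np.zeros(2),
                   args=(X_nps, y_nps, np.zeros(2), 1e6))

# Step 2: MAP estimate on the PS under the NPS-informed Gaussian prior.
ps_fit = minimize(neg_log_post, np.zeros(2),
                  args=(X_ps, y_ps, nps_fit.x, 1.0))
print(ps_fit.x)  # small-sample PS estimate, shrunk toward the NPS estimate
```

The prior variance (here 1.0) governs the trade-off the abstract describes: a tight prior leans on the cheap NPS data, a diffuse one defers to the PS; in the simulation above both samples come from the same model, whereas the paper's selection scenarios are precisely the cases where they differ.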