Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
B6.1: Automatic analysis of answers to open-ended questions in surveys
Time:
Friday, 23/Feb/2024:
2:00pm - 3:00pm

Session Chair: Barbara Felderer, GESIS, Germany
Location: Seminar 2 (Room 1.02)

Rheinische Fachhochschule Köln Campus Vogelsanger Straße Vogelsanger Str. 295 50825 Cologne Germany

Presentations

Using the Large Language Model BERT to categorize open-ended responses to the "most important political problem" in the German Longitudinal Election Study (GLES)

Julia Susanne Weiß, Jan Marquardt

GESIS, Germany

Relevance & Research Question

Open-ended survey questions are crucial, e.g., for capturing unpredictable trends, but the resulting unstructured text data poses challenges. Quantitative use requires categorization, a process that is labor-intensive in both cost and time, especially for large datasets. The nearly 400,000 uncoded mentions in the German Longitudinal Election Study (GLES) from 2018 to 2022 prompted us to explore new ways of coding. Our objective was to test various machine learning approaches to determine the most efficient and cost-effective long-term solution for coding responses while maintaining high quality. Which approach is best suited for the long-term coding of open-ended mentions of the "most important political problem" in the GLES?

Methods & Data

Pre-2018, GLES data was coded manually. Shifting to a (partially) automated process involved revising the codebook. Subsequently, the extensive dataset of nearly 400,000 open responses to the question on the "most important political problem" in the GLES surveys conducted between 2018 and 2022 was used. Coding was performed with the Large Language Model BERT (Bidirectional Encoder Representations from Transformers). Throughout the process, we tested a range of important aspects (hyperparameter fine-tuning, downsizing the “other” category, simulating different amounts of training data, quality control across survey modes, and using training data from 2017) before arriving at the final implementation.
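One of the aspects listed above, simulating different amounts of training data, can be sketched as a learning-curve check. This is a minimal illustration only: a TF-IDF plus logistic-regression classifier stands in for BERT, and the texts and labels are invented toy examples, not GLES mentions.

```python
# Sketch: how coding quality varies with the amount of manually coded
# training data. A TF-IDF + logistic regression classifier stands in
# for BERT; the labeled examples are toy stand-ins, not GLES data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

texts = ["inflation prices rising", "climate change emissions",
         "migration policy borders", "rising cost of living",
         "co2 emissions heat", "asylum and migration",
         "prices and inflation", "global warming climate"] * 10
labels = [0, 1, 2, 0, 1, 2, 0, 1] * 10

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# Train on increasing slices, evaluate on the held-out remainder
for n_train in (8, 24, 40):
    clf = LogisticRegression(max_iter=1000).fit(X[:n_train], labels[:n_train])
    pred = clf.predict(X[40:])
    print(n_train, round(f1_score(labels[40:], pred, average="micro"), 3))
```

In the real setting one would plot such a curve to decide how many of the 43,000 manually coded mentions are actually needed for stable model quality.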
Results

The "new" codebook already demonstrates high quality and consistency, evident from its Fleiss Kappa value of 0.90 for the matching of individual codes. Utilizing this refined codebook as a foundation, 43,000 mentions were manually coded, serving as the training dataset for BERT. The final implementation of coding for the extensive dataset of almost 400,000 mentions using BERT yields excellent results, with a 0/1 loss of 0.069, a Micro F1 score of 0.946 and a Macro F1 score of 0.878.
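The three quality measures reported above (0/1 loss, micro F1, macro F1) are standard multi-class classification metrics and can be computed with scikit-learn. The label arrays below are illustrative stand-ins, not GLES codes.

```python
# Sketch: computing the evaluation metrics reported above with
# scikit-learn, using invented example labels.
from sklearn.metrics import f1_score, zero_one_loss

y_true = [0, 1, 2, 2, 1, 0, 3, 3, 2, 1]   # hand-coded categories
y_pred = [0, 1, 2, 1, 1, 0, 3, 3, 2, 2]   # model predictions

print("0/1 loss :", zero_one_loss(y_true, y_pred))                  # share of misclassified mentions
print("Micro F1 :", f1_score(y_true, y_pred, average="micro"))      # weights every mention equally
print("Macro F1 :", f1_score(y_true, y_pred, average="macro"))      # weights every category equally
```

Macro F1 being lower than micro F1, as in the reported results, typically indicates weaker performance on rare categories.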
Added Value

The outcomes highlight the efficacy of the (partially) automated coding approach, emphasizing accuracy with the refined codebook and BERT's robust performance. This strategic shift towards advanced language models signifies an innovative departure from traditional manual methods, emphasizing efficiency in the coding process.



The Genesis of Systematic Analysis Methods Using AI: An Explorative Case Study

Stephanie Gaaw, Cathleen M. Stuetzer, Maznev Petko

TU Dresden, Germany

Relevance & Research Question

The analysis of open-ended questions in large-scale surveys can provide detailed insights into respondents' views that often cannot be captured with closed-ended questions. However, given the large number of respondents, reviewing the answers to open-ended questions and preparing them as research results takes considerable resources. This contribution aims to show the potential benefits and limitations of using AI-based tools (e.g., ChatGPT) for analyzing open-ended questions in large-scale surveys. It thereby also highlights the challenge of conducting systematic analyses with AI.

Methods & Data
As part of a large-scale survey on the use of AI in higher education at a major German university, open-ended questions were included to provide insight into the perceived benefits and challenges for students and lecturers of using AI in higher education. The open-ended responses were then analyzed using a qualitative content analysis. In order to verify whether ChatGPT could be used to analyze the open-ended questions in a faster manner, while maintaining the same quality of results, we asked ChatGPT to analyze the responses in a way similar to our analytical process.
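Asking ChatGPT to mirror an analytical process amounts to encoding that process in a prompt. The following sketch shows what such a prompt might look like; the wording, the context line, and the `build_coding_prompt` helper are hypothetical illustrations, not the authors' actual prompt.

```python
# Sketch: constructing a prompt that asks a chat model to perform a
# qualitative content analysis. Instructions and examples are invented.
def build_coding_prompt(responses, context):
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
    return (
        f"Context: {context}\n"
        "Perform a qualitative content analysis of the survey responses "
        "below: derive categories inductively, describe each category "
        "briefly, and assign every response to exactly one category.\n\n"
        f"Responses:\n{numbered}"
    )

prompt = build_coding_prompt(
    ["AI helps me summarize readings.", "I worry about plagiarism."],
    context="Open-ended answers from a university survey on AI in teaching.",
)
print(prompt)
```

The prompt string would then be sent to the model via the chat interface or an API; stating the survey context explicitly up front is exactly the kind of re-prompting the authors report needing.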

Results
The results show a roadmap for letting ChatGPT analyze our open-ended data. In our case study, it produced categories and descriptions similar to those we obtained by qualitatively analyzing the data ourselves. However, 9 out of 10 times we had to re-prompt ChatGPT with the context of the analysis to get appropriate results. In addition, there were some minor differences in how items were sorted into their respective categories. Despite these limitations, in 80% of cases ChatGPT assigned the responses to the derived categories more accurately than our research team did in the qualitative analysis.

Added Value
This paper provides insight into how ChatGPT can be used to simplify and accelerate the standard process of qualitative analysis under certain circumstances. We will give insights into our prompts for ChatGPT, detailed findings from comparing its results with our own, and its limitations to contribute to the further development of systematic analysis methods using AI.



Insights from the Hypersphere - Embedding Analytics in Market Research

Lars Schmedeke, Tamara Keßler

SPLENDID Research, Germany

Relevance & Research Question:

In the intersection of qualitative and quantitative research, analyzing open-ended questions remains a significant challenge for data analysts. The incorporation of AI language models introduces the complex embedding space: a realm where semantics intertwine with mathematical principles. This paper explores how Embedding Analytics, a subset of explainable AI, can be utilized to decode and analyze open-ended questions effectively.

Methods & Data:

Our approach utilized the ada_V2 encoder to transform market research responses into spatial representations on the surface of a 1,536-dimensional hypersphere. This process enabled us to analyze semantic similarities using traditional statistics as well as advanced machine learning techniques. We employed K-Means Clustering for text grouping and respondent segmentation, and Gaussian Mixture Models for overarching topic analysis across numerous responses. Dimensional reduction through t-SNE facilitated the transformation of these complex data sets into more comprehensible 2D or 3D visual representations.
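The pipeline described above can be sketched end to end with scikit-learn. This is a minimal stand-in: random unit vectors (32-dimensional here, for speed) replace real ada_V2 text embeddings, which live on a 1,536-dimensional hypersphere.

```python
# Sketch of the embedding-analytics pipeline: hypersphere embeddings ->
# K-Means / Gaussian Mixture clustering -> t-SNE 2D visualization.
# Random vectors stand in for real ada_V2 text embeddings.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 32))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # project onto the unit hypersphere

# Text grouping / respondent segmentation
kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Overarching topic analysis with soft cluster membership
gmm_labels = GaussianMixture(n_components=4, random_state=0).fit_predict(X)

# Dimensional reduction to a 2D "cognitive constellation"
coords_2d = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(X)

print(kmeans_labels.shape, gmm_labels.shape, coords_2d.shape)
```

With real embeddings, the 2D coordinates would be plotted and the cluster labels used to color and interpret the resulting constellation.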

Results:

Utilizing OpenAI’s ada_V2 encoder, we successfully generated text embeddings that can be plausibly clustered based on semantic content, transcending barriers of language and text length. These clusters, formed via K-Means and Gaussian Mixture Models, effectively yield insightful and automated analyses from qualitative data. The two-dimensional “cognitive constellations” created through t-SNE offer clear and accessible visualizations of intricate knowledge domains, such as brand perception or public opinion.

Added Value:

This methodology allows for a precise numerical analysis of verbatim responses without the need for labor-intensive manual coding. It facilitates automated segmentation, simplification of complex data, and even enables qualitative data to drive prediction tasks. The rich, nuanced datasets derived from semantic complexity are suitable for robust analysis using a wide range of statistical methods, thereby enhancing the efficacy and depth of market research analysis.



Conference: GOR 24
Conference Software: ConfTool Pro 2.8.101
© 2001–2024 by Dr. H. Weinreich, Hamburg, Germany