Conference Agenda
Overview and details of the sessions of this conference.
Session Overview
Session
7.1: AI and qualitative research
Presentations
AI-Conducted User Research: From Weeks to Hours Through Autonomous Interviewing
Userflix, Germany

Relevance & Research Question

Methods & Data
We developed Userflix, an end-to-end AI platform for qualitative research automation utilizing large language models fine-tuned for research methodology. The system implements: (1) AI-guided study setup through conversational project briefing, (2) real-time audio-to-audio interviews with dynamic follow-up questions, (3) visual stimuli presentation, (4) automatic transcription and analysis, and (5) automated insight extraction with traceability to source interviews. Evaluation pilots (Q3-Q4 2025) are ongoing with Nielsen Norman Group (UX research methodology assessment), Innofact and Skopos (agency workflow integration), and IKEA (multilingual European research). Partners are systematically comparing AI versus human interview quality, transcript depth, and participant experience.

Results
Early evaluation feedback demonstrates a 95% time reduction (weeks to hours) and a 90% cost reduction (€36/hour vs €500-750/interview). The AI successfully conducts multilingual interviews, generates contextual follow-up questions, and enables unprecedented scale (50-500 interviews vs the traditional 8-12), allowing statistical pattern recognition in qualitative data. Nielsen Norman Group is assessing methodological soundness against established standards. Agency partners report high participant comfort, with some participants showing greater openness on sensitive topics with AI interviewers. The platform's 24/7 availability increased completion rates by 40% compared to scheduled interviews. Key advantages include parallel execution, elimination of interviewer bias, and consistent quality.

Added Value
This research demonstrates that AI can augment human researchers by handling routine execution, enabling focus on strategic interpretation. The "quantified qualitative" approach (conducting 50-500 interviews instead of 8-12) bridges qualitative depth with quantitative validation, addressing the long-standing trade-off between scale and depth. For the online research community, this represents making comprehensive qualitative research accessible to broader audiences while elevating professional researchers to strategic roles. Evaluation results will provide evidence-based guidance for AI research tool adoption and quality standards.
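The five-stage pipeline above is described only at a conceptual level. The sketch below shows how such an orchestration could be wired together under stated assumptions: StudyBrief, conduct_interview, extract_insights, and run_study are hypothetical names, and the stubs stand in for the audio, transcription, and LLM components the abstract mentions; this is not the Userflix implementation.

```python
# Hypothetical sketch of the five-stage flow described above.
# All names are illustrative assumptions, not the Userflix API.
from dataclasses import dataclass, field

@dataclass
class StudyBrief:
    goals: str                                        # from the conversational project briefing (stage 1)
    guide: list[str]                                  # seed questions for the AI interviewer
    stimuli: list[str] = field(default_factory=list)  # visual stimuli to show (stage 3)

def conduct_interview(brief: StudyBrief, participant: str) -> list[tuple[str, str]]:
    """Stages 2 and 4, as a placeholder: ask each guide question and record a transcript.
    A real system would run audio-to-audio turns and generate dynamic follow-ups."""
    transcript = []
    for question in brief.guide:
        transcript.append(("AI", question))
        transcript.append((participant, f"<answer to: {question}>"))
    return transcript

def extract_insights(transcripts: list[list[tuple[str, str]]]) -> list[dict]:
    """Stage 5 placeholder: one 'insight' per interview, keeping a source index
    so every finding stays traceable to the interview it came from."""
    return [{"source_interview": i, "summary": f"{len(t) // 2} answers recorded"}
            for i, t in enumerate(transcripts)]

def run_study(brief: StudyBrief, participants: list[str]) -> dict:
    """Orchestrate the end-to-end flow; interviews would run in parallel at scale."""
    transcripts = [conduct_interview(brief, p) for p in participants]
    return {"transcripts": transcripts, "insights": extract_insights(transcripts)}

if __name__ == "__main__":
    brief = StudyBrief(goals="Understand checkout friction",
                       guide=["How did you find the checkout flow?",
                              "What almost made you abandon the purchase?"])
    print(run_study(brief, participants=["P01", "P02"])["insights"])
```

Keeping the source-interview index attached to every extracted insight is the kind of mechanism the abstract refers to as traceability.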
Augmenting Qualitative Research with AI: Topic Modeling with Agentic RAG
Freie Universität Berlin, Germany; Deutsche Hochschule, Germany; Lee Kong Chian School of Business, Singapore Management University, Singapore

Relevance & Research Question
Large Language Models (LLMs) increasingly shape qualitative and computational social science research, yet their use for text analysis via topic modeling remains limited by low transparency, unstable outputs, and prompt sensitivity. Traditional approaches such as LDA often produce overlapping, generic topics, whereas LLM prompting lacks consistency and reproducibility. We introduce Agentic Retrieval-Augmented Generation (Agentic RAG), a multi-step, agent-based LLM pipeline designed to improve efficiency, transparency, consistency, and theoretical alignment in qualitative text analysis. Our study addresses two research questions: (1) How does Agentic RAG perform compared to LDA and LLM prompting in terms of topic validity, granularity, and reliability across datasets? (2) How can Agentic RAG be extended to enable theory advancement through "lens-based" retrieval?

Methods & Data
We benchmark Agentic RAG against LDA and LLM prompting using three heterogeneous datasets: (i) the 20 Newsgroups corpus (online communication), (ii) the VAXX Twitter/X dataset (vaccine hesitancy), and (iii) a qualitative interview corpus from an organizational research context. Agentic RAG is implemented as a model-agnostic, agent-based pipeline that orchestrates retrieval, data analysis, and topic generation. In our analysis, Agentic RAG was applied to produce topics using different GPT models (GPT-3.5, GPT-4o, GPT-5). We evaluate all methods using standardized metrics: topic validity, topic overlap, and inter-round semantic reliability, computed via cosine similarity measures that extend prior topic quality metrics.

Results
Across datasets, Agentic RAG consistently yields high-validity topics with minimal redundancy compared to both LDA and LLM prompting. Whereas LDA and LLM prompting perform well only on specific datasets, Agentic RAG maintains performance across heterogeneous data architectures while being more transparent and efficient. Based on these results, we derive a structured trade-off table that summarizes the strengths and limitations of all approaches, providing qualitative and computational scholars with clear guidance for selecting an appropriate text analysis method.

Added Value
Our findings demonstrate that Agentic RAG offers a scalable, transparent, and reproducible approach to qualitative text analysis. The method strengthens the rigor of LLM-based qualitative research by enabling more stable outputs, explicit retrieval reasoning, and broader options for assessing topic quality.
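The abstract names its metrics but not their formulas, so the following is a minimal sketch of how topic overlap and inter-round semantic reliability might be computed from cosine similarities; embed_topics, topic_overlap, and inter_round_reliability are illustrative names, and TF-IDF vectors stand in for whatever embeddings the authors actually use.

```python
# Illustrative cosine-similarity topic metrics; the exact measures used in the
# study are not given in the abstract, so everything below is an assumption.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def embed_topics(topics: list[str]) -> np.ndarray:
    """Stand-in embedding: TF-IDF over topic descriptions. A real setup would
    more likely use sentence embeddings from a language model."""
    return TfidfVectorizer().fit_transform(topics).toarray()

def topic_overlap(topics: list[str]) -> float:
    """Mean pairwise similarity within one run: lower values = less redundancy."""
    sims = cosine_similarity(embed_topics(topics))
    n = len(topics)
    return float(sims[~np.eye(n, dtype=bool)].mean())

def inter_round_reliability(round_a: list[str], round_b: list[str]) -> float:
    """Match each topic from round A to its most similar topic in round B and
    average the scores: higher values = more stable output across rounds."""
    vec = TfidfVectorizer().fit(round_a + round_b)
    sims = cosine_similarity(vec.transform(round_a), vec.transform(round_b))
    return float(sims.max(axis=1).mean())

run1 = ["vaccine side effects", "trust in health authorities", "misinformation on social media"]
run2 = ["side effects and safety", "institutional trust", "online misinformation"]
print(topic_overlap(run1), inter_round_reliability(run1, run2))
```

Matching each topic to its nearest counterpart in the other run, rather than averaging all pairs, keeps such a reliability score insensitive to topic ordering.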
Reinventing Online Qualitative Methods: Lessons from an AI-Assisted Study on Pathways Out of Loneliness
Hochschule Trier, Germany; Bilendi & respondi

Relevance & Research Question
Loneliness has emerged as a growing social and public health concern that increasingly affects younger age groups. In response, the state government of North Rhine-Westphalia has launched multiple initiatives and established a competence network to counteract loneliness. Against this backdrop, the present study examines the role of digital technologies in both the emergence and the alleviation of loneliness. The research focuses on three interconnected questions: (1) How do technological environments, ranging from face-to-face communication tools to digital social platforms, shape experiences of social connectedness and emotional well-being, and to what extent may they contribute to or mitigate feelings of loneliness? (2) What role do so-called third places play in individuals' perceptions of social belonging and connectedness? (3) How are digital media used to build and maintain social relationships, and under what conditions are digitally mediated interactions transferred into offline contexts?

Methods & Data
The study employs a qualitative research design with more than 150 participants and was conducted using BARI, the qualitative AI developed by Bilendi. Participants engaged via WhatsApp or Facebook Messenger over roughly one week. BARI supported almost the entire research process, including project flow, moderation, data analysis, and reporting. The AI-based moderation is methodologically notable, as the absence of a human interviewer may foster greater openness when discussing sensitive topics such as loneliness, potentially reducing social desirability bias. This setup allowed the collection of rich narrative data while simultaneously enabling an empirical assessment of the methodological implications of AI-supported qualitative research.

Results
Beyond substantive insights into perceptions and experiences of loneliness, the presentation will highlight methodological findings regarding the strengths, weaknesses, and challenges of AI-assisted qualitative research. The integration of participant feedback and researcher reflection will be shown to play a central role in improving the AI's performance and refining its methodological contribution to future research.

Added Value
The study provides dual added value: empirically, it offers new insights into how digital media and social spaces shape loneliness; methodologically, it delivers one of the first systematic assessments of AI-moderated qualitative fieldwork, demonstrating both its potential and its limitations for scalable, participant-centered online research.
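The abstract does not detail how BARI runs its week-long messenger conversations, so the sketch below only illustrates the general shape of an AI-moderated, chat-based interview loop; the guide questions, channel functions, and probing heuristic are assumptions for illustration and not Bilendi's implementation.

```python
# Minimal sketch of an AI-moderated, messenger-style interview loop.
# BARI is proprietary; everything here (send_message, await_reply,
# needs_probe, generate_probe) is an assumed interface for illustration only.
import time

GUIDE = [
    "When during a typical week do you feel most connected to other people?",
    "Which apps or platforms do you use to stay in touch, and how do they help?",
    "Can you describe a 'third place' outside home and work where you feel you belong?",
]

def send_message(participant_id: str, text: str) -> None:
    print(f"-> {participant_id}: {text}")            # stand-in for a WhatsApp/Messenger API call

def await_reply(participant_id: str) -> str:
    return input(f"<- {participant_id}: ")           # stand-in for an incoming-message webhook

def needs_probe(reply: str) -> bool:
    return len(reply.split()) < 10                   # toy heuristic; an LLM would judge answer depth

def generate_probe(reply: str) -> str:
    return "Could you tell me a bit more about that?"  # an LLM would tailor this to the reply

def moderate(participant_id: str) -> list[dict]:
    log = []
    for question in GUIDE:
        send_message(participant_id, question)
        reply = await_reply(participant_id)
        if needs_probe(reply):
            send_message(participant_id, generate_probe(reply))
            reply += " " + await_reply(participant_id)
        log.append({"question": question, "answer": reply, "ts": time.time()})
    return log

if __name__ == "__main__":
    transcript = moderate("P-042")
    print(f"Collected {len(transcript)} answered questions.")
```

In a deployed system the blocking input() call would be replaced by the messenger platform's incoming-message webhook, so many such loops could run asynchronously across participants over the week-long field phase.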