Conference Agenda

Session Overview
Session
6.4: LLMs as Analysis Tools
Time:
Tuesday, 01/Apr/2025:
5:00pm - 6:00pm

Session Chair: Jeldrik Bakker, Statistics Netherlands, The Netherlands
Location: Hörsaal D


Presentations

Going Multimodal: Challenges (and Opportunities) of Streamlining Deliverable Production with AI

Georg Wittenburg1, Sophia McDonnell2

1Inspirient; 2Verian

Relevance & Research Question

Among end clients and decision makers, each individual engages differently with the results of market research studies or opinion polls: some read the headlines, some look at the charts, some read the entire report. The Artificial Intelligence (AI) community has made strides in automating text generation, but its promise of efficiency gains comes with the caveat of lacking trustworthiness. Hence, for this contribution we ask three questions: How can we leverage AI to accurately describe our quantitative results? How can we tune this output so that it helps us produce reports more efficiently? And how can we ensure that AI-generated text can be trusted to be correct?

Methods & Data

Verian Germany and Inspirient have worked together for the past three years to make Generative AI applicable to quantitative survey data through automated statistical reasoning. These prior results comprise both visual output of analyses (incl. charts) and corresponding formal chains of reasoning steps, which ensure that results can be linked back to the source data and thus trusted. We now combine these into “speaker notes” for a Large Language Model (LLM), which we then utilize to generate descriptive textual output for each analytical result.
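To illustrate the idea of combining analytical results and reasoning chains into LLM “speaker notes”, the following minimal sketch assembles such notes from one result. The field names and note format here are assumptions for illustration, not the authors’ actual schema:

```python
# Hypothetical sketch: turning one analytical result (chart data plus
# its formal reasoning chain) into textual "speaker notes" that can be
# passed to an LLM as grounded context for text generation.

def build_speaker_notes(result: dict) -> str:
    """Serialize one analytical result as speaker notes for an LLM."""
    lines = [f"Chart: {result['title']}"]
    for label, value in result["data"].items():
        lines.append(f"- {label}: {value}%")
    lines.append("Reasoning: " + " -> ".join(result["reasoning"]))
    return "\n".join(lines)

# Illustrative (made-up) survey result:
example = {
    "title": "Preferred news source by age group",
    "data": {"18-29: social media": 54, "60+: television": 61},
    "reasoning": [
        "filter: completed interviews only",
        "cross-tab: age group x news source",
        "test: chi-square significant at p < 0.05",
    ],
}
notes = build_speaker_notes(example)
```

Because the notes carry both the figures and the reasoning steps, any statement the LLM generates from them can in principle be traced back to the source analysis.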

Results

Our system is able to generate descriptive text for typical charts that one may find in a survey report. We explain how to set up LLMs so that their output accurately links back to their speaker notes, and how to check for this as part of post-processing. In our evaluation, we illustrate which AI speaker notes are required for which kind of output, which aspects can be controlled via prompting, and to what extent client-ready output is achievable.
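One plausible form of the post-processing check alluded to above is to verify that every figure quoted in the generated text also occurs in the speaker notes. This is a hedged sketch of such a control, not the authors’ actual implementation:

```python
import re

# Hypothetical post-processing check: flag any number the LLM quotes
# that does not appear in its speaker notes, so ungrounded output can
# be rejected before it reaches a deliverable.

def unsupported_numbers(generated: str, speaker_notes: str) -> set:
    """Return numbers quoted in the text but absent from the notes."""
    quoted = set(re.findall(r"\d+(?:\.\d+)?", generated))
    grounded = set(re.findall(r"\d+(?:\.\d+)?", speaker_notes))
    return quoted - grounded

# Illustrative (made-up) example:
notes = "Chart: Brand awareness. - Brand A: 72% - Brand B: 45%"
ok_text = "Brand A leads with 72%, well ahead of Brand B at 45%."
bad_text = "Brand A leads with 78%, ahead of Brand B at 45%."
```

Here `unsupported_numbers(ok_text, notes)` is empty, while the second text is flagged for the hallucinated 78%.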

Added Value

While our approach does not match the nuanced writing style and proficiency of an experienced human researcher, we can claim with confidence that the speed-up in getting to draft-level output is tremendous – in particular for lengthy reports. We thus envision a setup in which researchers merely need to fine-tune an AI-written draft report, incl. charts and accompanying text, while knowing that the factual statements in this deliverable can be trusted.



Meet Your New Client: Writing Reports for AIs

Georg Wittenburg1, Paul Simmering2, Oliver Tabino2

1Inspirient; 2Q Agentur für Forschung

Relevance & Research Question

As organizations adopt Retrieval-Augmented Generation (RAG) for their Knowledge Management Systems (KMS), traditional market research deliverables face new functional demands. While PDFs of reports and presentation slides have served human readers effectively, they are now also “read” by AI systems to answer questions from human users – a trend that will only intensify going forward. In order to future-proof the reports that are delivered today, this study evaluates the information loss incurred when transferring market research insights into RAG systems through different delivery formats. This open question emerged from a discussion between market research buyers and suppliers at the DGOF KI Forum.

Methods & Data

We frame the transfer of information, incl. research insights, into clients’ KMS as a signal processing problem. The fidelity of the transfer depends on the data format: some formats, e.g., pictures of charts, incur an information loss, while other formats, e.g., tables, do not. We model this loss using benchmarks for information extraction from different file formats and from graphs. Further, we assess the needs served by current reporting formats and contrast them with the new needs arising from RAG. This is done through expert interviews and an analysis of research reports from different institutes.
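The signal-processing framing can be made concrete with a toy loss model: each delivery format retains some fraction of the data points it carries. The rates below are made-up placeholders for illustration only, not the benchmark values from the study:

```python
# Toy model of format-dependent information loss during RAG ingestion.
# The extraction rates are illustrative assumptions, not measured data.

EXTRACTION_RATE = {
    "table": 1.00,        # machine-readable, effectively lossless
    "plain_text": 0.95,   # mostly recoverable
    "styled_slide": 0.70, # layout and styling obscure content
    "chart_image": 0.40,  # values must be inferred from pixels
}

def recovered(n_datapoints: int, fmt: str) -> int:
    """Expected number of data points surviving ingestion via a format."""
    return round(n_datapoints * EXTRACTION_RATE[fmt])
```

Under these assumed rates, a 200-point result set survives a tabular transfer intact but loses more than half of its data points when delivered only as chart images.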

Results

Findings indicate that classic formats, while valuable for human interpretation, are not optimal for AI systems. Key limitations include difficulties in extracting information from graphs and styled slides, which lead to altered, de-contextualized, or lost information. Text-heavy reports offer greater compatibility, yet are not optimal either, e.g., when methodology is presented separately from results. Our study suggests that transitioning to complementary, special-purpose deliverables designed explicitly for AI enhances the retrieval accuracy of research insights within the KMS, and thus for the client.
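One way such a special-purpose deliverable might look: each finding is emitted as a self-contained record that carries its own methodology, so retrieval in a RAG system never de-contextualizes it from how it was obtained. The schema below is an assumption for illustration, not a format proposed in the study:

```python
import json

# Hypothetical AI-facing deliverable: one self-describing JSON record
# per finding, with the methodology attached to every record so that
# retrieved chunks remain interpretable on their own.

def to_rag_records(findings: list, methodology: str) -> list:
    """Serialize findings as self-contained JSON lines for KMS ingestion."""
    return [json.dumps({**f, "methodology": methodology}) for f in findings]

# Illustrative (made-up) example:
records = to_rag_records(
    [{"finding": "Brand A leads in awareness", "value": 72}],
    "online panel, n=1000, fielded Q3",
)
```

Each record round-trips cleanly through `json.loads`, and a retriever that surfaces any single record also surfaces its methodological context.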

Added Value

The choice of reporting format is critical for delivering insights to market research clients, especially now that these reports will also be consumed by AI. This study yields insights into the new demands on, and improved formats for, reports from suppliers. It also supports buyers of reports in assessing proposals and in effectively ingesting results into their KMS for optimal information retrieval going forward.



 
Conference: GOR 25