Conference Agenda

Session Overview
Session
Transformative Tools, Emerging Challenges: Empirical & Practical Experiences with LLMs for Text Classification and Annotation (panel proposal)
Time:
Saturday, 02/Nov/2024:
11:00am - 12:30pm

Location: Alfred Denny Conf Room

Presentations

Transformative Tools, Emerging Challenges: Empirical and Practical Experiences with Large Language Models for Text Classification and Annotation in Communication Studies

Tariq Choucair1, Ahrabhi Kathirgamalingam3, Fabienne Lind3, Jana Bernhard3, Hajo Boomgaarden3, Bruna Oliveira2, Rousiley Maia2, Laura Vodden1, Katharina Esau1, Axel Bruns1, Sebastian Svegaard1, Kate Farfan1, Hendrik Meyer4, Cornelius Puschmann5, Michael Brüggemann4, Fabio Giglietto6, Luca Rossi7, Nicola Righetti6, Giada Marino6

1Queensland University of Technology, Australia; 2Federal University of Minas Gerais, Brazil; 3University of Vienna, Austria; 4University of Hamburg, Germany; 5University of Bremen, Germany; 6University of Urbino, Italy; 7IT University of Copenhagen, Denmark

Advances in large language models (LLMs) have opened up important research opportunities within the field of communication studies. They offer the capacity to conduct large-scale content classification and annotation with little computational expertise and reduced manual coding effort, potentially allowing researchers in the social sciences to explore understudied topics (Bail, 2023; Chang et al., 2024). Because of how they work and their training across vast domains and languages, LLMs also potentially unlock more generalizable, complex, and diverse analyses of communication materials than previous computational tools and approaches (Chang et al., 2024). These materials encompass a wide spectrum, ranging from journalistic content to the digital discourse of political actors and social media conversation threads. At the same time, LLMs raise important concerns about potential biases, data privacy, model transparency, environmental impact, and power imbalances (Jameel et al., 2020; Fecher et al., 2023). Although widely discussed, LLMs as a recent topic still require deeper theoretical elaboration and dialogue between empirical investigations, specifically for communication scholars (Gil de Zúñiga et al., 2024; Guzman and Lewis, 2020).

Our panel assembles a collection of case studies that harness LLMs to tackle text classification and annotation tasks related to media and communication problems, issues, and topics. These research papers explore: (a) pipeline structuring: diverse methodologies for structuring effective pipelines tailored to this form of analysis; (b) tool and model comparison: comparisons of the various LLM tools and models available for text classification and annotation, highlighting their strengths and weaknesses; (c) optimal variables and tasks: identifying the variables and tasks where LLMs demonstrate exceptional performance and reliability; (d) limitations: discussions of the existing limitations of these tools, including limitations related to specific tasks, variables, languages, and data formats; (e) prompt development: strategies for developing, adapting, and adjusting prompts that yield better results for specific tasks; and (f) ethical and political dimensions: an examination of the ethical and political considerations inherent in the deployment of LLMs in communication research.

This panel brings together the efforts of research groups from around the world not only to use LLMs in communication studies, but also to reflect on that use. Together, these studies point to important avenues for the field to think about different approaches to validity, ethics, and truthful cooperation between humans and computational models, without erasing the challenges of doing so, or the disagreements involved: not only between humans, but between humans and their computational language models as well.



Conference: AoIR2024