The study of polarization, conflict, and ideological divergence has long challenged scholars across media, communication, and political science. Understanding these phenomena requires engaging with fundamental questions about how opinions are formed, reinforced, and contested in public discourse, and how language and discourse can carry multiple, often competing, meanings and interpretations. Traditional content analysis methods often struggle to capture this complexity - whether in large-scale media narratives, open-ended survey responses, or political discourse on social media. Computational approaches offer new possibilities, but also raise critical concerns about validity, interpretability, and methodological rigor (Baden et al., 2022; Boumans & Trilling, 2016).
Large Language Models (LLMs) have become increasingly central to communication research, with their capacity to process vast amounts of text, identify underlying patterns, and assist in qualitative coding (Chew et al., 2023; Alizadeh et al., 2025). Researchers have explored their use in a variety of tasks, from detecting polarization in open-ended survey responses to analyzing media frames and political discourse (DiGiuseppe & Flynn, 2025; Marino & Giglietto, 2024). However, the integration of LLMs into content analysis presents challenges: How well do they align with human interpretations? Can they enhance research beyond automation? And what role should they play in investigating contested or ambiguous meanings (Pilny et al., 2024; Gunes & Florczak, 2025)?
This panel moves beyond discussions of mere optimization and accuracy to critically examine how LLMs can be used to engage with conflict, disagreement, and interpretive diversity. Rather than treating them as tools for imposing consensus, we ask how researchers can use and interact with them to reveal tensions, challenge assumptions, and contribute to new methodologies (Dai et al., 2023; De Paoli, 2024). Across four studies, we assess how LLMs mediate, amplify, or reframe scholarly debates about methods for analyzing contentious political and social issues. Our discussion examines both the benefits and risks of these approaches, raising questions about the role of AI in media and communication methodologies.
Paper 1 presents a computational approach to measuring issue polarization from open-ended survey responses, leveraging LLMs to systematically code viewpoints on contentious topics such as climate change, trans rights, or political discourse on Ukraine. While traditional polarization research often relies on closed-ended survey measures, this study explores how LLMs can introduce methodological complexity by capturing nuanced, interpretive tensions in unstructured responses. The research highlights how LLMs not only enable large-scale automated coding but also challenge conventional measurement frameworks by accommodating ambiguity and ideological fluidity. Preliminary results underscore the potential of LLMs to expand analytical possibilities beyond traditional survey measures, reframing how scholars engage with disagreement and conflict in media and communication studies.
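To make the coding step concrete, the following is a minimal sketch of LLM-assisted stance coding of open-ended responses, assuming the OpenAI Python client, a hypothetical gpt-4o-mini model, and illustrative category labels; the paper's actual prompts, model, and coding scheme may differ.

```python
# Minimal sketch: LLM-assisted coding of open-ended survey responses
# into stance categories (illustrative labels, not the paper's scheme).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = ["supportive", "opposed", "ambivalent", "unclear"]

def code_response(topic: str, response_text: str) -> str:
    """Ask the model to assign one stance label to an open-ended response."""
    prompt = (
        f"Survey topic: {topic}\n"
        f"Respondent's answer: \"{response_text}\"\n\n"
        f"Classify the stance as one of: {', '.join(CATEGORIES)}. "
        "Answer with the label only."
    )
    completion = client.chat.completions.create(
        model="gpt-4o-mini",          # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                # deterministic coding for reliability checks
    )
    return completion.choices[0].message.content.strip().lower()

# Example usage on a single response
print(code_response("climate change", "It's real, but the policies go too far."))
```

In practice, such labels would be compared against human codings to estimate agreement before any large-scale application.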
Paper 2 examines how LLMs can expand the scope of content analysis and assist researchers in analyzing framing. The study applies Meta’s Llama-3 model in a two-stage approach to study climate movements in Australian news coverage, using few-shot prompting first to extract frame elements - such as problem definitions, causes, and blame attributions - and then to synthesize these elements into coherent frames. While human coders tended to construct more issue-specific and varied frames, LLM-generated outputs were generally broader in scope but more internally consistent across the dataset.
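A condensed sketch of what such a two-stage pipeline could look like with Hugging Face transformers and the instruction-tuned Llama-3 8B checkpoint; the few-shot example, frame-element schema, and decoding settings here are illustrative assumptions, not the study's actual prompts.

```python
# Two-stage frame analysis sketch: (1) extract frame elements, (2) synthesize a frame.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # gated model; access assumed
)

FEW_SHOT = (
    "Article: 'Protesters blocked the port, demanding an end to coal exports.'\n"
    "Problem definition: continued coal exports\n"
    "Cause: government inaction on emissions\n"
    "Blame attribution: federal government\n\n"
)

def extract_elements(article_text: str) -> str:
    """Stage 1: few-shot extraction of frame elements from a news article."""
    messages = [{"role": "user", "content": (
        "Extract the problem definition, cause, and blame attribution.\n\n"
        + FEW_SHOT + f"Article: '{article_text}'\n"
    )}]
    out = generator(messages, max_new_tokens=120, do_sample=False)
    return out[0]["generated_text"][-1]["content"]  # assistant reply

def synthesize_frame(elements: str) -> str:
    """Stage 2: synthesize the extracted elements into a single named frame."""
    messages = [{"role": "user", "content": (
        "Given these frame elements, propose one short frame label "
        f"and a one-sentence description:\n{elements}"
    )}]
    out = generator(messages, max_new_tokens=80, do_sample=False)
    return out[0]["generated_text"][-1]["content"]

article = "Thousands joined the climate strike, accusing ministers of stalling on targets."
print(synthesize_frame(extract_elements(article)))
```

Separating extraction from synthesis mirrors the paper's two-stage design: stage-one outputs remain auditable against the source text before they are abstracted into frames.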
Paper 3 presents a literature review examining how LLMs are transforming content analysis workflows in the social sciences. It identifies four key modes of LLM integration: scalable coders, human-assistive collaborators, autonomous decision-makers, and tools for semantic clustering. The study highlights the ongoing challenges of ensuring interpretability, reliability, and epistemic authority when LLMs are applied to human-generated texts.
Paper 4 applies LLMs to investigate the role of political narratives and user engagement on social media during Brazil’s 2022 presidential election and the January 8, 2023, coup attempt. The study analyzes over 12 million social media posts, clustering content based on sentiment, audience reactions, and dissemination patterns to assess how different narratives are amplified. The research examines whether the framing of political content influences audience interaction and engagement levels.
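As an illustration of the clustering step, the sketch below groups posts by simple sentiment and engagement features using scikit-learn; the feature set, number of clusters, and choice of k-means are assumptions for demonstration rather than the study's actual pipeline.

```python
# Sketch: clustering social media posts by sentiment and engagement features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Each row is one post: [sentiment_score, likes, shares, replies]
# (illustrative toy values standing in for the 12M-post dataset).
features = np.array([
    [0.8, 1200, 300, 45],
    [-0.6, 90, 15, 200],
    [0.1, 40, 5, 3],
    [-0.9, 5000, 2200, 800],
    [0.7, 30, 2, 1],
])

scaled = StandardScaler().fit_transform(features)   # put features on a common scale
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaled)

# Cluster labels can then be cross-tabulated with narrative framing categories
# to ask whether certain framings attract systematically higher engagement.
print(kmeans.labels_)
```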
Together, these studies push the boundaries of how LLMs are integrated into research on political communication, polarization, and media analysis. They provide an assessment of LLMs’ methodological potential and risks - not just as tools for efficiency, but as avenues for rethinking how we engage with contested meanings in communication research.