Conference Agenda

Overview and details of the sessions of this conference.
Session Overview
Session
Moderation: Platform Approaches (traditional panel)
Time:
Friday, 01/Nov/2024:
3:30pm - 5:00pm

Session Chair: Christian Katzenbach
Location: Alfred Denny Conf Room

Presentations

Borderline Content and Platformised Speech Governance: Mapping TikTok’s Moderation Controversies in South and Southeast Asia

Diyi Liu

University of Oxford, United Kingdom

Content moderation comes with trade-offs and moral dilemmas, particularly for transnational platforms governing borderline content, where the boundaries of acceptability are subject to debate. While extensive research has explored the legality and legitimacy of platformised speech governance in democratic contexts, few studies address the complexities of less-than-democratic developing nations. Through socio-legal analysis and controversy mapping of TikTok’s localised moderation in South and Southeast Asia, this study examines how major actors negotiate the shifting boundaries of online speech. The analysis reveals that neither the platform nor regional states effectively govern borderline content. TikTok localises its moderation primarily out of pragmatic necessity rather than moral obligation, intentionally sidestepping contentious political controversies. Governments demonstrate a strong will to control online discourse, leveraging legal uncertainty to advance political interests. Local content governance thus relies on vague rationales of securitisation and morality. The contradictory goals of (de)politicising borderline moderation seemingly counterbalance each other, yet in practice lead to an accountability vacuum devoid of legitimate interests. Given the lack of normative common ground, the study highlights the significance of procedural justice and civic participation in mitigating rhetoric that rationalises the imposition of speech norms hinging on imbalanced political power.



POLITICAL AMBIGUITY IN PLATFORM GOVERNANCE: THE SOCIOTECHNICAL IMAGINARIES OF PLATFORMS IN CHINA

Fangyu Qing, Ngai Keung Chan

The Chinese University of Hong Kong, Hong Kong S.A.R. (China)

This article introduces the concept of “political ambiguity” (Zhan & Qin, 2017) as a background mechanism, or discursive origin, that explains the adaptive and fragmented (Hong & Xu, 2019) platform governance in China. The phenomenon is salient in China and has its roots in the “guerrilla policy style” of the 1930s and 1940s (Perry & Heilmann, 2011). We further draw on the notion of the “sociotechnical imaginary” (Jasanoff & Kim, 2009) to examine the metaphors, promises, warnings, perils, and other future visions articulated by the state. The notion resonates with political ambiguity in China, whose central policy was “schematic designs… started from a mathematical formula with ideal perfection” (Huang, 1986, p. 3).

We conducted a qualitative thematic analysis of eighty-one governmental documents issued between 2015 and 2022. As preliminary findings, we recognise an evolving and heterogeneous landscape of imaginaries and identify three layers of them: ontological, contested, and infrastructural. Ontological imaginaries reduce the uncertainty of emerging technologies and define their social position. Contested imaginaries refer to themes that are in competitive dialogue with each other. Infrastructural imaginaries depict how emerging technologies become the foundation of other policies and subsidise other political agendas. Together they constitute a directive yet ambiguous governance guideline of “inclusive, prudent, and customized regulation” (National Development and Reform Commission et al., 2017). By foregrounding state–market relations (Steinberg et al., 2024) in China, this study contributes to understanding how political ambiguity in platform governance is strategically (re)imagined, embraced, and contested.



GPT4 v The Oversight Board: Using large language models for content moderation

Nicolas Suzor, Lucinda Nelson

School of Law, Queensland University of Technology; QUT Digital Media Research Centre; ARC Centre of Excellence for Automated Decision-Making and Society

Large-scale automated content moderation on major social media platforms continues to be highly controversial. Moderation and curation are central to the value propositions that platforms provide, but companies have struggled to convincingly demonstrate that their automated systems are fair and effective. For a long time, the limitations of automated content classifiers in dealing with borderline cases have seemed intractable. With the recent expansion in the capabilities and availability of large language models, however, there is reason to suspect that more nuanced automated assessment of content in context may now be possible. In this paper, we set out to understand how the emergence of generative AI tools might transform industrial content moderation practices. We investigate whether the current generation of pre-trained foundation models may expand the established boundaries of the types of tasks that are considered amenable to automation in content moderation.

This paper presents the results of a pilot study into the potential use of GPT4 for content moderation. We use the hate speech decisions of Meta’s Oversight Board as examples of covert hate speech and counterspeech that have proven difficult for existing automated tools. Our preliminary results suggest that, given a generic prompt and Meta’s hate speech policies, GPT4 can approximate the decisions and accompanying explanations of the Oversight Board in almost all current cases. We interrogate several clear challenges and limitations, including the sensitivity of outputs to variations in prompting, options for validating answers, and generalisability to unseen content.
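The workflow the abstract describes — pairing a platform policy with a piece of content in a single prompt and asking the model for a decision plus explanation — can be sketched roughly as below. This is a minimal illustration, not the authors’ actual protocol: the prompt wording, the JSON answer format, and the helper names (`build_moderation_prompt`, `parse_decision`) are all hypothetical, and a stubbed reply stands in for the real GPT4 API call.

```python
import json
import re

def build_moderation_prompt(policy: str, content: str) -> str:
    """Combine a platform policy and a post into one instruction prompt.

    Hypothetical prompt template; the study's actual prompts may differ.
    """
    return (
        "You are a content moderation assistant.\n"
        f"Policy:\n{policy}\n\n"
        f"Content under review:\n{content}\n\n"
        "Decide whether the content violates the policy. "
        'Reply with JSON: {"decision": "violating" | "non-violating", '
        '"explanation": "<one sentence>"}'
    )

def parse_decision(model_reply: str) -> dict:
    """Extract the JSON verdict from a model reply, tolerating extra prose."""
    match = re.search(r"\{.*\}", model_reply, re.DOTALL)
    if match is None:
        raise ValueError("no JSON verdict found in model reply")
    verdict = json.loads(match.group(0))
    if verdict.get("decision") not in {"violating", "non-violating"}:
        raise ValueError(f"unexpected decision label: {verdict.get('decision')}")
    return verdict

# A stubbed model reply stands in for the live API call.
reply = (
    'Assessment: {"decision": "non-violating", '
    '"explanation": "The post quotes hate speech in order to condemn it."}'
)
print(parse_decision(reply)["decision"])
```

Validating answers, one of the limitations the paper raises, maps onto the strictness of `parse_decision`: a reply that omits the JSON verdict or uses an unexpected label is rejected rather than silently coerced.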



 
Conference: AoIR2024
Conference Software: ConfTool Pro 2.6.153
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany