Conference Agenda

Overview and details of the sessions of this conference.

 
 
Session Overview
Session
P37: Moderation
Time:
Thursday, 19/Oct/2023:
8:30am - 10:00am

Session Chair: Emillie de Keulenaar
Location: Whistler B

Sonesta Hotel

Presentations

ALGOSPEAK AND ALGO-DESIGN IN PLATFORMED BOOK PUBLISHING: REVOLUTIONARY CREATIVE TACTICS IN DIGITAL PARATEXT TO CIRCUMVENT CONTENT MODERATION

Claire Parnell

University of Melbourne, Australia

This paper examines the rise of algo-design in the context of platformed book publishing. Building on conceptualizations of algospeak, a strategy of coining code words or phrases to build a brand-safe lexicon, the paper theorizes algo-design as a broader creative strategy used by online creators that involves using and avoiding specific language and visuals to evade content moderation by platforms. Specifically, this research explores the use of algo-design in the paratext of romance and erotica novels by authors of color and LGBTQIA authors who publish their fiction on digital publishing platforms, such as Amazon, and market it on social media platforms. This exploratory research is based on a qualitative multi-method research design, including interviews with authors and metadata analysis. In many cases, algo-design may be seen as a revolutionary creative tactic for BIPOC and LGBTQIA authors of romance fiction, who are disproportionately affected by content moderation systems (Monea, 2022) and often have their works flagged as adult material due to the genre’s tendency to include intimate relationships (Parnell, 2021). In this way, the use of algo-design by authors is a clear effort to push back against bluntly imposed content moderation interventions and subvert platform power.



PLATFORM PR – THE PUBLIC MODERATION OF PLATFORM VALUES THROUGH TIKTOK FOR GOOD

Rebecca Scharlach

The Hebrew University of Jerusalem, Israel

TikTok wants to “inspire creativity” and “spark joy,” Meta aims to “bring the world closer together,” and YouTube aspires to “give everyone a voice and show them the world.” Platforms claim that they want to do good. However, they regularly receive international attention for being bad instead. Social media data scandals are a prominent focus of research, yet initiatives to counterbalance these backlashes, such as YouTube’s Black Voices Fund or TikTok for Good, are rarely investigated. Although the content of platform initiatives is often not at the top of your For You Page (FYP), such social initiatives can reveal much about what values a platform aims to promote. Examining the values promoted through the social initiatives of platform companies provides a way to understand what these companies try to center as important or worthwhile. This project investigates the promotion of platform values through “TikTok for Good,” based on an inductive and thematic analysis of TikTok for Good videos (n=180). With this study, I aim to explore how platform values can enhance our understanding of the construction of what a platform counts as good, what is worth being visible, and, in turn, what is not.



Global Content Moderation on YouTube: A Large-Scale Comparative Analysis of Channel Removals Across Countries, Time, and Categories

Adrian Rauchfleisch², Jonas Kaiser¹,³

¹Suffolk University, United States of America; ²National Taiwan University, Taiwan; ³Harvard University

Content moderation and the question of how to regulate speech on social media platforms is an urgent as well as complex topic that affects all forms of digital life. Yet these questions are not a local but a global issue, posing significant challenges to social media companies such as YouTube that operate worldwide and across many different jurisdictions. YouTube not only has to decide on its own policy guidelines but also on how content should be moderated from country to country. However, it is not guaranteed that content will be moderated equally in all countries. As most social media companies are based in the United States, it is possible, for example, that content moderation will be stricter for US content than for content elsewhere due to higher public scrutiny. We thus argue that comparative work is sorely needed to improve our understanding of how social media companies such as YouTube regulate speech not only on a national but on a global level. In our analysis we investigate the differences and similarities in content deletion on YouTube. To do so, we analyzed over 2 million YouTube channels from 68 countries over two years (2020–2022). We find both temporal trends that are consistent with YouTube’s global policy guideline changes and national patterns.



Mental Health and the Digital Care Assemblage: Moderation practices & user experiences

Anthony McCosker, Jane Farmer, Peter Kamstra

Swinburne University of Technology, Australia

This paper examines the socio-technical ecosystems that shape the moderation of mental health content. To explore how care is formulated across and between different actors and automated systems, we focus on the experiences of moderators and users of three peer-based mental health support platforms. The analysis is framed by the notion of the 'digital care assemblage' to delineate the interactions between goal-oriented moderation policies, automated systems, human content moderators or platform managers, and users seeking or giving help in relation to mental ill-health. Each of these actors contributes to the supportive capacity of the platforms for addressing mental health issues in the community.



Moderating (Through) Emotions: Technologies of Content Mood-eration and the Shifting Foundations of Speech Governance

João C. Magalhaes1, Holly Avella2

1University of Groningen, The Netherlands; 2Rutgers University, United States of America

Digital technology conglomerates have long been interested in instrumentalizing users’ emotions to further their corporate goals. In this paper, we shed light on a rarely discussed way whereby these companies exploit human affects: the purported identification and measurement of sentiments so as to allegedly optimize automated processes of content moderation. These technologies of mood-eration, as we term them, represent a largely unknown but widely sought approach to defining and governing objectionable speech at scale, an increasingly central and politically fraught issue for these organizations. Through the analysis of patent applications from large tech companies, we demonstrate that mood-eration seems to be based on two main techniques: firstly, the inference of emotions conveyed by content so as to identify and control objectionable communication – moderation through moods; secondly, the inference of emotions from users so as to address their somehow negative affective experiences – moderation of moods. The paper contributes to critical scholarship on digital speech governance by arguing that, in addition to creating new avenues for the unfair suppression/enabling of users’ expressions, mood-eration technologies shift the foundations of content control by reconfiguring the very idea of what is objectionable. Instead of being founded on common moral principles that demand justification, objectionability becomes a function of subjective states that defy explanations – and deflect accountability. As such, mood-eration can naturalize and depoliticize speech governance. This might be appealing for corporations – but it is also concerning for the rest of us.



 