Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session: Tech & Public Sectors (traditional panel)
Time: Friday, 01/Nov/2024, 11:00am - 12:30pm

Session Chair: Elinor Carmi
Location: The Octagon: Meeting Room 4


Presentations

Deletion as a Crisis Communication Practice: An Analysis of U.S. State Public Health Agencies’ Social Media Accounts during COVID-19

Muira McCammon

Tulane University, United States of America

Health communication researchers often rely on public health messaging to understand what government agencies wish to communicate in times of crisis. This research article positions deletion as a crisis communication practice deserving of further study and leverages the power of public records requests across 50 U.S. state-level agencies (SLAs) to typologize what prompts the erasure of posts on official government-managed social media platforms, such as Twitter. By filing U.S. Freedom of Information (FOI) requests with SLAs, it becomes possible to study the communicative struggles that unfold as government officials scramble to negotiate, determine, and debate what types of government information are appropriate for publication (and subsequently worthy of deletion) on official Twitter accounts. By bringing health communication as a field into conversation with the granular specifics of state-level memory governance, this article also offers a method for studying the communication practices of democratic institutions on corporate social media platforms, one that centers public-sector data infrastructure.

Elon Musk’s acquisition of Twitter has resurfaced concerns that researchers may not have reliable and affordable access to digital data, as many platform companies have eliminated free access to their Application Programming Interfaces (APIs) or enacted policies that require them to expunge all data acquired under previous agreements. While public records requests cannot replicate the work that many computational social scientists and health communication researchers have come to value, this method offers meaningful pathways for studying previous public health messaging campaigns and the tensions that arise between democratic institutions and their myriad audiences.



De-biasing algorithmic technologies in the public sector: the case of Department of Work and Pensions (DWP)

Hadley Beresford

University of Sheffield, United Kingdom

Concerns about algorithmic bias in the public sector have led to the development of ‘de-biasing’ methods which attempt to remove harmful biases from algorithmic technologies. However, it has been argued that discourses focusing on ‘bad’ algorithms and ‘bad’ data limit practitioners’ ability to recognise how data and algorithms connect to wider issues of injustice (Hoffman, 2019). To counter this, it has been suggested that data practitioners must adopt socio-technical approaches to algorithmic bias mitigation.

To date, little research has been conducted to understand how data practitioners perceive socio-technical algorithmic bias mitigation tools and the challenges present in adopting them in a civil service context. I discuss my initial findings from a qualitative project which investigated how civil servants perceive socio-technical algorithmic bias mitigation approaches. The data for this paper were collected through a series of seven educational workshops on algorithmic bias mitigation, and seven follow-up interviews, in the UK government department the Department of Work and Pensions (DWP). My findings suggest it is difficult for civil service practitioners to align technologies with the social justice values which underlie socio-technical bias mitigation approaches when serving a large, diverse public. Furthermore, civil service practitioners’ room for action is limited by the political structures they work within, and government policy approaches may sometimes be in opposition to social justice values.



The Technopolitics of Waiting: Case Studies of AI Training in China and Homeless Services Systems in the U.S.

Pelle Tracey, Ben Zefeng Zhang, Patricia Garcia, Oliver Haimson, Michaelanne Thomas

University of Michigan, United States of America

Many theorists of the information economy have argued that digitization has resulted in a “speeding up” of our experience of time (e.g., Gleick, 1999). This work contends that for many, especially those with less power, the techno-utopian vision characterized by datafication and Artificial Intelligence (AI) instead produces a state of prolonged waiting. Drawing from two long-term ethnographic studies examining the production and implementation phases of data-driven technologies in China and the U.S., we demonstrate how the “long-standing but mistaken belief, hegemonic in Silicon Valley, that automation will deliver us more time” (Wajcman, 2019) belies the highly stratified temporal impacts of automation, datafication, and AI. Specifically, this work uses AI training and the homeless services system as case studies to reveal the politics of waiting; despite the promise of data-driven technologies, pervasive waiting serves as evidence of an enduring residue: an unequal power structure. Our findings also suggest that the technologies which mediated the experience of waiting in the first, more immediate sense also impacted how people conceptualize the future.



 