Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
GPT & LLMs (traditional panel)
Time:
Friday, 01/Nov/2024:
3:30pm - 5:00pm

Session Chair: Bernhard Rieder
Location: INOX Suite 1

50 attendees

Presentations

CHEATGPT? THE REALITIES OF AUTOMATED AUTHORSHIP IN THE UK PR AND COMMUNICATIONS INDUSTRIES

Tanya Kant

University of Sussex, United Kingdom

Drawing on interview and survey data from content writers in the UK communications industries, this paper critically and empirically explores content writers' engagements with generative text AI in relation to creative authorship and expertise. The project will utilize a critical framework of algorithmic literacy to consider avenues for empowering so-far overlooked stakeholders of AI tool use in this creative industry sector.

The paper presents findings from a survey of 1,074 PR and communications content writers and their managers/employers, and from 21 follow-up interviews with the same stakeholders. It explores the realities of automated authorship and the opportunities and limitations that algorithmic literacy might bring in enhancing smaller stakeholders' algorithmic empowerment and expertise. Findings suggest that a) generative text AI is increasingly being used by content writers in ways that challenge speculative forecasts of generative text use, and that b) these tools are useful for saving time, idea generation and synthesising existing text, but cannot (yet) be used to replicate or generate an authorially convincing tone of voice or brand identity. Such findings suggest that critical algorithmic literacy could be used to create dialogues in workplaces that foreground the problems related to automated authorship, especially in terms of promoting human expertise, challenging algorithmic power and reforming the boundaries of creative subjectivity.



GPT and the Platformization of the Word: The Case of Sudowrite.

Daniel Whelan-Shamy

Queensland University of Technology, Australia

In this extended abstract, I argue that OpenAI's Generative Pre-Trained Transformer (GPT) Large Language Models (LLMs) are increasingly being positioned as platforms through the extension of various GPT models into different platforms and applications. My interest is directed at applications that are increasingly playing a part in processes of writing and authorship within creative and cultural industries, and is particularly focused on the role of LLMs in the creation of texts and how they are potentially shaping this process. To empirically ground my research and its arguments, I apply the walkthrough method to the writing application Sudowrite. This work has a dedicated interest in responding and contributing to growing scholarly conversations around Artificial Intelligence technologies and various forms of power. An overarching hypothesis of this work is that enrolling a profit-driven platform company such as OpenAI in the creative process begets a position of power: if GPT becomes basic infrastructure for writing and authorship, it may embed and naturalise certain ways of doing and organising written communication, creativity, and expression in the interests of corporate power.



Assessing Occupations Through Artificial Intelligence: A Comparison of Humans and GPT-4

Paweł Gmyrek1, Christoph Lutz2, Gemma Newlands3

1International Labour Organization, Switzerland; 2BI Norwegian Business School, Norway; 3University of Oxford, UK

Large language models (LLMs) such as GPT-4 have raised questions about the changing nature of work. Research has started to investigate how this technology affects labor markets and might replace or augment different types of jobs. Beyond their economic implications in the world of work, there are important sociological questions about how LLMs connect to subjective evaluations of work, such as the prestige and perceived social value of different occupations, and how the widespread use of LLMs perpetuates the often biased views of labor markets reflected in their training datasets. Despite initial research on LLMs' world models and their inherent biases, attitudes and personalities, we lack evidence on how LLMs themselves evaluate occupations, as well as how well they emulate the occupational evaluations of human evaluators. We present a systematic comparison of GPT-4 occupational evaluations with those from an in-depth, high-quality survey in the UK context. Our findings indicate that GPT-4 and human scores are highly correlated across all ISCO-08 major groups for prestige and social value. At the same time, GPT-4 substantially under- or overestimates the occupational prestige and social value of many occupations, particularly emerging occupations as well as stigmatized or contextual ones. In absolute terms, GPT-4 scores are more generous than those of the human respondents. Our analyses show both the potential and the risks of using LLM-generated data for occupational research.



Contact and Legal Notice
Privacy Statement · Conference: AoIR2024
Conference Software: ConfTool Pro 2.6.153
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany