The development of artificial intelligence (AI) systems and their deployment in society have given rise to serious ethical dilemmas and existential questions. The previously unimaginable scale, scope, and speed of mass data harvesting, the black-boxed classification logics applied to the data, the exploitation of ghost workers, and the discriminatory uses of the systems have raised concerns that AI systems reproduce and amplify social inequalities and reinforce existing structures of power (see e.g. Benjamin, 2019; Gray & Suri, 2019; Crawford, 2021; Heilinger, 2022). Moreover, AI contributes significantly to the planetary crises, from the mining of raw minerals and massive energy consumption to the vast amounts of e-waste (Perkins et al., 2014; Crawford, 2021; Taffel, 2023; de Vries, 2023). AI-generated images have flooded social media feeds (Kell, 2023; Lu, 2023), deeply affecting access to information and knowledge generation and accelerating the spread of misinformation (Partadiredja, Serrano & Ljubenkov, 2020; Whittaker et al., 2020). The lack of regulation, together with security concerns, has led to policies curtailing GenAI use in news organisations and universities (Weale, 2023; WIRED, 2023). Due to these problems, the adoption of GenAI in academia needs careful and critical deliberation.
In the workshop, we reflect on the ways generative artificial intelligence (GenAI) could be used as a method for critical research and on the ethical and practical considerations this implies. We approach GenAI following Kate Crawford’s definition of AI as “a technical and social practice, institutions and infrastructures, politics and culture” (Crawford, 2021: p. 8). The focus of the workshop is on GenAI applications (such as ChatGPT, Dall-E, Midjourney, and Gemini) that use complex algorithms to learn from libraries of training data and, when prompted by users, produce media outputs such as text, images, and sounds.
The concerns for critical researchers interested in using GenAI as a method are numerous: the development of the systems and applications has been largely driven and controlled by the tech industry. The companies’ market monopoly means that they profit not only from the services they sell but also from the technological knowledge they produce (Baradaran, 2024). At the same time, the proprietary systems limit the choices available to researchers and users, making it almost impossible to investigate the social and ecological sustainability of the systems or the ethics of their technical construction. Given the massive sums the tech industry has spent in recent years and its exponential projections for the near future (e.g. Nienaber, 2024; Grace et al., 2024), it can be assumed that the mainstreaming of AI applications, from predictive technologies to GenAI, will continue, further domesticating them into users’ everyday lives and into fields including administration, policing, health care, education, journalism, and academic research. There is thus a need for a deeper understanding of the entire AI system, and for us as researchers to engage thoroughly in self-reflection on what our role in it is and should be. In the workshop, we will address questions such as:
• How does GenAI reflect and produce social relations and understandings of the world? How do we unpack what is and is not meaningful to understand in the datasets and classifications?
• What are the political economies of the construction of AI systems? What are the wider planetary consequences? How should researchers address these issues when working with GenAI?
• How can we resist the hegemonic and often naturalised narratives of the AI industry and provide alternatives with critical research? How can critical researchers engage in decolonising AI systems?
• How can critical research interventions participate in the radical reimagining of AI's technological development and role in society?
We invite everyone interested in the topic (no previous experience with GenAI is required) to come and explore the possibilities and concerns of using GenAI in research, share ideas, identify alternatives, experiment with a GenAI method, and network. The full-day workshop will consist of three interlinked parts. In the morning, we review some of the central questions collaboratively using the ‘world café’ method (https://theworldcafe.com), followed by a summary of the key concerns in the field of critical GenAI research. After lunch, participants will have a chance to experiment with a workshop method developed by the facilitators, in which an AI image generator is used to imagine sustainable digital futures. The workshop organisers will provide tablets with the GenAI application installed. In the final session, we will share reflections on and further elaborate the method experiment, discuss needs and concerns for critical research, and leave space for networking and exchanging ideas. The maximum number of participants in the workshop is 24.