As organisations across governments, GLAM (galleries, libraries, archives and museums), other industries and civil society grapple with how to work responsibly and ethically with AI technologies, they risk overlooking a fundamental challenge: AI, and responsible AI, mean many things to many people, and may never benefit from a single agreed definition. With their stewardship of knowledge and intellectual property, their vital role in shaping cultural memory, and their hard-won public trust, GLAM organisations have an essential leadership role to play in shaping emerging AI uses and in crafting a culture of situated, responsible, and critical engagement with AI in all its forms.
This workshop draws on transdisciplinary research on the nature, meaning and practice of responsible AI to clarify ethical issues and assist those working in GLAM and adjacent sectors to recognise that responsible AI is an ongoing process. Workshop participants will engage in creative dialogue on the shared understanding, culture change, and interdisciplinary thinking needed to bring ethical / responsible AI principles into everyday practice, and to ensure a foundation of critical praxis in how GLAM engages with AI.
Constant Washing Machine and the FRAIM project
The workshop will be based on Constant Washing Machine, an interactive artwork by artists Blast Theory that conceptualises responsible AI through the metaphor of soaps engraved with key responsible AI phrases. It captures the everyday, social nature of the decisions that affect responsible AI in practice and the slipperiness of language about AI.
Constant Washing Machine was commissioned as part of the University of Sheffield’s Framing Responsible AI Implementation and Management (FRAIM) project, funded by the Arts and Humanities Research Council as part of the Bridging Responsible AI Divides (BRAID) network. FRAIM partnered with four non-academic organisations representing AI use and leadership at local, national and international level in public, private, and third sectors in the UK: Sheffield City Council, the British Library, the Open Data Institute, and Eviden.
FRAIM sought to identify the key questions and stakeholders involved in making responsible use of AI more tangible and practical for organisations, and to understand which questions are and are not being asked about what "responsible AI" means, and to whom. Our research revealed considerable variation in definitions of responsible AI: an analysis of 80 AI policies identified over 30 distinct principles, only ten of which appeared in more than half of the documents. We found marked sectoral differences in how responsible AI was defined, reflecting local needs and interests. In-depth interviews within partner organisations revealed variation in how ‘responsible AI’ is interpreted, even within individual organisations.
Constant Washing Machine engages with responsible AI from this starting point of fragmentation and difference. With different words engraved on malleable bars of soap, it materialises the diverse and ever-changing nature of responsible AI in practice. Its evocation of daily hygiene reflects how ethics and responsibility in AI draw from everyday practice, and illustrates the need for organisations to consciously construct a culture of ‘AI hygiene’ that engages internal staff and external stakeholders in the contextual nature of responsible AI.
Workshop
The formal, instructor-led interactive workshop will open with a brief summary of the FRAIM research and its findings. It will reflect on the vital role of artists as co-researchers, and of transdisciplinary partnership, in understanding responsible AI in practice. An artist-defined set of bowls, jugs of water, spotlights, digital portraits and a table will provide the setting. Participants will be invited to take part in the hand-washing activity, as hand-washers or observers, using bespoke ‘art’ soaps.
Small-group discussions will follow, inviting participants to reflect on the project findings and their own experiences of the activity. Using two rounds of the world café method, participants will discuss and document their thoughts on the implications and operationalisation of key ethical principles, followed by a plenary to synthesise group thinking. Finally, a feminist silent brainstorm will aim to ensure the inclusive collection of contributions.
Brief reflections from the instructors will conclude the workshop, with an opportunity for open reflection from participants to share any final thoughts on responsible AI and the future of ethical and responsible AI practice.
The Constant Washing Machine workshop will create a playful and accessible format for staff at all levels and with any degree of AI or data literacy to open up key questions which might inform conversations about responsible AI within their organisations on an ongoing basis.
Planned outcomes
Participants will gain a practical and philosophical understanding of responsible AI and how it is shaped by different people, contexts and practices. The interactive experiences and group discussions are designed to improve understanding of the multiple concurrent viewpoints on responsible AI.
Participants can expect greater shared understanding and (responsible) AI literacy, via concepts such as transparency, sustainability, inclusion and human-centredness, for GLAM professional work involving AI. This is necessary for GLAM organisations to maintain a reputation for trustworthiness and integrity while engaging with AI developments, both within their host organisations and as service providers.
By engaging with the Constant Washing Machine artwork, participants will also be better able to consider how interdisciplinary and arts-led strategies can strengthen internal organisational and public engagement, supporting the culture change necessary to adapt and thrive in AI-mediated workplaces and cultures.
Discussion notes will be collated by the instructors and shared with interested participants following the workshop. They will feed into the wider FRAIM work, which includes mapping our sector-level analysis to the areas of policy action highlighted in the UNESCO (2021) Recommendation on the Ethics of AI, and mapping the responsible AI principles identified in our study to the AI Thinking framework for AI use in practice (Newman-Griffis, 2025).
This workshop will broaden the conversation on ethical and responsible AI while making it more tangible and actionable for participants. It will help participants to formulate, identify and explore questions with transformational potential for any gallery, library, archive, or museum that wants to be critically engaged, self-aware and ethical in engaging with AI, and that is unafraid to explore responsible AI futures, exercising care for all involved.
Intended audience
The workshop is aimed at practitioners and scholars with an interest in the ethics of AI who wish to promote discussion of what responsible AI means in their context through internal and external conversations.
What participants need to prepare
No preparation is required.
Resources
We will supply the Constant Washing Machine art objects, including: selected responsible AI words and phrases, a digital portrait presentation (on our own laptop, or a projector if available), engraved responsible AI soaps, and the hand-washing exercise / performance. Access to a table and to water would be helpful, but we can bring our own. A room with a cabaret-style layout to support small-group discussion in the world café sessions would be preferred if possible.
Instructors
Hannah Redler Hawes is the FRAIM curator. She is a museum professional specialising in interdisciplinary digital projects.
Matt Adams is an artist at Blast Theory, an award-winning artists’ collective exploring the impact of technology on our lives.
Andrew Cox is a Senior Lecturer in the University of Sheffield Information School and a Co-Lead on FRAIM.
Susan Oman is Senior Lecturer, and Theme Lead for Human-Centric AI in the University of Sheffield Centre for Machine Intelligence and a Co-Lead on FRAIM.
Denis Newman-Griffis is Senior Lecturer, and Theme Lead for AI-Enabled Research in the University of Sheffield Centre for Machine Intelligence and the Project Lead on FRAIM.
Selected references
Friends of the Earth (2025) Harnessing AI for environmental justice. https://policy.friendsoftheearth.uk/reports/harnessing-ai-environmental-justice
Hagendorff, T (2024) Mapping the Ethics of Generative AI: A Comprehensive Scoping Review. https://doi.org/10.48550/arXiv.2402.08323
Metcalf, L et al. (2019) Keeping humans in the loop: pooling knowledge through artificial swarm intelligence to improve business decision making. California Management Review, 61(4), 84-109.
Mökander, J & Floridi, L (2023) Operationalising AI governance through ethics-based auditing: an industry case study. AI and Ethics, 3(2), 451-468.
Pant, A et al (2024) Ethics in the age of AI: An analysis of AI practitioners’ awareness and challenges. ACM Transactions on Software Engineering and Methodology, 33(3), 1-35.
Murphy, O & Villaespesa, E (2020) AI: A Museum Planning Toolkit. Goldsmiths.
Newman-Griffis, D (2025) AI Thinking: a framework for rethinking artificial intelligence in practice. Royal Society Open Science, 12(1), 241482.
Oman, S (2021) Understanding well-being data: improving social and cultural policy, practice and research. London: Palgrave Macmillan. https://doi.org/10.1007/978-3-030-72937-0
Oman, S (2024) Digital culture – a review of evidence and experience, with recommendations for UK policy, practice and research. London: DCMS. https://assets.publishing.service.gov.uk/media/6724eb3ec053e87b6a0a824a/Digital_culture_report_2024__3_-accessible.pdf
Sadek, M et al. (2025) Challenges of responsible AI in practice: scoping review and recommended actions. AI & Society, 40, 199-215. https://doi.org/10.1007/s00146-024-01880-9
UNESCO (2021) Recommendation on the Ethics of Artificial Intelligence.
University of the Arts London and Van Abbemuseum (2023) Transforming Collections Rewinding Internationalism.