WORLDMAKING BEYOND AI: SPECULATIVE FAILURE AS HOPEFUL ANALYTIC
Stephen Yang
University of Southern California, United States of America
This paper develops speculative failure as a hopeful pragmatics for imagining futures beyond AI's calculative paradigms in the present. Departing from solutionist calls that focus on how to fix problems with AI (or how to fix AI so it can fix our problems), this paper illustrates the utility of speculative failure in freeing us from the problem-solution bind that has come to undergird our imaginations of what AI (and technology more broadly) could be. I embrace a speculative lens in making visible the scenes of failure that are forgotten, have yet to take place, or will otherwise never see the light of day, which, in turn, serve as symptoms that surface the assumptions undergirding the endurance of AI's infrastructural operations in the present. In addition, by freeing us from rational and calculative controls over the future, speculative failure reorients our attention toward the uncertain yet hopeful futures beyond. This paper sketches out two ways speculative failure can help us see the future otherwise — (1) by surfacing untold failures from the past to the present, and (2) by imagining potential failures from the present to the future.
AI-GENERATED MUSIC AND THE LISTENING SUBJECT: RUPTURES IN CREATIVITY, DIGITAL LABOR, AND ALGORITHMIC LISTENING
Ian Dunham
Kennesaw State University, United States of America
AI-generated music challenges traditional notions of creativity, digital labor, and listening subjectivity. As streaming services and AI music platforms like Google’s Dream Track, Suno, and Udio grow, AI-generated compositions are increasingly indistinguishable from human-made music. This shift raises questions about artistic authorship, labor displacement, and the broader implications of algorithmic listening. Using psychoanalytic media theory, particularly Lacan’s concepts of the sonorous envelope and the acoustic mirror, this study argues that AI-generated music disrupts identity formation by replacing human affect with machinic simulation. Economically, AI music accelerates labor precarity, as platforms monetize AI compositions while sidelining human musicians. Algorithmic recommendation engines further entrench passive listening habits, prioritizing engagement-driven content over artistic expression. Scholars and industry voices, including Rick Beato and Ted Gioia, critique AI music’s tendency toward formulaic production and the monopolization of creative labor by big tech. The study situates AI-generated music within a historical lineage of technological disruptions in music, from player pianos to MIDI, demonstrating how AI follows patterns of labor displacement. Ultimately, this paper argues that AI’s role in music reflects a larger shift in digital capitalism, where platform-driven control reshapes cultural production. It calls for critical engagement with AI’s impact on creativity, labor, and listening practices, raising key questions about transparency, regulation, and artistic agency in an AI-dominated music industry. Future research should explore the geopolitical dimensions of AI-generated music and its effects on global music economies.
Rupturing "AI for Good": A Feminist Decolonial Theoretical Framework for Analyzing AI Interventions in Gender-Based Violence
Lucia Fernanda Mesa Velez
Justus Liebig University Giessen, Germany
This paper develops a critical theoretical framework for analyzing "AI for Good" interventions addressing gender-based violence (GBV) from feminist and decolonial perspectives. While empirical studies demonstrate artificial intelligence (AI)'s potential benefits for GBV interventions, critical scholarship reveals how these technologies often reproduce harmful power structures. The proposed framework bridges this divide through four interconnected analytical dimensions: epistemological foundations (examining whose knowledge shapes interventions), material infrastructure (revealing extractive processes enabling AI systems), governance mechanisms (analyzing decision-making structures), and contextual embeddedness (understanding cultural specificities).
The framework is based on a critical discourse analysis of AI ethics and governance frameworks, "AI for Good" literature, and GBV intervention documents and empirical studies, alongside a literature review of feminist science and technology studies and decolonial approaches to technology. It operationalizes feminist analyses that expose design choices that embed gender-based assumptions and limit accessibility (Klein & D'Ignazio, 2024; Costanza-Chock, 2020; Criado-Perez, 2019), as well as decolonial perspectives that reveal how infrastructures interact with institutional systems to perpetuate colonial structures (Couldry & Mejias, 2018; Ricaurte, 2019; Mohamed et al., 2020; Png, 2022; Madianou, 2025). This framework challenges "technology for good" narratives that function as moral shields obscuring exploitative data extraction and power asymmetries (Madianou, 2025; Muñoz, 2022). By translating abstract theoretical concepts into concrete analytical tools, the framework enables researchers to systematically examine power dynamics in AI interventions, center survivor knowledge, and imagine alternative approaches that challenge rather than reinforce existing hierarchies in addressing gender-based violence.
DeepSeek AI Meets Divination: Algorithmic Syncretism, Data Consecration, and Accuracy Politics
Silei Zhu
Rutgers University, People's Republic of China
Following DeepSeek AI’s release in early 2024, AI-powered fortune-telling rapidly gained popularity on Chinese digital platforms like Rednote, where users shared AI-generated divinations, refined prompts, and debated the accuracy of results. This phenomenon raises critical questions about how accuracy is understood when AI is repurposed for metaphysical insight rather than rational computation. Unlike conventional AI applications, where accuracy is defined through explainability and predictive efficiency, AI fortune-telling reconfigures accuracy as a subjective, relational, and ineffable construct. This study examines algorithmic syncretism, data consecration, and accuracy politics as three key dimensions that shape user experiences, highlighting how AI is framed as a mystical authority rather than a transparent tool. Drawing on explainable AI and ineffability as its theoretical framework, this in-progress paper argues that trust in AI divination does not rely on technical transparency but on the interpretative space it creates for users, where meaning emerges through engagement rather than explanation. The rise of AI-powered divination suggests a shift in algorithmic epistemology, where black-boxed AI systems are embraced not in spite of their opacity, but because of it.