Conference Agenda
Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Select a single session for a detailed view (with abstracts and downloads, if available).
Please note that all times are shown in the time zone of the conference (BST).
Agenda Overview

Session: Digital Policy 03: EU Institutions and Digital Policy Making 2

Presentations
Protean Power and the EU AI Act: Policymaking under Radical Uncertainty
Jagiellonian University, Doctoral School in the Social Sciences, Poland

This study investigates a research puzzle: how did the EU successfully extend regulatory authority over Artificial Intelligence (AI) through the AI Act and related policies, despite expectations of bureaucratic incrementalism in the face of technological uncertainty and intense pressure from external actors? Contrary to predictions from EU decision-making and grand integration theories – which anticipate cautious, incremental responses under uncertainty – the EU produced a comprehensive and, in some respects, innovative regulatory framework. To explain this outcome, the study draws on the recently developed concepts of control power and protean power (Katzenstein & Seybert, 2018; Guzzini, 2020; Adler, 2020; Sjursen, 2025) and on emerging empirical contributions in this field (Casier, 2025; Lecocq & Müller, 2025). Control power operates within calculable risk, or “operational uncertainty,” where actors deploy established capabilities to direct and stabilize outcomes through strategic planning and risk management. In contrast, protean power thrives under radical uncertainty, manifesting in agile, improvisational, and innovative responses by actors who actualize new possibilities in a landscape where outcomes are incalculable ex ante but traceable ex post. In terms of research design, the EU’s AI policy constitutes a most-likely case for the operation of protean power, as it confronts radical technological and geopolitical uncertainty. At the same time, the EU’s innovative policymaking responses may be constrained by bureaucratic, rule-dependent, decentralized, and intergovernmental factors, which makes it an especially interesting case for studying control–protean dynamics (Pomorska & Thomas, 2025).
Methodologically, to trace the causal mechanisms underlying this relationship, the study employs explaining-outcome process tracing of EU AI policymaking since 2017, combining systematic mechanisms with case-specific factors. The analysis begins with a hypothesis-testing logic focused on protean power: uncertainty as perceived by actors conditions whether they rely on control-oriented or protean-oriented practices. It then shifts to a hypothesis-modifying approach by accounting for moderators such as institutional openness to external actors; disruptive events (such as the release of ChatGPT); and constraints arising from institutional routines and path dependency. The study triangulates elite and expert interviews with EU AI policymakers across the European Commission, the European Parliament, and the Council with qualitative content analysis of official documents and public consultations. By identifying the specific mechanisms through which protean power is generated and exercised within a complex bureaucracy, the article refines the concept of protean power and demonstrates its analytical value for understanding innovative responses in international organizations.

General-Purpose AI Regulation: The European Parliament and the AI Act
University of Portsmouth, Belgium

This paper explores how the European Parliament (EP) influenced the regulatory treatment of general-purpose AI (GPAI) models and systems in the EU Artificial Intelligence Act. It analyses how EP actors formed their preferences and what discursive strategies they used to advance stronger obligations, transparency mechanisms for GPAI providers, and measures against discriminatory outcomes through the EP’s proposed amendments at first reading in June 2023.
The paper employs a sequential mixed-methods design combining document analysis with a structured survey of stakeholders involved in the AI Act process and semi-structured interviews with selected MEPs, EP staff, representatives of EU institutions, and civil society representatives. Preliminary findings indicate that the EP successfully negotiated the definition of GPAI and the inclusion of new provisions on general-purpose AI systems. The EP advocated further rules on documentation, risk management, and transparency obligations for GPAI providers. EP actors framed GPAI as a systemic risk requiring upstream accountability, drawing on civil society input and previous resolutions on digital ethics. The EP’s insistence on horizontal obligations for GPAI providers marked a significant expansion of the Act’s scope and reflected broader concerns about foundation models and algorithmic opacity. The EP’s proactive stance on GPAI regulation demonstrates its capacity to shape high-stakes digital legislation, particularly in areas of emerging technological complexity.

