Conference Agenda
Overview and details of the sessions of this conference.
Please note that all times are shown in the time zone of the conference (BST).
Agenda Overview

Session
Virtual Panel 301: EU Digital Policy
Presentations
Protecting Democracy or Expanding Control? Digital Sovereignty and the Risk to Freedom of Expression in EU Disinformation Governance
University of Helsinki, Finland

The rapid expansion of EU and national policies targeting online disinformation has raised concerns about their implications for freedom of expression. While regulatory efforts are often framed as protecting democracy and information integrity, they also reveal competing visions of who should control the digital sphere. Sovereignty discourses that frame disinformation as an external threat become particularly visible in Member States pursuing assertive digital regulation. This paper examines how the emerging discourse of digital sovereignty intersects with the regulation of disinformation in Europe, linking national-level regulatory patterns to freedom of expression risks, and discusses the broader implications for EU digital governance.

Drawing on a comparative analysis of 50 counter-disinformation measures adopted between 2010 and 2024 across twelve Western democracies, the study applies a composite Risk to Freedom of Expression (RFE) score that evaluates three dimensions of disinformation policy: 1) definitional clarity; 2) centralization of authority; and 3) regulatory strictness. The RFE score is then used to identify comparative national cases with contrasting scores. Countries like the Netherlands exhibit low RFE scores due to their reliance on co-regulatory and educational approaches. In contrast, Germany adopts a more centralized and punitive model that delegates significant content control to digital platforms, raising the risk of over-removal and self-censorship. Building on these findings, the paper connects regulatory patterns to the EU’s shifting digital agenda, which increasingly invokes digital sovereignty as a justification for regulatory intervention.
Through a focused discursive review of two illustrative national cases, Germany and the Netherlands, this study explores how digital sovereignty narratives shape regulatory rationales and inform national approaches to speech governance. It suggests that where digital sovereignty is invoked as a matter of national control or technological independence, disinformation regulation tends to emphasize centralized oversight and sanctioning mechanisms. By contrast, where it is framed as democratic resilience or citizen empowerment, regulatory approaches align more closely with international human rights standards, emphasizing transparency and pluralism. These findings point to a deeper tension within EU digital governance between protecting the information space and expanding regulatory control over it. By tracing how EU-level sovereignty narratives permeate disinformation policy, the paper contributes to understanding institutional change in EU governance, explicating how the boundaries between security, technology, and fundamental rights are being renegotiated. It argues that as the EU strengthens its digital sovereignty agenda, attention must be paid to how these discourses reconfigure the balance between freedom and control within European democracies.

Experimental Governance through Regulatory Sandboxes: Lessons for EU Artificial Intelligence Regulation
Hebrew University, Israel

The European Union has positioned itself at the forefront of global artificial intelligence (AI) regulation through the adoption of the Artificial Intelligence Act (AI Act). Yet the Act also exposes a central regulatory dilemma: how to govern rapidly evolving, high-risk AI systems under conditions of uncertainty without stifling innovation. This paper examines regulatory sandboxes as an emerging instrument of experimental governance within the EU, assessing their capacity to reconcile innovation, accountability, and fundamental rights.
Drawing on the concept of “regulation through learning,” the paper situates AI sandboxes within the broader architecture of EU regulatory governance, alongside conformity assessments, standardisation, and delegated enforcement. It traces the diffusion of the sandbox model from its origins in UK fintech regulation to its incorporation into EU law and national-level initiatives. Using a comparative qualitative methodology, the analysis examines sandbox frameworks in the United Kingdom, Italy, and Israel, with particular attention to their interaction with EU legal norms, institutional design, and mechanisms for knowledge transfer into binding regulation.

The paper argues that regulatory sandboxes function as intermediating institutions that translate abstract EU legal principles such as risk-based regulation, transparency, and non-discrimination into operational practices. However, their effectiveness depends on three factors: institutional capacity, independence from regulated actors, and integration with the EU’s broader compliance ecosystem. Where these conditions are absent, sandboxes risk regulatory capture or fragmentation rather than adaptive governance. By analysing regulatory sandboxes as socio-technical infrastructures rather than merely procedural tools, the paper contributes to debates on experimental governance, EU regulatory legitimacy, and the future of risk regulation in the digital single market. It concludes that sandboxes can enhance the EU’s regulatory responsiveness, but only if embedded within robust accountability and rights-based frameworks.

Discursive Power and Policy Entrepreneurship in EU AI Regulation
University of Coimbra, Portugal; University of Minho, Portugal; CICP, Portugal

In 2018, the European Union (EU) launched a coordinated strategy on artificial intelligence (AI), aiming to foster innovation while safeguarding fundamental rights and societal values.
Building on the frameworks of policy entrepreneurship (Kingdon, 2003) and discursive institutionalism (Schmidt, 2008, 2010, 2015), this paper explores how the European Commission and the European Parliament contributed to shaping the ideas that underpin the AI Act. The analysis is based on official documents, including communications, resolutions, and speeches, produced between 2018, when the EU introduced its first AI Strategy, and 2024, when the AI Act was formally adopted.

Methodologically, the study applies a qualitative thematic analysis of discourse (Braun & Clarke, 2006; Coffey, 2014) to identify patterns of argumentation and framing. It examines how each institution conceptualised the policy challenge and articulated potential solutions, navigating the tension between Europe’s ambition to lead globally in AI (market-oriented framing) and its commitment to ethical, human-centric principles (values-oriented framing). By systematically comparing the Commission and the European Parliament, the paper sheds light on the interplay between institutional roles, discursive strategies, and normative commitments in EU law-making. The central research question guiding this inquiry is: How did the ideas promoted by the Commission and the European Parliament influence the final formulation of the Artificial Intelligence Act?

