Conference Agenda
Overview and details of the sessions of this conference. Select a date or location to show only the sessions on that day or at that location, or select a single session for a detailed view (with abstracts and downloads where available).
Please note that all times are shown in the time zone of the conference.
Agenda Overview

Session: Digital Policy 01: EU Digital Policy and Democracy

Presentations
Digital Authoritarianism and the EU Policy Response: Regulatory Adaptation in the Age of External Digital Threats
Diplomatic Academy of Vienna, Austria

Digitalisation has become a central domain of European security and governance, but it has also exposed the European Union to a growing range of external digital threats, including foreign information manipulation and interference (FIMI), cross-border surveillance practices, platform-based influence operations, and coercive data governance models promoted by authoritarian regimes. In response, the EU and its member states have begun to adjust regulatory and strategic frameworks - for example, through the Digital Services Act (DSA), the Digital Markets Act (DMA), GDPR enforcement practices, and the EU toolbox against hybrid threats - to defend digital infrastructure, protect information environments, and strengthen digital sovereignty. Yet we still lack a systematic empirical mapping of how these policy responses evolve and how coherent they are across governance levels. This paper examines how and why EU-level and member-state digital policies are being recalibrated in response to digital authoritarianism and external digital security threats. It asks three questions: (1) How are external digital threats framed across EU and national policy documents, such as FIMI communications, cybersecurity strategies, and platform governance guidelines? (2) What types of regulatory and security responses emerge at EU and member-state levels, including platform content obligations, data localization debates, transparency requirements, and counter-disinformation measures? (3) To what extent do these responses converge toward a cohesive European digital security model? Methodologically, the paper combines digital policy analysis with a big-data text approach.
I build a large corpus of EU and national policy documents, regulatory proposals, parliamentary debates, and strategic communications related to digital governance and information security. Using large language model (LLM)-assisted classification and supervised text analysis, I map threat frames and policy instruments - for example, references to foreign interference campaigns, platform risk mitigation duties, algorithmic transparency, and cross-border data transfer restrictions - across countries and over time. I argue that while the EU has developed an increasingly dense regulatory toolkit, including platform governance rules, data protection regimes, and disinformation response mechanisms such as the Code of Practice on Disinformation, policy responses remain uneven across member states and policy sectors. Regulatory cohesion depends not only on shared threat perceptions but also on alignment between external security instruments and internal market regulation. Strengthening EU digital resilience therefore requires more integrated threat classification, cross-level coordination, and clearer linkage between external digital threats and domestic regulatory enforcement.

Generative AI and the Conditions of Democratic Accountability in the European Union
University of Salford, Manchester, United Kingdom

The rapid diffusion of generative artificial intelligence has intensified long-standing public law challenges relating to electoral integrity, political communication and democratic accountability. While digital campaigning and data-driven targeting are not new, AI systems amplify these practices through scale, speed, and opacity, raising distinctive normative and constitutional concerns regarding transparency, attribution and voter autonomy within multilevel democratic systems, particularly the European Union.
This article examines how these challenges were addressed during the 2024 European Parliament elections, one of the first large-scale democratic exercises to take place in a context of widespread access to generative AI tools. It analyses, through doctrinal and policy approaches, the interaction between key elements of the European Union’s regulatory framework, including the Digital Services Act, the Artificial Intelligence Act (then partially operative) and the voluntary Code of Conduct on AI in Elections, and evaluates how these instruments were implemented and influenced practice during the campaign period. Building on prior work examining the implications of the EU experience for New Zealand electoral law, the article develops a broader comparative public law analysis grounded in theories of democratic legitimacy and the public sphere. It argues that AI-enabled campaigning strains core assumptions underlying electoral regulation, particularly those relating to shared political discourse, contestability and the traceability of responsibility. While the EU framework represents one of the most ambitious regulatory responses to these risks, its performance during the 2024 elections reveals both the potential and the limits of legal intervention in shaping technologically mediated political communication. The article concludes by drawing out general lessons for electoral democracies, emphasising the role of transparency, institutional design, and public law safeguards in sustaining meaningful democratic contestation in an era of AI-mediated political communication.

From Data Protection to Algorithmic Governance: Platform Accountability in the EU’s Digital Regulatory Framework
1: University of Vienna, Austria; 2: University of Wroclaw, Poland

Big technology companies increasingly exercise quasi-sovereign functions, from governing online speech to structuring digital markets and data infrastructures.
In response to recurring public scandals, the European Union has positioned itself as a global regulatory leader. Yet it remains unclear whether this expanding body of digital law has meaningfully enhanced the accountability of online platforms — or merely reconfigured it. This article develops platform accountability as a critical analytical concept to examine how legal design redistributes power between platforms, Member States, and EU institutions. It asks: Does the EU’s evolving digital framework strengthen democratic control over platforms, or does it consolidate regulatory authority without ensuring effective accountability? The framework is applied to Digital Services regulation and the AI Act, as well as to elements of the Digital Omnibus package, combining case study analysis with selected enforcement practices at the EU and national levels. The analysis shows that the GDPR institutionalised rights-based accountability but exposed structural enforcement deficits rooted in decentralised supervision. The DSA addresses these weaknesses by centralising oversight of very large platforms and empowering the European Commission. The AI Act extends this logic through risk-based compliance obligations. However, the Digital Omnibus simultaneously recalibrates or dismantles certain procedural and institutional safeguards. The result is a shift toward technocratic centralisation that risks substituting democratic accountability with administrative control, leaving unresolved whether the Commission can function as a credible and constrained digital regulator. 
Disinformation, Platform Regulation and Democratic Legitimacy in the European Union: Digital Governance and the Portuguese Experience
Fundação Minerva-Universidade Lusiada, Portugal

Disinformation has become a structural feature of the European digital public sphere, challenging democratic governance not primarily through isolated falsehoods, but through the speed, scale and asymmetry with which misleading content circulates online. Empirical evidence demonstrates that false political information spreads significantly faster than verified content on social media platforms, a dynamic closely linked to algorithmic systems optimised for engagement rather than democratic quality. These developments have profound implications for political competition, public trust and the legitimacy of democratic decision-making across the European Union. Disinformation reshapes the conditions under which democratic preferences are formed, amplifies inequalities of visibility and influence, and weakens deliberative accountability. It exposes the limits of regulatory models originally designed for analogue or broadcast-based media environments. In response, the European Union has progressively reframed disinformation as a matter of systemic risk to democracy, moving from soft-law instruments - such as the 2018 Code of Practice on Disinformation and the European Democracy Action Plan - towards binding regulatory intervention through the Digital Services Act (Regulation (EU) 2022/2065). This paper critically analyses the Digital Services Act as a core instrument of EU digital policy, focusing on its risk-based governance model, obligations for very large online platforms, algorithmic transparency requirements and enhanced enforcement architecture. It argues that the DSA represents a significant attempt to recalibrate the relationship between private platform power and public democratic authority.
At the same time, the paper highlights persistent tensions surrounding freedom of expression, regulatory capacity and uneven national implementation, which limit the transformative potential of EU digital regulation. The Portuguese experience is used as an illustrative case study. Portugal combines relatively low levels of political polarisation and high institutional trust in electoral administration with high social media penetration and strong exposure to transnational information flows. Publicly available data on digital media consumption, electoral participation and trust in democratic institutions reveal how even politically stable democracies remain vulnerable to algorithmically amplified disinformation originating beyond national borders. Portugal thus functions as a revealing implementation laboratory, demonstrating the asymmetries between EU-level regulatory ambition and national enforcement capacities. The paper concludes that addressing disinformation requires more than content moderation or fact-checking. Effective democratic protection in the digital age depends on integrated governance strategies that confront platform incentive structures, strengthen regulatory accountability and reinforce civic resilience. In this sense, EU digital policy must be understood not merely as market or technology regulation, but as a central component of contemporary democratic governance.

