Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference (BST).

Session Overview

Session: PSG 1 - e-Government_A
Time: Thursday, 28/Aug/2025, 8:30am - 10:30am

Session Chair: Prof. C. William WEBSTER, University of Stirling

"AI Governance (book)"

Presentations

AI governance: Indices, benchmarks and frameworks

Karl LÖFGREN, Andrew JACKSON

Victoria University of Wellington, New Zealand

The emerging technologies of generative artificial intelligence systems and platforms have been accompanied by several international frameworks for assessing (among other things) the quality of regulation, the maturity of national use of the technology and, more broadly, technical advancement. In addition to transnational guiding principles such as the OECD's Global Partnership on AI (GPAI) (2024), the Hiroshima AI Process (2023) and the variety of policies and guidelines from the European Union, we are witnessing a mushrooming industry of maturity and readiness indices, benchmarks, and lists of best practices in which various jurisdictions are juxtaposed and ranked.

While it is impossible to identify a coherent and unified voice behind these rankings, they operate as policy instruments insofar as they generate norms and values about what is considered 'the norm' and indirectly affect other policy instruments governing AI. Values such as efficiency, accountability, transparency, effectiveness and consistency are all being ingrained in the indices produced by various actors.

But these rankings also come with some weaknesses similar to those already identified in rankings of electronic or digital government (Bannister, 2007). The first problem lies on the input side of performance and technological advances. This includes the choice of indicators being dependent on access to data ('convenience sampling'), large countries and microstates being juxtaposed (e.g., the US and Singapore), the widespread use of desktop studies with little (if any) deeper understanding of the constitutional and cultural context, and finally quantitative measures counting the number of available online services (rather than their actual public use). Second, the widespread use of performance measurement instruments has been heavily criticised for creating perverse and sometimes even dysfunctional effects through unintended consequences (Lewis, 2015). Among these, we find evidence of 'gaming' the system: making sure that output requirements are met without really attaining the desired outcomes (Hood, 2002; Bevan and Hood, 2006). Instead of seeking better outcomes for the organisation, the performance regime incentivises and encourages individuals to obtain higher scores that either return some benefits or at least deflect sanctions (ibid.). At the elevated (national) level, digital government benchmarks and indices seem to play the role of 'branding exercises' for nations (showcasing 'sophistication') rather than producing better services for citizens.

In this chapter, we present the benefits and risks of benchmarking and rankings as portrayed in the academic literature. We furthermore discuss how they affect generative AI. Finally, this is complemented with some empirical examples.



Surveying the Future-Readiness of Governance Structures for Agentic AI in Public-Sector Organizations

Chris Schmitz1, Jonathan Rystrøm2, Jan Batzner3,4

1Centre for Digital Governance, Hertie School, Germany; 2Oxford Internet Institute, University of Oxford, UK; 3Weizenbaum Institute, Germany; 4Technical University of Munich, Germany

Artificial Intelligence (AI) systems promise enormous gains in efficiency, administrative capacity, and quality of service delivery for public-sector organizations (PSOs). Simultaneously, these organizations are subject to particularly stringent process justice, transparency, and accountability requirements. The opacity, disputed moral agency, and inherent risks of extant AI systems can threaten these requirements. "Agents" – AI systems which can work towards underspecified goals and integrate with existing tools with significant autonomy – pose particularly acute versions of these risks, but also proportionally great potential, due to their ubiquitous applicability. As with previous forms of digitalization, PSOs seek to counter these risks with AI governance structures. In this work, we survey these governance structures and evaluate their suitability, efficacy, and scalability for the governance of current and future agentic AI systems.

We first provide an overview of existing governance structures for AI in PSOs, compiled using a mixed-methods approach consisting of a systematic literature review, practitioner interviews, and qualitative coding of state AI governance strategies. We identify three archetypes of governance structures. First, some PSOs integrate AI governance into their existing governance of digital processes, with responsibilities covered by existing units such as those handling cybersecurity, data protection, and digital innovation. Second, some PSOs create new units specifically for AI-related governance. These units are frequently integrated into AI projects in roles similar to those of existing governance and compliance units, but are sometimes connected to more proactive teams, such as AI “centres of excellence”. Last, some PSOs create AI governance regimes that break the mold of typical bureaucratic structures, such as expert networks or interdisciplinary project teams. These archetypes enable and inhibit the adoption of AI systems to varying degrees. Many PSOs combine elements from multiple archetypes.

By investigating the development trajectory of agentic AI, we posit that many existing governance structures adapt poorly to agent governance, or soon will. Agentic AI systems are increasingly ubiquitous, closely integrated with existing tools, and autonomous across chains of tasks and domains. The resulting changes in governance requirements pose severe challenges for current structures: granular oversight requires tight integration of governance functions into operative units; continuous oversight cannot be provided efficiently by systems optimized for one-off approvals; and expert-involved oversight cannot be provided by siloed governance teams.

We argue that these challenges, coupled with the risk-averse nature of PSOs, may unnecessarily inhibit AI uptake, even where the safety of this uptake could be guaranteed both technically and institutionally. In conservative PSOs, these structures may therefore contribute to a widening gulf between the capacity of states and that of non-state actors. Conversely, in more proactive PSOs, individual departments may implement agents without proper oversight, threatening organizational coherence and safety. We conclude by identifying governance approaches that combine future-readiness and flexibility with the maintenance of organizational legitimacy, thereby providing a promising way forward for the safe and productive use of agentic AI by PSOs.



How Does the Media View Artificial Intelligence: An Analysis of News Frames in Korean Media Editorials on AI

Suji KIM, Jisuk WOO

Seoul National University, Republic of Korea

This study investigates media perspectives on artificial intelligence (AI) through a content analysis of Korean newspaper editorials, exploring how the media define problems, identify causes, and suggest solutions for this emerging technology. Because AI is still in its early stages, it remains open to public deliberation and is shaped by public expectations, yet understanding the technology's potential benefits and risks is increasingly complex.

News media play a crucial role in shaping public perceptions and understanding of emerging technologies. Framing theory enables the identification of the frames that media outlets employ to interpret and present technological issues. Using an inductive approach with hierarchical cluster analysis of individual frame elements (a hypothetical clustering sketch follows the list below), the study systematically identifies five distinct AI frames in Korean media editorials:

1. Concern about falling behind in the global AI race and the role of the government: This frame highlights Korea's success in semiconductors and ICT while emphasizing the need to address regulatory constraints and create an environment for innovative research.

2. Highlighting the pros and cons of AI technological development: The frame balances admiration for AI's rapid development with concerns about potential threats to human values, stressing the importance of maintaining AI as a human companion.

3. Urging the establishment of legal frameworks to address AI's negative impacts: The frame highlights concern about AI's potential to undermine democratic processes, particularly through AI-generated disinformation. It advocates for aligning Korea's legal system with international legislative efforts while balancing AI regulation and promotion.

4. Emphasizing the need for economic support for AI development: Recognizing AI as a resource-intensive technology, the frame calls for government support to maintain global competitiveness, drawing on Korea's history of technological catch-up.

5. Criticism of the incompetence of politics and a call for legislation: The frame critiques the polarized political landscape that delays timely policy responses to AI challenges and opportunities.
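The inductive identification of these frames can be pictured with a small clustering sketch. The Python snippet below is a hypothetical illustration only, not the study's actual pipeline: it assumes a binary editorial-by-frame-element coding matrix (here filled with placeholder random data), clusters editorials with SciPy's hierarchical clustering over a Jaccard distance, and cuts the dendrogram into five clusters to mirror the five frames above. All variable names and the coding scheme are invented for illustration.

# Hypothetical sketch of inductive frame identification via hierarchical
# cluster analysis; the data and names are illustrative, not the study's.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Rows: editorials; columns: binary frame elements (problem definition,
# causal attribution, moral evaluation, proposed solution, ...).
# In the study this matrix would come from manual qualitative coding;
# here it is random placeholder data.
rng = np.random.default_rng(0)
coding_matrix = rng.integers(0, 2, size=(120, 12)).astype(bool)

# Jaccard distance suits binary presence/absence codings; average linkage
# builds the dendrogram over editorials.
distances = pdist(coding_matrix, metric="jaccard")
tree = linkage(distances, method="average")

# Cut the tree into five clusters, matching the five frames reported above.
frame_labels = fcluster(tree, t=5, criterion="maxclust")

# Characterize each candidate frame by its most frequent frame elements.
for k in range(1, 6):
    members = coding_matrix[frame_labels == k]
    top = np.argsort(members.mean(axis=0))[::-1][:3]
    print(f"Frame {k}: {len(members)} editorials, top elements {top.tolist()}")

With real coded data, each cluster's dominant frame elements would then be interpreted qualitatively, which is what the abstract's inductive approach implies.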

The research reveals that political orientation significantly shapes media narratives surrounding AI. Conservative media predominantly emphasize economic and national security implications, while progressive media prioritize human-AI coexistence and ethical considerations. Notably, Frames 3 and 4 demonstrate remarkable convergence across the political spectrum, suggesting a potential common ground in AI discourse despite ideological differences.

By identifying the media frames used to construct AI narratives across different media outlets, the research provides nuanced insights into the complex interactions among media discourse, public expectations, and technological policy formation. It offers critical perspectives on how media conceptualize AI and suggests practical implications for AI governance.