Conference Programme

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that date or at that location. Click on a session for details (with abstracts and downloads, if available).

Note that all times shown refer to the conference time zone. The current conference time is: 16.08.2025 00:06:17 BST

Session Overview
Session
PSG 1 - e-Government_A
Time: Wednesday, 27.08.2025, 13:30 - 15:30

Session Chair: Prof. C. William WEBSTER, University of Stirling

"AI values, ethics and frameworks"


Presentations

The intended effects and perceived implications of the Fundamental Rights and Algorithm Impact Assessment: A governance tool for algorithmic systems in the Dutch public sector

Tynke SCHEPERS

Tilburg University, The Netherlands

In the last couple of decades, algorithmic systems have become integral to government services, as they offer the potential to improve decision-making, increase efficiency, and personalize decisions for individual citizens (Levy et al., 2021). However, the adoption of these systems has not been without challenges, as evidenced by scandals such as the DUO fraud detection case in the Netherlands, which highlighted issues of transparency, accountability, and biased outcomes often associated with the use of algorithmic systems (Arrieta et al., 2020; Cath & Jansen, 2021). In response to these challenges, the Dutch government has designed and implemented various governance instruments, including the Fundamental Rights and Algorithm Impact Assessment (FRAIA), to ensure that algorithmic systems align with democratic values and protect fundamental rights.

The FRAIA is designed to assess the potential impacts of algorithmic systems on fundamental rights, providing a structured framework for evaluating and mitigating risks associated with automated decision-making. This tool reflects a broader governance philosophy that emphasizes transparency, accountability, and ethical considerations in the deployment of algorithmic systems (Gerards et al., 2022). The intended effects of the FRAIA are multifaceted. First, it seeks to modify the behaviour of public institutions and civil servants by encouraging them to adopt a rights-based approach to algorithmic governance. Second, the FRAIA aims to bring about systemic change by altering the institutional environment in which algorithmic systems operate. By integrating ethical considerations into the governance of these systems, the FRAIA reflects a commitment to justice, equity, and democratic governance. This approach is consistent with institutional theories that highlight the role of governance instruments in reshaping governance structures and capacities (Peters, 2018).

However, the effectiveness of the FRAIA is debated. Critics argue that such initiatives may prioritize performative compliance over substantive reforms, leading to superficial solutions that fail to address underlying systemic issues (Cath & Jansen, 2021). The focus on ethical guidelines may also inadvertently emphasize specific values while neglecting others, resulting in an incomplete governance approach. Additionally, the complexity of coordinating among diverse stakeholders poses challenges to effectively implementing the FRAIA (Le Galès, 2011).

This study explores the intended effects and perceived implications of the FRAIA through semi-structured narrative interviews with stakeholders involved in its creation and implementation, supplemented by a policy analysis focused on the creation and evaluation of the FRAIA. By examining their experiences and reflections, the study provides insights into the interplay between governance tools and institutional responses, contributing to a deeper understanding of governance instruments’ potential to increase accountability in the governance of algorithmic systems.

In conclusion, the FRAIA represents a significant effort to ensure responsible algorithmic governance in the Dutch public sector. By making impact assessments obligatory, it aims to promote transparency, accountability, and ethical consideration. However, its success depends on addressing challenges related to performative compliance and stakeholder coordination. Further research is needed to evaluate the FRAIA's long-term impacts and to refine its implementation so that it achieves its intended effects.



Effects of AI Applications on Working Organisation in German Public Administrations

Maren Schuppan, Tino Schuppan

Stein-Hardenberg Institute, Germany

Understanding the effects of artificial intelligence (AI) in public administration requires a perspective that goes beyond technological functionality or policy design. This paper argues that the concept of working organization is essential for grasping how AI actually operates within administrative settings. Without this perspective, the underlying mechanisms and consequences of AI use remain insufficiently understood. The central question we address is: How does AI change the functioning of the machinery of government?

To explore this, the paper investigates how AI applications are transforming work organization in public administration, based on empirical case studies from two key sectors: employment services and local government. These fields offer distinct yet representative insights into how AI affects the structure, coordination, and execution of public sector work.

Our theoretical framework combines the concept of working organization (focusing on skills, task structures, and leadership) with affordance theory. Affordance theory helps to explain how AI technologies enable certain actions, particularly in the context of complex administrative procedures. By integrating these perspectives, we examine how AI applications are embedded in routine administrative practices and how they shape new patterns of work.

Empirically, our findings reveal far-reaching changes in the organization of work. These include shifts in skill requirements, altered leadership, and in some cases, a significant devaluation of higher-skilled or knowledge-intensive tasks. Notably, common challenges seen in previous IT implementations—such as resistance to use or lack of user acceptance—appear to be less prevalent in the AI context. This may suggest a broader change in how digital innovations are received and institutionalized in the public sector.

Importantly, these developments indicate that AI is not simply accelerating existing trends in digital administration, but rather transforming the foundational logic of administrative functioning. Unlike earlier technologies that primarily served to automate routine processes, AI intervenes more deeply in decision-making and professional judgment—thereby reshaping the machinery of government itself.

In conclusion, the paper contributes to a deeper understanding of AI-driven change in public administration by foregrounding the organizational and human dimensions of digital transformation. It highlights the need for governance and management strategies that address not only technological capabilities but also the evolving nature of administrative work. This perspective is vital for policymakers, scholars, and public managers aiming to engage with the realities of AI-enabled transformation in government.



The Evolution and Shaping of Fundamental Rights in the AI Act: Power, Actors, and Framing

Sabrina Kamala KUTSCHER

Tilburg University, The Netherlands

The rapid integration of AI and algorithms into public sector operations presents profound societal challenges beyond the technological sphere. These challenges are deeply intertwined with the evolving framing and protection of fundamental rights (FR), regulatory power dynamics, and the diverse actors shaping policies around these systems. Drawing on Bacchi’s (2009) framework on problem representation, this article explores the historical evolution of AI regulation in the EU, focusing on how the emergence of AI was framed as a problem in relation to FR in order to understand how FR were identified, interpreted, and prioritized.

By critically analyzing these regulatory developments, the article examines the reflexive nature of FR in AI, their interplay with product safety principles, and the actors driving this evolution. The central aim is to trace how EU discourse on AI governance evolved, identifying when AI came to be seen as requiring regulation, and which FR were foregrounded. Accordingly, this article asks: How were fundamental rights problematized under the AI Act and how did this problematization come about during the regulatory process leading up to the AI Act?

Methodologically, a comprehensive, critical discourse analysis of policy documents will be conducted. The EU AI Act serves as a starting point, emphasizing how its FR framing reflects a culmination of earlier regulatory debates and instruments. Using Bacchi’s approach, this paper deepens the understanding of how FR were shaped within problem perceptions and the need for regulation, while critically reflecting on how certain FR – such as privacy or transparency – have been elevated in regulatory discussions, whereas others may have been marginalized.



‘Healthy finances’ and ‘cost-efficient healthcare’: a qualitative case study of values and goal displacement in AI adoption processes

Jinke Mare Oostvogel

Leiden University, The Netherlands

Organizations adopt innovations, such as technologies, to improve performance and effectiveness in response to pressures (Damanpour, 1991). Recently, Artificial Intelligence (AI) has gained significant traction in the public sector as a performance-enhancing miracle (Mergel et al., 2023).

The public sector repeatedly faces a value trade-off between efficiency and efficacy (De Graaf & Van Der Wal, 2010; Young et al., 2022), often prioritizing efficiency over other public value goals and thus reinforcing long-standing problems (Kempeneer & Heylen, 2023). This aligns with a public values framework focusing on market values and rationality (Nabatchi, 2018). However, such frameworks have limitations (Bozeman, 2002). In healthcare, for instance, a gradual shift from a public service to a business model has undermined the resilience and robustness of the system (Enzmann, 2012; Gupta et al., 2019; McGregor, 2001; Mooney, 2012; Sikka et al., 2015).

Public values, representing normative qualities of the public realm, define what is good or desirable and legitimize actions and organizations in the public sector (Antonsen & Beck-Jørgensen, 1997; Beck-Jørgensen & Bozeman, 2007; Bozeman, 2007). These values influence decision-making (Hood, 1991). One negative implication of implementing AI for public values may be goal displacement (Young et al., 2021). Goal displacement occurs when public organizations shift from their originally intended public value goals towards intermediate, more easily measurable goals (Young et al., 2021).

The impact of AI adoption on public values is particularly evident in healthcare, a high-stakes and value-laden sector (Bodenheimer & Sinsky, 2014; Reddy et al., 2020). Facing societal challenges, healthcare organizations increasingly adopt technologies as an adaptive strategy (Turnhout, 2023; Zahlan et al., 2023). AI, in particular, is seen as a way to mitigate significant challenges in healthcare (Braithwaite et al., 2018; Ministerie van Volksgezondheid, 2023; Vermeer, 2024).

This research addresses the following questions:

a) How do different values inform and shape the process of AI adoption, and

b) how does this lead to goal displacement?

We conducted a qualitative case study at the radiology department of a Dutch academic research hospital, currently adopting an AI tool developed by a large medical-technology corporation. The tool is software for Magnetic Resonance Imaging (MRI) machines, aimed at reducing scan times and improving image quality. The hospital’s explicit commitment to public values and its status as an academic research hospital make it an ideal setting to examine the full AI adoption process.

Data includes 55 days of ethnographic observations, 16 interviews, informal conversations, and document analysis. These data are abductively analyzed. The strength of this method lies in its engagement with participants in the field and data triangulation, enhancing the validity of the findings with empirical evidence and rich understanding (Eisenhardt, 1989; Stake, 2005).

This study contributes empirical evidence of goal displacement in AI adoption processes through the lens of public values. Moreover, it pragmatically addresses (potential) public values failure in a healthcare setting. Lastly, it argues for normative evaluations of public sector AI adoption processes and outcomes grounded in public values.



Cybersecurity in governing artificial intelligence

Katarzyna SIENKIEWICZ-MAŁYJUREK

Silesian University of Technology, Poland

Artificial intelligence (AI) technologies play a crucial role in digital transformation. Their rapid development in recent years has created numerous opportunities, leading to significant changes in service delivery processes across all sectors. AI is highly valued because it streamlines processes, enhances decision-making, predicts outcomes, saves time, and automates repetitive activities.

On the other hand, AI uses algorithms to draw conclusions from large amounts of data, which can be sensitive, critical, or even unreliable. In such cases, it can create risks of data loss and of incorrect decisions that are biased or inconsistent with facts and social values. The perception of AI as a "black box" also complicates efforts to ensure transparency in its results and decision-making processes (von Zahn et al., 2025; Hamon et al., 2024). Furthermore, it remains unclear who is responsible for the consequences of incorrect AI decisions.

The importance and magnitude of the problems mentioned above have prompted a thorough search for solutions to mitigate them. In recent years, significant progress has been made in developing legal regulations, both at the national level and through international legislation (Lund et al., 2025; Hamon et al., 2024). One of the primary objectives of this effort is to establish a cybersecurity regime for AI and to ensure that these technologies remain trustworthy.

However, the issue of AI cybersecurity remains in its early development stages (Hamon et al., 2024). Efforts to develop both national and international legal regulations on this topic have only recently begun, and ensuring AI security still relies heavily on the general cybersecurity practices that organisations have used so far. Additionally, AI technologies are frequently employed to enhance the cybersecurity of other systems (e.g. Sarker et al., 2021); since these technologies are not themselves entirely secure, they may introduce additional threats.

Research on cybersecurity regimes in governing AI is limited. Most existing studies are conceptual, focusing primarily on potential threats, the need for improved AI cybersecurity, and international regulations, especially those developed within the European Union. This notable lack of detailed findings and empirical studies motivates the aim of this paper: to answer the following research questions:

1. How does the cybersecurity regime apply to AI governance?

2. What initiatives are being undertaken in AI cybersecurity at the local, national, and international levels?

3. How does cybersecurity affect AI-driven public service delivery?

The answers to the first two questions are derived from a systematic literature review using the Web of Science and Scopus databases. The findings from this review provide a foundation for the empirical research that is essential for addressing the third research question. This empirical research is quantitative and aims to assess the strength and significance of the impact of cybersecurity on AI-driven public service delivery. The study was conducted in 2024 among public managers in 300 cities randomly selected from the 1,013 cities in Poland. The findings enhance both the theory and practice of AI governance in the context of public management.



The Political Economy of AI

Victor BEKKERS

Erasmus University Rotterdam, The Netherlands

If we want to understand what artificial intelligence (AI) implies for public administration, we can distinguish two perspectives. The first focuses on the interplay between the characteristics of AI and the course, content, and effects of policy processes. The second zooms out, focusing on the opportunity structure that AI provides: who benefits? AI influences access to and the distribution of power in society, which also affects the role of politics. I address the latter question, using a political economy approach towards AI. This approach emphasizes that there are different actors in society who have divergent interests and unequal access to resources and power. A discussion about regulating AI should therefore start with a more in-depth analysis of the power relations that lie behind the desire to regulate, thereby sketching the wider ‘power landscape’. The literature shows an emerging interest in adopting such a political economy perspective (Nayak & Walton, 2024; Trajtenberg, 2018; Kasy, 2023).

A political economy perspective raises several analytical questions (Weingast et al., 2008; Bekkers & Moody, 2015). The first concerns the nature of power. AI is not only seen as a set of technologies but is defined as a technological system. Characteristic of a systems approach to technology is that it consists of three interdependent elements: (a) the technological artefact, (b) the activities and resources that are necessary to produce this artefact, and (c) the knowledge that is needed to produce and apply this artefact (Bijker et al., 1987; Hughes, 1987). What are the power resources related to applying AI? In relation to AI, for instance, not only specific AI knowledge is important but also access to data and energy.

The second question relates to understanding the power relations between the relevant actors that exploit the power potential of these AI technologies: how these technologies are developed, exploited, and distributed, and how asymmetric these relations are. Nowadays we witness a concentration of power in the hands of ‘Big Tech’, with serious implications.

The third question concerns the political concerns raised by the production and distribution of AI, for example regarding access to these technologies and democratic control over them. Is politics able to address these issues in such a way that the specific interests of AI companies can be balanced, in a binding way, against the broader public values of society as a whole?

Whether politics is able to create a fair and proper balance is the final question. Its relevance becomes visible if we look at the close relationship between the Trump administration and Big Tech. Who controls the governance of AI? What are the consequences of democratic control over AI, and what are the relevant conditions? And likewise, what are the consequences of a more authoritarian control of AI, and what are the relevant conditions?