The intended effects and perceived implications of the Fundamental Rights and Algorithm Impact Assessment: A governance tool for algorithmic systems in the Dutch public sector
Tynke SCHEPERS
Tilburg University, The Netherlands
In recent decades, algorithmic systems have become integral to government services, as they offer the potential to improve decision-making, increase efficiency, and personalize decisions for individual citizens (Levy et al., 2021). However, the adoption of these systems has not been without challenges, as evidenced by scandals such as the DUO fraud detection case in the Netherlands, which highlighted the issues of transparency, accountability, and biased outcomes often associated with the use of algorithmic systems (Arrieta et al., 2020; Cath & Jansen, 2021). In response to these challenges, the Dutch government has designed and implemented various governance instruments, including the Fundamental Rights and Algorithm Impact Assessment (FRAIA), to ensure that algorithmic systems align with democratic values and protect fundamental rights.
The FRAIA is designed to assess the potential impacts of algorithmic systems on fundamental rights, providing a structured framework for evaluating and mitigating risks associated with automated decision-making. This tool reflects a broader governance philosophy that emphasizes transparency, accountability, and ethical considerations in the deployment of algorithmic systems (Gerards et al., 2022). The intended effects of the FRAIA are multifaceted. First, it seeks to modify the behavior of public institutions and civil servants by encouraging them to adopt a rights-based approach to algorithmic governance. Second, the FRAIA aims to bring about systemic change by altering the institutional environment in which algorithmic systems operate. By integrating ethical considerations into the governance of these systems, the FRAIA reflects a commitment to justice, equity, and democratic governance. This approach is consistent with institutional theories that highlight the role of governance instruments in reshaping governance structures and capacities (Peters, 2018).
However, the effectiveness of the FRAIA is debated. Critics argue that such initiatives may prioritize performative compliance over substantive reforms, leading to superficial solutions that fail to address underlying systemic issues (Cath & Jansen, 2021). The focus on ethical guidelines may also inadvertently emphasize specific values while neglecting others, resulting in an incomplete governance approach. Additionally, the complexity of coordinating among diverse stakeholders poses challenges to effectively implementing the FRAIA (Le Galès, 2011).
This study explores the intended effects and perceived implications of the FRAIA through semi-structured narrative interviews with stakeholders involved in its creation and implementation, supplemented by a policy analysis focused on the creation and evaluation of the FRAIA. By examining their experiences and reflections, the study provides insights into the interplay between governance tools and institutional responses, contributing to a deeper understanding of governance instruments’ potential to increase accountability in the governance of algorithmic systems.
In conclusion, the FRAIA represents a significant effort to ensure responsible algorithmic governance in the Dutch public sector. By making impact assessments obligatory, it aims to promote transparency, accountability, and ethical reflection. However, its success depends on addressing challenges related to performative compliance and stakeholder coordination. Further research is needed to evaluate the FRAIA's long-term impacts and to refine its implementation so that it achieves its intended effects.
Effects of AI Applications on Working Organisation in German Public Administrations
Maren Schuppan, Tino Schuppan
Stein-Hardenberg Institute, Germany
Understanding the effects of artificial intelligence (AI) in public administration requires a perspective that goes beyond technological functionality or policy design. This paper argues that the concept of working organization is essential for grasping how AI actually operates within administrative settings. Without this perspective, the underlying mechanisms and consequences of AI use remain insufficiently understood. The central question we address is: How does AI change the functioning of the machinery of government?
To explore this, the paper investigates how AI applications are transforming work organization in public administration, based on empirical case studies from two key sectors: employment services and local government. These fields offer distinct yet representative insights into how AI affects the structure, coordination, and execution of public sector work.
Our theoretical framework combines the concept of working organization (focusing on skills, task structures, and leadership) with affordance theory. Affordance theory helps to explain how AI technologies enable certain actions, particularly in the context of complex administrative procedures. By integrating these perspectives, we examine how AI applications are embedded in routine administrative practices and how they shape new patterns of work.
Empirically, our findings reveal far-reaching changes in the organization of work. These include shifts in skill requirements, altered leadership practices, and, in some cases, a significant devaluation of higher-skilled or knowledge-intensive tasks. Notably, common challenges seen in previous IT implementations, such as resistance to use or lack of user acceptance, appear to be less prevalent in the AI context. This may suggest a broader change in how digital innovations are received and institutionalized in the public sector.
Importantly, these developments indicate that AI is not simply accelerating existing trends in digital administration, but rather transforming the foundational logic of administrative functioning. Unlike earlier technologies that primarily served to automate routine processes, AI intervenes more deeply in decision-making and professional judgment—thereby reshaping the machinery of government itself.
In conclusion, the paper contributes to a deeper understanding of AI-driven change in public administration by foregrounding the organizational and human dimensions of digital transformation. It highlights the need for governance and management strategies that address not only technological capabilities, but also the evolving nature of administrative work. This perspective is vital for policymakers, scholars, and public managers aiming to engage with the realities of AI-enabled transformation in government.
‘Healthy finances’ and ‘cost-efficient healthcare’: a qualitative case study of values and goal displacement in AI adoption processes
Jinke Mare Oostvogel
Leiden University, the Netherlands
Organizations adopt innovations, such as technologies, to improve performance and effectiveness in response to pressures (Damanpour, 1991). Recently, Artificial Intelligence (AI) has gained significant traction in the public sector as a performance-enhancing miracle (Mergel et al., 2023).
The public sector repeatedly faces a value trade-off between efficiency and efficacy (De Graaf & Van Der Wal, 2010; Young et al., 2022), often prioritizing efficiency over other public value goals and thus reinforcing long-standing problems (Kempeneer & Heylen, 2023). This aligns with a public values framework focused on market values and rationality (Nabatchi, 2018). However, such frameworks have limitations (Bozeman, 2002). In healthcare, for instance, a gradual shift from a public service to a business model has undermined the resilience and robustness of the system (Enzmann, 2012; Gupta et al., 2019; McGregor, 2001; Mooney, 2012; Sikka et al., 2015).
Public values, representing normative qualities of the public realm, define what is good or desirable and legitimize actions and organizations in the public sector (Antonsen & Beck-Jørgensen, 1997; Beck-Jørgensen & Bozeman, 2007; Bozeman, 2007). These values influence decision-making (Hood, 1991). One negative implication of implementing AI for public values may be goal displacement, which can occur when public organizations shift from their originally intended public value goals towards intermediate, more easily measurable goals (Young et al., 2021).
The impact of AI adoption on public values is particularly evident in healthcare, a high-stakes and value-laden sector (Bodenheimer & Sinsky, 2014; Reddy et al., 2020). Facing societal challenges, healthcare organizations increasingly adopt technologies as an adaptive strategy (Turnhout, 2023; Zahlan et al., 2023). AI, in particular, is seen as a way to mitigate significant challenges in healthcare (Braithwaite et al., 2018; Ministerie van Volksgezondheid, 2023; Vermeer, 2024).
This research addresses the following questions:
a) How do different values inform and shape the process of AI adoption;
b) and how does this lead to goal displacement?
We conducted a qualitative case study at the radiology department of a Dutch academic research hospital, currently adopting an AI tool developed by a large medical-technology corporation. The tool is software for Magnetic Resonance Imaging (MRI) machines, aimed at reducing scan times and improving image quality. The hospital’s explicit commitment to public values and its status as an academic research hospital make it an ideal setting to examine the full AI adoption process.
The data include 55 days of ethnographic observations, 16 interviews, informal conversations, and document analysis, and were analyzed abductively. The strength of this method lies in its engagement with participants in the field and in data triangulation, which enhance the validity of the findings through empirical evidence and rich understanding (Eisenhardt, 1989; Stake, 2005).
This study aims to contribute empirical evidence of goal displacement in AI adoption processes through the lens of public values. Moreover, it pragmatically addresses (potential) public values failure in a healthcare setting. Lastly, it argues for normative evaluations of public sector AI adoption processes and outcomes grounded in public values.
Cybersecurity in governing artificial intelligence
Katarzyna SIENKIEWICZ-MAŁYJUREK
Silesian University of Technology, Poland
Artificial intelligence (AI) technologies play a crucial role in digital transformation. Their rapid development in recent years has created numerous opportunities, leading to significant changes in service delivery processes across all sectors. AI is highly valued because it streamlines processes, enhances decision-making, predicts outcomes, saves time, and automates repetitive activities.
On the other hand, AI uses algorithms to draw conclusions from large amounts of data, which can be sensitive, critical, or even unreliable. This creates risks of data loss and of incorrect decisions that may be biased or inconsistent with facts and social values. The perception of AI as a "black box" also complicates efforts to ensure transparency in its results and decision-making processes (von Zahn et al., 2025; Hamon et al., 2024). Furthermore, it often remains unclear who is responsible for the consequences of incorrect AI decisions.
The importance and magnitude of these problems have prompted a thorough search for solutions to mitigate them. In recent years, significant progress has been made in developing legal regulations, both at the national level and through international legislation (Lund et al., 2025; Hamon et al., 2024). One of the primary objectives of this effort is to establish a cybersecurity regime for AI and to ensure that these technologies remain trustworthy.
However, the issue of AI cybersecurity is still in its early stages of development (Hamon et al., 2024). Efforts to develop national and international legal regulations on this topic have only recently begun, and ensuring AI security still relies heavily on the general cybersecurity practices that organisations have used to date. Additionally, AI technologies are frequently employed to enhance the cybersecurity of other systems (e.g. Sarker et al., 2021); since these technologies are not themselves entirely secure, their use may introduce additional threats.
Research on cybersecurity regimes in governing AI is limited. Most existing studies are conceptual, focusing primarily on potential threats, the need for improved AI cybersecurity, and international regulations, especially those developed within the European Union. This notable lack of detailed findings and empirical studies motivates the aim of this paper, which is to answer the following research questions:
1. How does the cybersecurity regime apply to AI governance?
2. What initiatives are being undertaken in AI cybersecurity at the local, national, and international levels?
3. How does cybersecurity affect AI-driven public service delivery?
The answers to the first two questions are derived from a systematic literature review using the Web of Science and Scopus databases. The findings from this review provide a foundation for the empirical research needed to address the third research question. This empirical research is quantitative and aims to assess the strength and significance of the impact of cybersecurity on AI-driven public service delivery. The study was conducted in 2024 among public managers in 300 cities randomly selected from the 1,013 cities in Poland. The findings enhance both the theory and practice of AI governance in the context of public management.