Algorithmic Gatekeeping: Welfare Automation and the Foundations of a Fair Administrative State
Sangh Rakshita, Sofia Ranchordas
Tilburg University, Netherlands
Welfare governance has been rapidly digitalized and automated. The algorithmic infrastructures embedded in welfare systems (from biometric authentication and authorisation to automated eligibility checks) promise efficiency, scalability and fraud control. Yet in doing so, they operate through logics of exclusion and bias (Masiero, 2025). This reconfiguration of welfare access and use around compliance, traceability and legibility often comes at the expense of inclusion, fairness and equality.
In this paper, we outline a systematic two-fold shift in welfare automation: first, from solidarity-based entitlements to conditionality-driven governance; second, from rights-based to opaque and contingent access regimes. Together, these shifts have adverse impacts on efficient delivery and, more worryingly, on the moral foundation of the welfare state itself, namely a commitment to social solidarity and equal inclusion. Through a comparative analysis of India’s Aadhaar-based welfare infrastructure and the UK’s Universal Credit, we highlight how algorithmic infrastructures institutionalise bias-by-design, embedding normative assumptions about deservingness and risk into welfare systems.
In India, Aadhaar failures have excluded elderly, disabled, and tribal communities from basic entitlements and, combined with profiling systems such as Samagra Vedika, have enabled further algorithmic exclusion from essential services. While Puttaswamy v Union of India (2018) upheld Aadhaar’s legality, the judgment remains controversial for failing to effectively utilise equality and administrative law safeguards, particularly for socially marginalised groups. In the UK, Universal Credit’s automation has produced unlawful outcomes for non-standard earners (R (Johnson) v Secretary of State for Work and Pensions (2018)) and disproportionately harms women, lone parents, and disabled people. These failures reflect a regulatory transformation in how welfare is designed and delivered. While the digital welfare state remains trapped between soft AI ethics and under-enforced anti-discrimination regulation, algorithmic systems risk hardwiring exclusion and bias into governance, embedding them ever deeper into conditionality.
We analyse these harms through two intersecting legal regimes: first, administrative law’s rule against bias, particularly the duty of impartiality; and second, discrimination law, which prohibits both direct and indirect discrimination unless justified by a legitimate, proportionate aim. This offers an innovative framing of the conditionality–solidarity tension in the digital welfare state, using the rule against bias and anti-discrimination law to interrogate the design of welfare algorithmic infrastructures. Such an approach reconceptualises welfare automation not as a neutral technical development but as a site where normative values are embedded: the design of algorithmic systems (‘code as law’) can either entrench exclusion or be redirected towards solidarity and equal access.
In doing so, the paper draws on multidisciplinary perspectives from information systems, public administration, political science, regulatory studies, and law to develop an integrative framework for understanding algorithmic gatekeeping. This approach enables a richer account of how legal principles can guide the institutional design of welfare technologies.
Through this paper, we aim to contribute a doctrinal and normative framework for recognising digital and automated welfare as a site of algorithmic conditionality, institutionalised exclusion, and legal displacement. We also propose safeguards (impact assessments, appeal-by-design, and participatory oversight) to re-embed automation within solidarity, equality and fair administration.
The right to information regarding administrative decisions using AI systems: constraints and positive side effects
Ana NEVES
Lisbon University, Portugal
The right to information held by public bodies is widely recognised in international and European legal instruments and, at the national level, in constitutions and legislative acts. In the context of administrative decision-making, this right protects individuals’ rights and interests and ensures their informed participation in the procedure.
Administrative decisions based on artificial intelligence (AI) systems raise concerns about transparency. However, it is also argued that the use of AI can enhance, rather than reduce, transparency ― particularly through the right to explanation or the duty to provide meaningful information about how such systems operate.
In practice, difficulties in accessing information have emerged in several cases involving the use of AI in administrative procedures and related review procedures, with adverse effects on the individuals concerned (e.g., the Netherlands SyRI case; the UK Post Office case; Consiglio di Stato, Judgment No. 2270/19; and Houston Federation of Teachers Local 2415 et al. v. Houston Independent School District).
One may argue that existing legal frameworks on the right to information are not sufficiently robust to ensure its effective exercise in procedures involving AI ― especially given that protection of the right is, in general, insufficient.
In this context, it is important to examine the subjective and objective scope of the right (i.e., to whom it applies and what its content encompasses), and to consider how its effectiveness can be guaranteed. It is also relevant to reflect on whether AI-specific legal frameworks can help address the practical and regulatory shortcomings of the right to information ― or even prompt a broader revision of such legal guarantees, with potentially positive side effects.
Artificial Intelligence and Administrative Bias in Public Administration Research: A Bibliometric Analysis
Nejc BREZOVAR, Lan UMEK, Dejan RAVŠELJ
Faculty of Public Administration, University of Ljubljana, Slovenia
This paper explores the relationship between artificial intelligence (AI) and administrative bias in wider public administration through a comprehensive bibliometric analysis of 1,062 academic publications indexed in the Scopus database (1990–2025). The study reveals a rapid increase in scholarly interest, particularly since 2017, reflecting growing concern about the ethical and legal implications of algorithmic administrative decision-making. While the literature increasingly addresses themes such as algorithmic fairness, transparency, and human oversight, research on real-world applications within administrative procedures remains limited (but growing). The paper highlights the dual potential of AI to both reduce and amplify administrative bias in administrative decision-making. Findings underscore the need for safeguards to ensure accountability, fairness, and respect for fundamental human rights in the digitalization of public administration decision-making processes.
Law and Bias Risks in Artificial Intelligence Readiness: Evidence from the Municipal Administration
Matej BABŠEK, Primož PEVCIN, Katja DEBELAK
Faculty of Public Administration, University of Ljubljana, Slovenia
Local governments play a key role in the delivery of services and the protection of individual rights, as they are the main interface between citizens and the public administration. With the increasing integration of artificial intelligence (AI) technologies into the work of local governments, the legal uncertainties related to their application, concerning bias, breaches of impartiality and other procedural guarantees, are becoming more pressing. To mitigate these risks, legal frameworks such as the EU AI Act and the GDPR, as well as national laws, have been developed. However, the implementation of these laws at the local level remains unclear, particularly in smaller or resource-constrained local governments. This uncertainty, combined with a lack of AI readiness in local authorities, increases the risk of breaching fundamental principles of administrative (procedural) law. This problem becomes particularly critical when AI is applied in administrative procedures that directly affect the rights of individuals. The ability of local authorities to understand and effectively apply AI-related legal norms is critical to ensure compliance and maintain public trust in AI-driven processes.
This study explores the legal ambiguities associated with the use of AI in local government as a source of systematic risk for bias in administrative procedures. Using a case study of the municipal administration of the Log-Dragomer municipality in Slovenia, the study analyses how local officials understand, interpret and implement the legal obligations related to the adoption of AI in public administration. The analysis focuses on the application of EU and Slovenian legal norms and examines how these laws are implemented in AI-related administrative decisions. Hence, the research questions of this study are: (1) How do municipal officials interpret their legal responsibilities regarding the use of AI in administrative matters?; (2) How do local authorities assess and mitigate the risks of bias in administrative procedures, either as a product of AI systems or as a result of existing administrative practices, when AI is introduced?; and (3) What measures are taken at the local level to ensure compliance with the fundamental legal principle of impartiality in AI-driven decision-making processes?
Through a combination of legal doctrinal analysis and empirical research, including a targeted survey of municipal employees and semi-structured interviews with municipal leaders, this study examines how the legal norms related to AI are understood and applied in practice. Particular attention is paid to the adequacy of the legal framework at local level, the perception of potential legal risks and the measures taken to ensure compliance with procedural safeguards. Finally, we investigate the level of awareness of the likely trajectories and negative outcomes should these risks materialise. This research contributes to the discourse on the role of law in supporting digital transformation at the local level and aims to strengthen procedural fairness and ensure that the adoption of AI reinforces, rather than undermines, fundamental principles of good governance. In addition, the research contributes to the understanding of the systematic risks associated with AI utilisation in local governments and identifies possible actions to mitigate those risks.
Conflicting and passive strategies contribute to the paralysis of the energy transition. A case study of the Dutch inland shipping sector
Nuria COMA-CROS1,2, Wouter Spekkink1, Ron van Duin2,3, Jurian Edelenbos1
1Erasmus University Rotterdam, The Netherlands; 2Center of Expertise HRTech, Rotterdam University of Applied Sciences, The Netherlands; 3Faculty of Technology, Policy & Management, Delft University of Technology, The Netherlands
Efforts towards energy transitions around the world are frequently stalled or delayed. We conceptualize this as transition paralysis, and understand it as a potential outcome of strategic interactions between actors involved in these transitions. While transition studies provide a strong foundation for understanding long-term transitions, they are less equipped to understand the strategic causes of paralysis. To address this gap, we supplement insights from transition studies with insights from literature on network governance and consumer behaviour. We bring these bodies of literature together in a framework that distinguishes between two types of paralysis: conflict paralysis (driven by conflicting interests and perceptions of problems and solutions) and choice paralysis (driven by weak expectations and high uncertainty). In addition, the role of power relations based on resource ownership is also included as a driver of actors’ strategies. We use the Dutch inland shipping sector as a critical case to examine transition paralysis within a complex network of actors and a landscape of potential competing fuel alternatives. These forms of paralysis are traced to the strategic interactions between actors and the drivers that shape actors’ conflicting and passive strategies. In doing so, the paper aims to advance a deeper understanding of transition paralysis.