Conference Agenda
Session
PSG 15 - Public Administration, Technology and Innovation (PATI)
"AI adoption and public value"

Presentations
Adopting AI in public organizations and its impact on public values: A systematic review and conceptual framework
1 Reykjavík University, Iceland; 2 Tampere University, Finland

Background/context of the review: The adoption of artificial intelligence (AI) in the public sector is rapidly increasing, driven by its potential to enhance service delivery (including quality and equity), organizational effectiveness (better use of resources), and efficiency in government operations (economic rationality). AI systems, defined by the OECD as systems that make predictions, recommendations, or decisions based on data, are being implemented through modernization programs to achieve AI's potential across governance functions (service delivery, internal management of organizations, and policymaking), as demonstrated in national AI strategies. These initiatives have significant implications for public organizations, disrupting the very public values they promise to achieve and protect.

Research aims: The objective of this review paper is to systematically analyze the public value impacts of AI applications in the public sector. Specifically, it aims to understand how public organizations realize value from their AI investments and to identify the challenges associated with creating public value through AI. The study seeks to contribute to the existing literature by providing a comprehensive framework for evaluating the public value impacts of AI in government settings.

Research approach: The study employs a systematic literature review (SLR) methodology, using the Web of Science portal to identify relevant articles. A topic search query with specific keywords and filters was applied (e.g., "AI" AND "public sector" AND "value creation" OR "value co-creation", etc.), resulting in 72 articles, of which 35 were selected based on predefined criteria (e.g., a required focus on the public sector/public value). The selected articles were subjected to qualitative content analysis, using a coding frame adapted from a PVM-in-digital-transformation framework (Karunasena et al., 2011). This approach focuses on dimensions related to service delivery, organizational effectiveness, and trust, allowing for a flexible approach in which additional indicators emerge from the analysis process.
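As an illustration of the retrieval and screening step described above, the following minimal Python sketch composes the kind of boolean topic query quoted in the abstract and filters an exported record list against the stated inclusion criteria. The file name, column names, and filter logic are hypothetical assumptions, not the authors' actual protocol.

```python
# Hypothetical reconstruction of the retrieval and screening step;
# the export file, column names, and filter are illustrative assumptions only.
import pandas as pd

# Boolean topic query of the kind quoted in the abstract
query = '"AI" AND "public sector" AND ("value creation" OR "value co-creation")'
print("Topic search:", query)

# Assumed CSV export of the retrieved records (72 in the review)
records = pd.read_csv("wos_export.csv")

# Screen against the predefined criteria: a focus on the public sector / public value
mask = (
    records["abstract"].str.contains("public value", case=False, na=False)
    | records["abstract"].str.contains("public sector", case=False, na=False)
)
included = records[mask]  # the review retained 35 articles after full screening
included.to_csv("included_articles.csv", index=False)
```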
Results from the literature review: The content analysis revealed several major findings, including:
- AI-enabled services: AI bots can positively influence value creation through perceived usefulness and enjoyment. AI enables value co-creation with citizens and improves public services in local government settings.
- Effectiveness of public organizations: AI applications support accurate decision-making and provide high-quality public services. Big data helps governments understand citizens' needs, leading to improved management of resources.
- Development of trust: AI affects procedural justice and trust in government. Research suggests that citizens perceive rule-based AI systems as generally fairer and more acceptable than data-driven systems.
Challenges were also identified, including the need for transparent and explainable algorithmic decision-making to avoid public value failure and the significant role (and power) of corporate technology companies in shaping AI policy.

Contribution to digital governance literature: This review paper contributes to research by developing an evaluation framework for creating public value through AI, extending existing frameworks to include AI-specific indicators. It highlights the benefits and challenges of AI in the public sector, emphasizing the need for ethical considerations and transparent governance. The findings provide a foundation for future research, encouraging close cooperation among universities, industry, and government to address the ethical and legal uncertainties surrounding public sector AI applications.

Artificial Intelligence Readiness in Japanese Local Governments: Analysing obstacles and opportunities from survey data
Chuo University, Japan

The emergence of artificial intelligence (AI) within public administration is anticipated to be a catalyst for significant improvements in operational efficiency and service quality, especially at the local level. Specifically, local governments can leverage AI to identify trends, forecast outcomes, and make evidence-based decisions, resulting in more efficient functioning and enhanced responsiveness to citizen needs. However, successful AI integration requires comprehensive organisational readiness, with public managers fully aware of all possible internal and external constraints that may affect AI readiness. This research seeks to understand the successes and challenges faced by early adopters of AI and to underscore the critical role of AI in local governments. The research will therefore focus on identifying the determinants that enable or constrain the AI readiness of local governments, based on the Technology-Organization-Environment (TOE) framework. Data will be obtained from validated questionnaires distributed to public managers in Japanese local governments. The complex relationships between the determinants of AI readiness (e.g., direct benefits, financial costs, organisational innovativeness, government pressure, citizen pressure, and government incentives) and its components (e.g., technical skills, business skills, data, technology, and basic resources) will be examined using statistical techniques such as regression analysis, factor analysis, and structural equation modelling (see the sketch after this abstract).

The paper presents the first set of results from the comparative research between Japan and Slovenia, focusing mostly on the Japanese survey data. The research is supported by JSPS-MESS bilateral research funding (research ID: JPJSBP 120255004; 2025-2028). Since the current research, which has only just started, will not produce significant results by the time of the conference, the paper will be based mostly on the survey results of the previous comparative research (funded by JSPS-MESS bilateral research funding, research ID JPJSBP 120205004; 2020-2023, titled "Public administration models and principles: Slovenia and Japan in a comparative perspective") to understand the tendency of AI readiness in local governments from the surveyed data on digitalization-related questions. Indeed, in the previous survey, public managers in both countries assessed digital era governance elements as more prominent at the local than at the state level. The results confirmed that e-government activities are not progressing in Japan. The Digital Agency was established in September 2021 at the national level, while in the past each ministry, agency, and local government had been promoting digitalisation separately, a practice the Covid-19 pandemic highlighted as ineffective, according to the Digital Agency (2021). However, digitalisation in Japan is lagging at both the national and local levels. In the survey, more than 70% of national public employees disagreed with the statement "Advances in information technology have greatly improved your workloads". It may be said that digitalisation is progressing in Japan, but not in a user-friendly manner. In conclusion, the paper addresses the obstacles, issues, and opportunities identified through the survey to understand trends in AI readiness in local governments.
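As a rough indication of the analysis described in the abstract above, the sketch below regresses one AI-readiness component on the TOE determinants listed there. The data file, variable names, and model specification are assumptions for illustration, not the study's actual instrument or models.

```python
# Minimal sketch of the described analysis, with hypothetical variable names;
# the survey instrument, scales, and models are the authors', not reproduced here.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey export: one row per responding public manager,
# TOE determinants and an AI-readiness component as Likert-scale scores.
df = pd.read_csv("toe_survey.csv")  # assumed file name

# Regression of one readiness component (e.g., technical skills)
# on the TOE determinants named in the abstract.
model = smf.ols(
    "technical_skills ~ direct_benefits + financial_costs"
    " + organisational_innovativeness + government_pressure"
    " + citizen_pressure + government_incentives",
    data=df,
).fit()
print(model.summary())

# Factor analysis and full structural equation modelling (e.g., with
# factor_analyzer or semopy) would follow the same logic, treating the
# readiness components as latent constructs measured by survey items.
```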
Accountability as Herding AI Cats: The Contingent Role of Decision Outcomes in Algorithmic Bureaucracy's Legitimacy
Nanjing University, People's Republic of China

The global rise of artificial intelligence (AI) in public decision-making is fundamentally reshaping state legitimacy by transferring administrative discretion from political bureaucracies to algorithmic bureaucracies. The literature highlights the need to reconcile AI's accountability-legitimacy tensions, with challenges often likened to "herding cats" due to their complexity and unpredictability. Studies show that algorithmic bureaucracy exacerbates "forum drifting" (eroding citizen engagement), while citizens prioritize effectiveness over transparency, creating a paradox in which governments invest heavily in AI accountability despite high costs and weak public demand. By adapting the theory of bureaucratic legitimacy to the context of algorithmic bureaucracy, we explore how AI accountability and AI effectiveness, both independently and jointly, shape citizens' perceptions of the legitimacy of algorithmic bureaucracy. Drawing on the theory of forum drifting and procedural fairness theory, we suspect that citizens' evaluations of algorithmic legitimacy may be contingent on whether AI decisions bring them gains or losses. Taken together, we seek to answer the following research question: Taking decision outcomes for citizens into account, whether and to what extent do AI accountability arrangements affect citizens' legitimacy perceptions of AI usage?

Based on the findings of Schillemans and Busuioc (2014), Brummel and de Blok (2024), and König et al. (2024), we propose:
H1: Decision-making procedures using AI with high effectiveness lead to higher legitimacy perceptions than those with low effectiveness.
H2: AI decision-making procedures with explicit accountability arrangements for compliance (H2.1) or for results (H2.2) lead to higher legitimacy perceptions than those without any accountability arrangements.
H3: Accountability arrangements are more likely to enhance legitimacy perceptions when AI decisions lead to negative rather than positive outcomes.
H4: AI effectiveness is more likely to enhance legitimacy perceptions when AI decisions lead to positive rather than negative outcomes.
H5: Accountability arrangements and effectiveness function as complementary factors in enhancing AI legitimacy across both positive and negative outcome scenarios.

We conducted three factorial survey experiments with 1,539 Chinese citizens simulating housing welfare decisions: a 2 × 2 design (Effectiveness * Outcome), a 3 × 2 design (Accountability * Outcome), and a combined (3 × 2 + 3 × 2 × 2) design (Accountability * Effectiveness + Accountability * Effectiveness * Outcome). We manipulated three key factors: (1) accountability type (no accountability, accountability for compliance, and accountability for results); (2) decision outcome (negative vs. positive); and (3) effectiveness level (high vs. low).
This experimental design enables us to examine whether and how different accountability arrangements and effectiveness levels influence legitimacy perceptions (operationalized as fairness, acceptance, and trust); a sketch of how such a design might be analyzed follows this abstract. The findings, which underline the importance of context in evaluating the interplay between AI effectiveness and accountability, indicate that: (1) AI effectiveness consistently enhances legitimacy perceptions, with stronger effects in positive decision contexts; (2) AI accountability significantly boosts legitimacy only in negative decision scenarios; (3) interaction effects demonstrate a substitution dynamic, in that combining effectiveness and accountability reduces marginal legitimacy gains; and (4) there is contextual divergence: under favorable outcomes, combined implementation maintains baseline legitimacy without additive effects.
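By way of illustration, the following minimal sketch shows how a factorial survey experiment of this kind could be analyzed with an interaction model. The data file, variable names, and index construction are hypothetical and do not reproduce the authors' materials.

```python
# Illustrative analysis of a factorial survey experiment; the data file,
# variable names, and legitimacy index are assumptions, not the study's own.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data: legitimacy is taken as the mean of the
# fairness, acceptance, and trust items; the factors follow the manipulations above.
df = pd.read_csv("experiment_data.csv")  # assumed file name
df["legitimacy"] = df[["fairness", "acceptance", "trust"]].mean(axis=1)

# Full interaction model for the combined design: accountability type
# (none / compliance / results), effectiveness (low / high), and outcome
# (negative / positive), covering the interactions tested in H3-H5.
model = smf.ols(
    "legitimacy ~ C(accountability) * C(effectiveness) * C(outcome)",
    data=df,
).fit()
print(model.summary())
```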