Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference.

 
 
Session Overview
Session
PSG 22 - Behavioural Public Administration
Time:
Wednesday, 27/Aug/2025:
8:30am - 10:30am

Session Chair: Dr. Sheeling NEO, American University

Presentations

The Interplay of Representation and Artificial Intelligence in Healthcare: Implications for Citizens’ Trust

Tanachia Ashikali, Petra van den Bekerom, Nadine Raaphorst, Martin Sievert, Andrei Poama

Leiden University, The Netherlands

This study examines how the interplay between bureaucratic representation—specifically skin color congruence between patients and providers—and AI-driven decision-making affects citizens’ trust in healthcare services.

This focus is relevant for several reasons. Most importantly, the underrepresentation of marginalized groups among healthcare professionals and in diagnostic tools has been linked to disparities in diagnosis and treatment (Groh et al., 2024). While representative bureaucracy theory suggests that social identity congruence can mitigate such disparities, the integration of AI into healthcare introduces new complexities. AI may reduce human bias and promote equitable outcomes, yet concerns remain that systems trained on non-representative data may reinforce existing inequities (Compton et al., 2023). Trust in healthcare is a critical outcome because it influences patients’ willingness to seek care and follow medical advice (Mechanic & Schlesinger, 1996).

We combine these perspectives by moving beyond simply testing whether AI use influences trust. We also examine the role of representation, both human and algorithmic. Specifically, we investigate whether AI use reinforces or undermines the symbolic effects of passive representation. Our design focuses on an AI-driven skin cancer detection app, a tool exemplifying the growing role of AI in clinical practice. In the coming months, we will conduct a randomized survey experiment simulating a doctor–patient consultation, involving several treatments:

• Skin color of the general practitioner (light, medium, or dark);

• AI diagnostic tool (use vs. no use);

• Representativeness of the AI’s training data (diverse vs. biased).

We test whether GP–patient skin color congruence increases trust (H1), whether AI use improves trust (H2), and whether representativeness increases trust among patients with darker skin, both directly (H3a) and by strengthening the effect of AI (H3b). Overall, our study offers strong potential to extend representative bureaucracy theory into the digital realm, showing how human and algorithmic representation shape trust in public services.
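The crossing of the three treatments can be sketched as follows; this is an illustrative reconstruction only, and the factor labels (and the assumption that training-data representativeness is only meaningful when the AI tool is used) are ours, not the authors':

```python
from itertools import product

# Hypothetical sketch of the factorial crossing implied by the three
# treatments listed above; level labels are paraphrased from the abstract.
skin_color = ["light", "medium", "dark"]      # GP skin color
ai_use = ["ai", "no_ai"]                      # AI diagnostic tool used or not
training_data = ["diverse", "biased"]         # representativeness of AI data

conditions = list(product(skin_color, ai_use, training_data))
print(len(conditions))  # 3 x 2 x 2 = 12 cells in a full crossing

# Assumption: the training-data factor is presumably only meaningful when
# the AI tool is used, collapsing the no-AI cells to a single level.
effective = {(s, a, t if a == "ai" else None) for s, a, t in conditions}
print(len(effective))  # 9 distinct cells after collapsing
```

If the design nests the representativeness factor inside the AI-use condition in this way, 9 rather than 12 distinct experimental conditions would be fielded.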



A replication of “Talk or type? The effect of digital interfaces on citizens’ satisfaction with standardized public services”

Peiyi WU

Beihang University, People's Republic of China

Digital interfaces are considered a trend in smart-government service delivery. To better understand how digital interfaces affect citizen satisfaction, emerging public administration research has provided experimental evidence. However, current research overlooks how the effect of digital interfaces on citizens’ satisfaction varies across national cultures. Prokop and Tepe (2021) conducted an experiment in the German context and found that a digital interface, compared with face-to-face communication, has no effect on citizens’ satisfaction. Building on their experimental results, we adapt the design and put forward three hypotheses:

H1: Satisfaction in public service encounters with digital interfaces is similar to face-to-face encounters.

H2: Applying for potentially stigmatizing public services harms citizens' satisfaction, but the effect is relatively weak.

H3a: Service failure has a negative effect on citizens' satisfaction.

H3b: Explaining and apologizing for service failure increases satisfaction compared to service failure without explanation and apology.

This article conducts a narrow experimental replication of Prokop and Tepe (2021, Public Administration, 100(2), 427-443) in Shenzhen, China. The vignettes follow a 3 × 4 × 3 design, yielding 36 permutations. The first dimension is the public service interface, with three levels: face-to-face, self-service terminal (SST), and app. The second dimension is the type of public service, with four types: ID card, certificate of no criminal conviction, recordation receipt for public rental housing, and minimum living guarantee card. The third dimension is the quality of public service, with three attributes: no failure, failure without explanation, and failure with explanation. Each participant receives three randomly generated vignettes.
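The vignette universe and per-participant draw can be sketched as follows; the level labels are paraphrased from the abstract, and drawing three vignettes without replacement is one plausible implementation rather than the study's documented procedure:

```python
import random
from itertools import product

# Illustrative enumeration of the vignette dimensions described above.
interface = ["face-to-face", "SST", "app"]
service = ["ID card", "no-criminal-conviction certificate",
           "public rental housing receipt", "minimum living guarantee card"]
quality = ["no failure", "failure without explanation",
           "failure with explanation"]

vignettes = list(product(interface, service, quality))
print(len(vignettes))  # 3 x 4 x 3 = 36 permutations

# Each participant receives three randomly generated vignettes; sampling
# without replacement guarantees no participant sees a duplicate.
rng = random.Random(0)  # fixed seed for reproducibility of this sketch
shown = rng.sample(vignettes, 3)
print(len(shown))  # 3
```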

We found that satisfaction in public service encounters with digital interfaces is similar to that in face-to-face encounters, in line with the original study. We also found no significant differences in satisfaction across the three service types that parallel those in the original study (ID card, certificate of no criminal conviction, and recordation receipt for public rental housing). In the replication we add the minimum living guarantee (dibao) card as a service carrying a higher degree of stigmatization in China. Existing stigmatization studies in Chinese academic journals focus on grassroots cadres (Ren et al., 2024) and government departments such as urban management (Xie and Tian, 2013), and some explore the stigmatization of the minimum living guarantee card, one of the basic public services in China (Chen and Yue, 2023). However, evidence on the stigmatization effect of the minimum living guarantee in China is mixed, so the difference in stigmatization should be small and relatively weak. Owing to differences in political systems and cultures, citizens in Shenzhen have higher trust in the government, which should reduce the impact of service failures and recovery on satisfaction; we therefore expected to confirm the original hypotheses with smaller effect sizes than in the original experiment.



Trust Repair in Human-Human vs. Human-AI Teams in Public Administration: An Experimental Approach

Tessa HAESEVOETS

Ghent University, Belgium

Trust is fundamental not only in human-human relationships but also in human-AI interactions. As AI systems become increasingly integrated into public administration, understanding trust dynamics in human-AI teams is essential. However, AI systems are not infallible, and trust violations can undermine their acceptance and legitimacy in government settings. Unlike humans, AI lacks agency and emotional intelligence, raising questions about whether traditional trust repair mechanisms – such as apologies – are equally effective in human-AI teams.

The present study contributes to the growing field of Behavioral Public Administration (BPA) by applying an experimental approach to examine whether apologies restore trust as effectively in human-AI relationships as in human-human relationships within public organizations; a topic that remains largely unexplored. Given AI’s limited ability to convey sincerity or moral responsibility, traditional trust repair strategies may be less effective in human-AI interactions. If this proves to be the case, AI failures could have cumulative negative consequences, ultimately leading to widespread reluctance or outright rejection of AI technologies in public governance.

To test this, an experimental study will be conducted in which respondents (i.e., civil servants) are placed in a simulated public organization and randomly assigned to either a human or an AI teammate. In reality, however, these teammates and their behaviors will be pre-programmed. During the experiment, this teammate (human or AI) will commit a trust violation (e.g., making a mistake while processing a subsidy application), followed by either an apology or no apology. Trust will be measured at three different moments (using both scale and behavioral measures): before the transgression, after the transgression, and after the repair action.
The findings of this study will provide evidence-based insights into trust restoration in AI-assisted governance, with practical implications for AI policy and implementation in the public sector.
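The design above can be sketched as a 2 (teammate) × 2 (repair) between-subjects experiment with trust measured at three moments; the condition labels and the per-respondent record layout are our illustrative assumptions, not the study's materials:

```python
from itertools import product

# Minimal sketch of the between-subjects conditions described above.
teammate = ["human", "ai"]             # pre-programmed teammate type
repair = ["apology", "no_apology"]     # repair action after the violation
moments = ["pre_violation", "post_violation", "post_repair"]

cells = list(product(teammate, repair))
print(len(cells))  # 2 x 2 = 4 between-subjects conditions

# One hypothetical record per respondent: condition plus a trust score
# slot for each measurement moment (filled in during the study).
record = {"teammate": "ai", "repair": "apology",
          "trust": {m: None for m in moments}}
print(len(record["trust"]))  # 3 measurement points
```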



Accountability as Herding AI Cats: The Contingent Role of Decision Outcomes in Algorithmic Bureaucracy's Legitimacy

Ruoxuan LIU, Yanbing Han

Nanjing University, People's Republic of China

The global rise of artificial intelligence (AI) in public decision-making is fundamentally reshaping state legitimacy by transferring administrative discretion from political bureaucracies to algorithmic bureaucracies. Literature highlights the need to reconcile AI's accountability-legitimacy tensions, with challenges often likened to "herding cats" due to complexity and unpredictability. Studies show algorithmic bureaucracy exacerbates "forum drifting" (eroding citizen engagement), while citizens prioritize effectiveness over transparency, creating a paradox where governments invest heavily in AI accountability despite high costs and weak public demand.

By adapting the theory of bureaucratic legitimacy to the context of algorithmic bureaucracy, we explore how AI accountability and AI effectiveness, both independently and jointly, shape citizens’ perceptions of the legitimacy of algorithmic bureaucracy. Drawing on the theory of forum drifting and procedural fairness theory, we suspect that citizens’ evaluations of algorithmic legitimacy may be contingent on whether AI decisions bring them gains or losses. Taken together, we seek to answer the following research question: taking decision outcomes for citizens into account, whether and to what extent do AI accountability arrangements affect citizens’ legitimacy perceptions of AI use? Based on the findings of Schillemans and Busuioc (2014), Brummel and de Blok (2024), and König et al. (2024), we propose:

H1: AI decision-making procedures with high effectiveness lead to higher legitimacy perceptions than those with low effectiveness.

H2: AI decision-making procedures with explicit accountability arrangements for compliance (H2.1) (or for results (H2.2)) lead to higher legitimacy perceptions compared to those without any accountability arrangements.

H3: The accountability arrangements are more likely to enhance legitimacy perceptions when AI decisions lead to negative rather than positive outcomes.

H4: AI effectiveness is more likely to enhance legitimacy perceptions when AI decisions lead to positive rather than negative outcomes.

H5: The accountability arrangements and effectiveness function as complementary factors in enhancing AI legitimacy across both positive and negative outcome scenarios.

We conducted three factorial survey experiments with 1,539 Chinese citizens simulating housing welfare decisions: a 2 × 2 design (Effectiveness × Outcome), a 3 × 2 design (Accountability × Outcome), and a combined (3 × 2 + 3 × 2 × 2) design (Accountability × Effectiveness + Accountability × Effectiveness × Outcome). We manipulated three key factors: (1) accountability type (no accountability, accountability for compliance, and accountability for results), (2) decision outcome (negative vs. positive), and (3) effectiveness level (high vs. low). This design enables us to examine whether and how different accountability arrangements and effectiveness levels influence legitimacy perceptions (operationalized as fairness, acceptance, and trust).
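The cell counts of the three experiments can be checked with a short enumeration; this is a reconstruction from the design notation in the abstract, and the factor labels are paraphrased:

```python
from itertools import product

# Hypothetical reconstruction of the three factorial designs named above.
accountability = ["none", "compliance", "results"]
outcome = ["negative", "positive"]
effectiveness = ["high", "low"]

exp1 = list(product(effectiveness, outcome))                   # 2 x 2
exp2 = list(product(accountability, outcome))                  # 3 x 2
exp3 = (list(product(accountability, effectiveness)) +         # 3 x 2
        list(product(accountability, effectiveness, outcome))) # + 3 x 2 x 2

print(len(exp1), len(exp2), len(exp3))  # 4 6 18
```

The combined design thus spans 18 vignette cells: 6 without an explicit outcome manipulation plus 12 that fully cross all three factors.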

The findings underscore the importance of context in evaluating the interplay between AI effectiveness and accountability: (1) AI effectiveness consistently enhances legitimacy perceptions, with stronger effects in positive decision contexts; (2) AI accountability significantly boosts legitimacy only in negative decision scenarios; (3) interaction effects demonstrate a substitution dynamic, in which combining effectiveness and accountability reduces marginal legitimacy gains; and (4) under favorable outcomes, combined implementation maintains baseline legitimacy without additive effects.



Innovation adoption preferences and decision makers’ human capital: A discrete choice experiment in healthcare

Alessandra DA ROS1, Stefano LANDI1, Claudia BAZZANI1, Gianluca MAISTRI1, Luca PIUBELLO ORSINI1, Chiara LEARDINI1, Gianluca VERONESI1,2

1Management Department, University of Verona, Italy; 2University of Bristol Business School, The University of Bristol, UK

Innovation has long been a hot topic in public administration policy making, research and practice, since it is seen as a key way to improve the delivery of public services (Chen et al., 2020; Walker, 2006). Innovations such as medical technologies play a crucial role in health service provision and outcomes, but they can also lead to rising costs and adoption issues, compromising accessibility and sustainability (Cinaroglu & Baser, 2018). In this context, decisions to adopt healthcare innovations often need to be made in a short space of time and with limited supporting evidence, making the role of the decision-maker central in such cases (Grimmelikhuijsen et al., 2017).

Human Capital (HC) theory is particularly helpful in explaining the effect of individual preferences on decision-making outcomes (Kor & Sundaramurthy, 2009; Sundaramurthy, Pukthuanthong, & Kor, 2014). This is especially true in the healthcare sector, which is characterised by a kaleidoscope of multidisciplinary decision makers with a variety of professional, educational and experiential backgrounds (Smith et al., 2019). To account for this, some of the extant literature has looked at the distinction between a generalist and a specialist type of decision-maker (Datta & Iskandar-Datta, 2014). The former refers to individuals who do not have a direct role in the provision of care and lack specific education and training in healthcare (e.g., administrative staff), while the latter relates to those who have a direct role in care provision because of their professional knowledge or educational background (e.g., doctors, nurses and so forth). Between these two types lie hybrid professional managers, essentially healthcare professionals with specialist knowledge and experience who also acquire generalist managerial skills (e.g., doctors with managerial education or formal managerial roles) (Sarto, Veronesi & Kirkpatrick, 2019). We also consider the different experiential backgrounds that these decision makers may hold.

Building on insights from HC theory, in this study we aim to understand whether different human capital characteristics of decision makers produce different preferences for innovation adoption in healthcare. Through a discrete choice experiment (DCE), we empirically investigate decision makers’ innovation adoption choices in a simulated realistic scenario based on a new robotic medical device. The DCE is intended to elicit decision-makers’ preferences and relate them to the human capital characteristics of those making the decisions.

The pilot study conducted so far includes 43 participants. The DCE is based on an orthogonal design with two alternative choices (6 attributes: 4 with 2 levels and 2 with 4 levels). Each respondent was randomly assigned to one of three blocks of 8 choice tasks. The preliminary results highlight differences in innovation adoption preferences among respondents with different human capital. The findings from this study will have implications for policy and practice regarding healthcare public service provision and technology adoption. Additionally, the study is expected to contribute theoretically to the field of Human Capital research. Specifically, our analysis aims to highlight differences in decision-makers' human capital that influence their choices in innovation adoption.
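The size of the attribute space behind the orthogonal design, and the blocking described above, can be checked with a short sketch; the attribute names are deliberately omitted (the abstract does not list them), and the block-assignment logic is an illustrative assumption:

```python
import random

# Back-of-envelope check on the full-factorial attribute space:
# 6 attributes, four with 2 levels and two with 4 levels.
levels = [2, 2, 2, 2, 4, 4]
full_factorial = 1
for n in levels:
    full_factorial *= n
print(full_factorial)  # 2**4 * 4**2 = 256 possible profiles

# The fielded orthogonal fraction is blocked: each respondent is randomly
# assigned to one of three blocks of 8 choice tasks (assumed logic).
rng = random.Random(1)  # fixed seed for reproducibility of this sketch
block = rng.randrange(3)
tasks_per_respondent = 8
print(block in (0, 1, 2), tasks_per_respondent)
```

The orthogonal design thus samples a small fraction of the 256 possible profiles, and each respondent evaluates only 8 paired choices from their assigned block.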