Can AI Influence Accountability? Reputational Threats, Algorithm Types, and Street-Level Bureaucrats: Evidence from a national survey experiment
Xin XIA1, Sicheng Chen1, Tom Christensen1,2
1Tsinghua University, China; 2University of Oslo, Norway
In recent years, the integration of advanced information communication and intelligent technologies has led governments to adopt AI algorithms in public decision-making, creating a mixed decision-making approach based on human-AI interaction. The crux of this mixed decision-making lies in keeping the ‘human in the loop.’ While AI algorithms furnish decision-makers with more comprehensive data, it is paramount for human decision-makers to intervene and rectify errors promptly when AI algorithms falter. This ideal paradigm assumes that human intervention is guaranteed when an AI algorithm misjudges. In practice, however, human decision-makers may display behaviors such as automation bias, selective adherence, and algorithm aversion.
Presently, our understanding of the impact of AI algorithms on human decision-makers is limited, and the significance of human decision-makers in mixed decision-making is often underestimated. In the realm of public administration, how will street-level bureaucrats behave when confronted with decision outputs generated by AI algorithms? There is currently a lack of research that combines algorithm types, decision types, and the interaction between decision-makers and multiple actors into unified decision-making models. Moreover, the attitudes and actions of external actors, such as citizens' views on algorithmic technology and associated matters, may also influence the behavior of street-level bureaucrats in mixed decision-making scenarios. This study therefore adopts a decision-making actor perspective and focuses on China's street-level bureaucrats, aiming to explore the impact of reputational threats, algorithm type, and decision type on their trust in and adoption of AI algorithms.
This study proposes four behavior patterns of street-level bureaucrats under mixed decision-making: trusting and implementing the AI algorithm's output, trusting but not implementing, distrusting but implementing, and distrusting and rejecting. Organizational reputation theory is employed to analyze the influence of reputational threats on street-level bureaucrats' trust in and adoption of AI algorithms. By incorporating Simon's distinction between programmed and non-programmed decisions, the moderating effect of decision type is examined, and the moderating effect of algorithm type is explored by drawing on explainable AI theory.
The research employs a randomized survey experiment involving a representative sample of 2000 street-level bureaucrats across China's 32 provincial administrative regions to investigate how reputational threats, algorithm type, and decision type affect street-level bureaucrats’ trust in and adoption of AI algorithms.
The findings are as follows:
1. The greater the reputational threat pressure, the more likely street-level bureaucrats are to use algorithms to escape accountability, that is, to distrust but adopt the AI algorithm’s outputs.
2. Compared with rule-driven algorithms, reputational threat has a greater impact on street-level bureaucrats’ trust in and adoption of AI algorithms in the data-driven algorithm scenario.
3. Compared with highly programmed decisions, reputational threat has a greater impact on street-level bureaucrats’ trust in and adoption of AI algorithms when they face low-programmed decisions.
This paper examines the accountability of street-level bureaucrats in the era of AI, extracting implications for the regulation of digital discretion and accountability in government mixed decision-making.
A replication of “Talk or type? The effect of digital interfaces on citizens’ satisfaction with standardized public services”
Peiyi WU
Beihang University, People's Republic of China
Digital interfaces are considered a trend in the service delivery of smart government. To better understand how digital interfaces affect citizen satisfaction, emerging public administration research has provided evidence using experimental methods. However, the effect of digital interfaces on citizens’ satisfaction across national cultures has been overlooked in current research. Prokop and Tepe (2021) conducted an experiment in the German context and found that a digital interface, compared with face-to-face communication, has no effect on citizens’ satisfaction. This article conducts a narrow experimental replication of Prokop and Tepe (2021, Public Administration, 100(2), 427-443) in the Asian city of Shenzhen in China. It hypothesizes that the effect of the digital interface differs because the two countries have different cultural values regarding e-government use and citizen trust.
Nudging and disclosure in the field: two experiments on food choice nudges and transparency
Robin Cuypers, Steven Van de Walle, Pieter Raymaekers
KU Leuven, Public Governance Institute
Nudges differ from other policy instruments by mostly affecting people’s decision-making without them being aware of it. This can be detrimental to both government transparency and citizens’ experience of autonomy (Bovens, 2009; Hansen & Jespersen, 2013). A logical next step, therefore, would be to increase transparency regarding the nudge. A transparent message could counteract this criticism. However, earlier literature on nudging suggests that transparency can undermine the effectiveness of a nudge (Bovens, 2009). More recent literature negates this trade-off between nudges’ transparency and effectiveness (Bruns et al., 2018; Bruns & Paunov, 2021; Loewenstein et al., 2015). Overall, the nudging and transparency literature is still in its early stages, leaving numerous unanswered questions.
In this paper we study the nature of transparency through transparent messages during nudging. Drawing on insights from a mirroring online experiment (Cuypers et al., Forthcoming), we conducted two field experiments on cafeteria menus to encourage sustainable food choices and to measure the effect of a salience nudge and a transparent message that discloses the presence, purpose, and mechanism of the nudge. Using a subsequent survey, we also measure the noticeability of this transparent message.
We aim to provide new insights into the relation between transparency and nudging by measuring the effect of a salience nudge, which is underrepresented in current transparency literature. By applying the same nudge and transparent message in different settings and target groups, we focus on the heterogeneity of target groups and its potential impact. Furthermore, by explicitly focusing on ways to be transparent without compromising nudge effectiveness, we aim to support policymakers interested in applying behavioural insights while upholding transparency.