Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference (EEST).

Session Overview
Session
PhD Track B-1: Public management and Digital Transformation
Time:
Tuesday, 03/Sept/2024:
9:45am - 11:15am

Session Chair: Prof. Vassilis KEFIS, Panteion University
Location: Room A2

80, First floor, New Building, Syggrou 136, 17671, Kallithea, Athens.

Presentations

Uncertain AI in high-stakes organizations: how does complexity affect AI advice usability for teams?

Brecht Weerheijm, Sarah Giest, Bram Klievink

Leiden University, The Netherlands

Discussant: Jan VOGT (University of Mannheim)

AI-assisted teams engaged in decision-making in high-stakes, high-risk (i.e. naturalistic) environments face multiple challenges: they often lack direct feedback on the correctness of their decisions (Bruce & George, 2014; Straus et al., 2011), suffer from the Common Knowledge Effect (CKE), which privileges commonly held information (Stasser & Stewart, 1992; Straus et al., 2011), and face biases when utilizing AI advice (e.g. Mahmud et al., 2022). These findings show that the extent to which teams can utilize AI-generated advice for their decision-making depends on a multitude of organizational, cultural and social factors (e.g. Meijer et al., 2021; Morrison et al., 2023). Moreover, the complexity of AI systems influences their usability for decision-makers (Burrell, 2016; Jiang et al., 2022; Kaplan & Haenlein, 2019; Mahmud et al., 2022). Much of this research, however, has been conducted in laboratory settings (Mahmud et al., 2022) rather than in more naturalistic settings with actual decision-makers. This study addresses that gap by examining the extent to which teams in a naturalistic setting are able to utilize complex and uncertain AI systems, asking: "To what extent do uncertainties in AI systems affect their use for team decision-making?" Specifically, the research tests whether a lack of verification possibilities leads teams in a naturalistic setting not to utilize complex AI systems, thereby limiting the extent to which AI systems may be effectively employed in such settings.

Theoretically, this research draws strongly on previous studies in the fields of human-computer interaction and algorithmic/AI decision-making. At the same time, the findings from this domain are viewed from a naturalistic perspective, utilizing a macrocognitive perspective (Klein et al., 2003) and research into teamwork and group decision-making (Cuevas et al., 2007). Furthermore, this study draws on the organizational sciences to accurately conceptualize the domain being studied, for instance through the High-Reliability Organization (HRO) framework (e.g. Roberts, 1990).

To study this question, focus groups are being organized in a Dutch public organization in the field of security that is engaged in high-stakes decision-making with few possibilities for verification. About five focus groups with around 35 professional decision-makers will be conducted between May and September 2024, using fictionalized but highly realistic AI cases. It is expected that cases with increased complexity will show lower intended utilization due to difficulties in understanding, even if the AI system discussed offers additional benefits. This methodology was selected for its ability to enrich current theoretical understanding, which at times lacks naturalistic validity, while recognizing the ground-breaking work that has been conducted in many of these studies. The core challenges faced by the authors are mostly organizational in nature: ensuring participation from professionals who are not used to taking part in scientific research and who work for an organization under high pressure to perform is at times challenging.



Algorithmic Decision-Making in Public E-Services and its Influence on Citizens' Legitimacy Perceptions

Jonas Bruder

University of Mannheim, Germany

Discussant: Liz Marla WEHMEIER (Potsdam University)

The emerging application of algorithmic decision-making systems in public administration raises a wide array of challenges regarding fairness, opacity, and societal values. The reduction or erasure of human decision-making in public e-services raises especially important questions, as citizens are used to being served by human public officials by default. Current research on AI and algorithmic decision-making shows that the use of algorithms also affects citizens' perceptions of legitimacy. We applied a multilevel online discrete-choice experiment to test whether algorithmic decision-making influences citizens' preferences for public e-services of different quality and their respective legitimacy perceptions of the responsible public organization. We will analyze data from a sample of 1,214 German citizens using conditional logit modeling and fixed-effects regression. The results will have implications for research on algorithmic decision-making in the public sector, public e-services, and legitimacy-as-perception.
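The conditional logit model underlying such a discrete-choice experiment can be sketched numerically: the probability that a respondent picks alternative j from a choice set is a softmax over the alternatives' deterministic utilities. The attribute names and coefficient values below are illustrative assumptions, not parameters from the study itself.

```python
import numpy as np

def conditional_logit_probs(X, beta):
    """Choice probabilities for one choice set under a conditional logit model.

    X    : (n_alternatives, n_attributes) attribute matrix of the choice set
    beta : (n_attributes,) taste coefficients
    """
    utility = X @ beta            # deterministic utility V_j of each alternative
    utility -= utility.max()      # shift for numerical stability (probs unchanged)
    expV = np.exp(utility)
    return expV / expV.sum()      # P_j = exp(V_j) / sum_k exp(V_k)

# Hypothetical choice set: two e-service alternatives described by
# (decision_maker_is_algorithm, service_quality); a negative coefficient on the
# algorithm dummy encodes an assumed penalty for algorithmic decision-making.
X = np.array([[1.0, 0.8],   # algorithmic decision-maker, high quality
              [0.0, 0.5]])  # human decision-maker, medium quality
beta = np.array([-1.0, 2.0])
p = conditional_logit_probs(X, beta)
```

Under these assumed coefficients, the human-served alternative is chosen more often despite its lower quality, which is the kind of trade-off a discrete-choice design can quantify.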



 
Conference: EGPA 2024 Conference