Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference.

Session Overview
Session
PhD Workshop Session B-2
Time:
Tuesday, 26/Aug/2025:
11:30am - 1:00pm

Session Chair: Dr. Maike RACKWITZ, University of Leipzig

“Public Management & Digital Transformation”


Presentations

Does Generative AI Influence Human Decision-Making? An Experimental Study in the Field of Public Administration

Maike HERING

University of Hohenheim, Germany

Decades of research in psychology demonstrate that human decision-making is prone to biases (Furnham & Boo, 2011). A famous example is the anchoring heuristic, which describes the phenomenon that human decisions are biased towards an initial value (Furnham & Boo, 2011; Tversky & Kahneman, 1974). It has been shown, for instance, that judicial sentencing decisions can be influenced by random sentencing demands (Englich et al., 2006). These findings raise the question of how output generated by artificial intelligence (AI) influences human decision-making, even when users know that they should not trust the output blindly. First empirical findings support this claim: for example, Krügel et al. (2023) found that AI-generated texts had an impact on human judgment.

AI has been used in public administration for more than a decade. In 2013, for instance, the Netherlands adopted the “risk classification model”, an algorithmic decision-making (ADM) system that was applied to predict the risk of social benefits fraud (Amnesty International, 2021). Nowadays, beyond deterministic AI systems like the risk classification model, generative AI (genAI) tools are also being deployed in public administration (e.g., MUCGPT; München Digital, 2024). Importantly, it is a defining characteristic of genAI that the produced outputs are non-deterministic and often unique (Ronge et al., 2025). As a result, genAI may influence human decisions, for instance in public administration, in invisible ways that cannot be traced back to the AI system. Furthermore, this influence may even go unnoticed by the users themselves, as individuals tend to underestimate the extent to which they are influenced by genAI (Krügel et al., 2023). The use of genAI therefore introduces risks that are specific to this class of AI technologies.

My research project aims to explore this phenomenon by examining the impact of genAI on decision-making in the context of public administration. This is reflected in my research question:

Does generative AI influence human decision-making in public administration?

I approach this question in two empirical studies that examine the influence of genAI on human decision-making in social dilemma games (van Lange et al., 2013) relevant to public administration. In social dilemma games, individuals typically face a decision between two options: one that maximizes their own short-term benefit and another that prioritizes long-term collective benefits (van Lange et al., 2013). In the studies, I plan to test whether this decision is vulnerable to advice from a genAI chatbot.
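To make this incentive structure concrete, the following minimal Python sketch encodes a standard public goods game, one common operationalization of a social dilemma; the group size, endowment, and multiplier are hypothetical illustrations, not the actual study materials.

```python
# A minimal public-goods game sketch illustrating the social dilemma
# structure described above (hypothetical parameters, not the study's
# actual decision scenario).

N_PLAYERS = 4          # group size
ENDOWMENT = 10.0       # each player's initial resources
MULTIPLIER = 1.6       # pooled contributions are multiplied, then shared equally

def payoff(own_contribution: float, others_contributions: list[float]) -> float:
    """Return one player's payoff given everyone's contributions."""
    pool = own_contribution + sum(others_contributions)
    share = MULTIPLIER * pool / N_PLAYERS
    return ENDOWMENT - own_contribution + share

# Defecting (contributing 0) maximizes individual short-term payoff ...
print(payoff(0.0, [10.0, 10.0, 10.0]))   # 22.0: free-riding on three cooperators
# ... but universal cooperation maximizes the collective outcome.
print(payoff(10.0, [10.0, 10.0, 10.0]))  # 16.0 each; 64.0 in total vs. 40.0 if nobody contributes
```

Because each contributed unit returns only MULTIPLIER / N_PLAYERS < 1 to the contributor, defection dominates individually even though universal cooperation maximizes the group total, which is exactly the tension between short-term self-interest and long-term collective benefit described above.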

To explore this effect, participants in the planned study will interact with differently prompted chatbots, which will, depending on their respective system prompts, act either pro-socially or egoistically. Zylowski and Wölfel (2023) demonstrated that chatbots can adopt different personalities. Following their procedure, in my study two Large Language Models (LLMs) will be prompted to embody opposing personality traits. To create a pro-social chatbot, items from the honesty-humility subscale of the HEXACO personality inventory (Ashton & Lee, 2007) will be used. These items reflect a pro-social orientation, as illustrated by the following exemplary item: “I wouldn't use flattery to get a raise or promotion at work, even if I thought it would succeed” (Ashton & Lee, 2007). To design an egoistic chatbot, items from the D16 scale (Moshagen et al., 2018) will be used. The scale assesses the dark factor of personality (D), which is defined by the tendency to maximize one’s own profit (Moshagen et al., 2018). An exemplary item is: “It’s wise to keep track of information that you can use against people later” (Moshagen et al., 2018). To allow for comparison with a baseline, one group of participants will make a decision in the social dilemma game without interacting with a chatbot.
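As a rough sketch of how such opposing system prompts could be implemented, the snippet below builds a pro-social and an egoistic prompt from one exemplary scale item each and queries an LLM through the OpenAI chat completions API; the model name, prompt wording, and choice of API are my illustrative assumptions, not the study's actual setup.

```python
# A minimal sketch of inducing two chatbot "personalities" via system
# prompts built from personality scale items, in the spirit of
# Zylowski and Wölfel (2023). Model, API, and wording are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The study would use the full HEXACO honesty-humility and D16 item
# sets; one exemplary item each is shown here.
PRO_SOCIAL_PROMPT = (
    "Adopt the following personality and answer all questions in character: "
    "\"I wouldn't use flattery to get a raise or promotion at work, "
    "even if I thought it would succeed.\""
)
EGOISTIC_PROMPT = (
    "Adopt the following personality and answer all questions in character: "
    "\"It's wise to keep track of information that you can use against "
    "people later.\""
)

def ask_chatbot(system_prompt: str, user_message: str) -> str:
    """Query one of the two prompted chatbots and return its advice."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

advice = ask_chatbot(PRO_SOCIAL_PROMPT, "Should I choose option A or option B?")
```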

I expect the following effects:

H1a: Individuals who interact with a pro-social chatbot (EG1) make more pro-social decisions than individuals who interact with an egoistic chatbot (EG2).

H1b: Individuals who interact with a pro-social chatbot (EG1) make more pro-social decisions than individuals who do not interact with any chatbot (CG).

H2a: Individuals who interact with an egoistic chatbot (EG2) make more egoistic decisions than individuals who interact with a pro-social chatbot (EG1).

H2b: Individuals who interact with an egoistic chatbot (EG2) make more egoistic decisions than individuals who do not interact with any chatbot (CG).

To test the hypotheses, I will first conduct a pilot study with a student sample to test the materials and gather preliminary data (Study 1). Then, I will carry out a main study with public administration employees to collect data in a real-world setting and ensure ecological validity (Study 2).

For both studies, I use an experimental design with two intervention conditions (EG1, EG2) and one control group (CG). Each participant will be assigned randomly to one of the three conditions (i.e., a between-subjects design). The studies will take place online on the survey platform SoSciSurvey (https://www.soscisurvey.de/).
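A minimal sketch of balanced random assignment to the three between-subjects conditions is given below; in practice the survey platform's built-in randomization would likely handle this, so the snippet is purely illustrative.

```python
# Illustrative balanced random assignment to the three conditions.
import random

CONDITIONS = ["EG1", "EG2", "CG"]  # pro-social chatbot, egoistic chatbot, control

def assign_conditions(n_participants: int, seed: int = 42) -> list[str]:
    """Return a randomly ordered, (near-)balanced condition assignment."""
    rng = random.Random(seed)
    # Repeat the condition list until it covers all participants, then shuffle.
    assignments = (CONDITIONS * (n_participants // len(CONDITIONS) + 1))[:n_participants]
    rng.shuffle(assignments)
    return assignments

print(assign_conditions(9))  # nine participants, three per condition, random order
```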

In both studies, all participants will be presented with a social dilemma scenario in which they must choose one of two options. To solve the task, participants in the experimental groups (EG1, EG2) interact with a chatbot. Participants in the control group (CG) make their decision without using a chatbot. In the first experimental condition (EG1), participants interact with a chatbot prompted to give responses oriented towards the common good. In the second experimental condition (EG2), participants interact with a chatbot prompted to maximize individual utility.

Statistical analyses will test whether the type of chatbot has an effect on the decision individuals make in the social dilemma game. The results will be interpreted with respect to the hypotheses, and implications for the use of genAI in public administration will be discussed.
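The abstract does not specify the analysis procedure; one plausible choice for a binary decision across three conditions is a chi-square test of independence, sketched below on made-up counts.

```python
# A minimal sketch of one plausible analysis: a chi-square test of
# independence between condition and decision. The counts are invented
# for illustration; they are not study data.
from scipy.stats import chi2_contingency

# Rows: EG1 (pro-social bot), EG2 (egoistic bot), CG (no bot)
# Columns: pro-social decisions, egoistic decisions
counts = [
    [38, 12],  # EG1
    [18, 32],  # EG2
    [27, 23],  # CG
]

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p_value:.4f}")
```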



Can institutional theory adequately explain implementation of artificial intelligence in public administrations?

Alessandro GRASSI

University of Milan-Bicocca, Italy

Personal details

a. Name of applicant: Alessandro Grassi

b. Institutional affiliation: University of Milan-Bicocca

c. Name(s) of supervisor(s) of doctoral work: Prof. Stefano Campostrini, Prof. Francesca Dal Mas

d. Area and topic of the dissertation or PhD project: public value theory through the lenses of AI in public administrations

e. Year in which you started your doctoral work: 2022

f. Affiliation with a PhD school or program: Risorse per la nuova PA: persone e dati (University of Milan-Bicocca)

Paper abstract


The integration of Artificial Intelligence (AI) into public administration (PA) presents both significant opportunities and fundamental challenges to established organizational structures, processes, and norms (Ahn & Chen, 2022; Bonomi Savignon et al., 2024; Criado & Gil-Garcia, 2019; Desouza et al., 2020; Giest & Klievink, 2024; MacCarthaigh et al., 2024; Mergel et al., 2023; Neumann et al., 2024; Sousa et al., 2019; Wirtz et al., 2019, 2020, 2021). While institutional and neo-institutional theories have traditionally provided robust frameworks for understanding stability, change, and diffusion within PA (e.g., through concepts of myths and ceremonies, and isomorphic pressures), the disruptive potential and unique characteristics of AI raise critical questions about the continued explanatory power and applicability of these theories in a rapidly evolving world. Specifically, how do these theories and strands of literature account for the distinct drivers and impacts of AI adoption, and are established institutional mechanisms robust enough to accommodate or shape this technological transformation?

This paper seeks to address:

(1) How adequately do classical and neo-institutional theories explain the processes of AI adoption, implementation, and stabilization within diverse public sector contexts?

(2) To what extent might the unique characteristics of AI challenge or potentially override traditional institutional mechanisms, particularly isomorphism?

(3) What is the specific role of regulation, such as the European Union's AI Act, in shaping the institutionalization of AI practices within PA?

This study employs a conceptual and theoretical review methodology rooted in a selection of case studies. It involves a critical synthesis of seminal literature in institutional and neo-institutional theory (Selznick, 1957, 1996; Meyer & Rowan, 1977; DiMaggio & Powell, 1983; Zucker, 1977) and subsequent literature focusing on digital innovation, alongside emerging literature on AI in public administration and relevant policy documents. In order to ground the ideas presented, well-documented cases like SyRI (Bekker, 2021; van Bekkum & Borgesius, 2021; Rachovitsa & Johann, 2022) will be analysed through the lens of theory.

The paper is grounded primarily in institutional theory and neo-institutionalism, focusing on concepts such as organization, institutional pillars (regulative, normative, cultural-cognitive), and mechanisms of isomorphism (coercive, mimetic, normative). It also draws upon literature on organizational change and technology adoption from an institutional perspective.

I hypothesize that AI adoption patterns in PA will largely follow existing institutional contours shaped by regulative, normative, and cultural-cognitive elements, as predicted by neo-institutional theory. However, the mimetic pressure to adopt AI might operate differently than for previous technologies, driven more by (inaccurate) perceptions of AI’s potential than by purely symbolic adoption, risking unintended effects. Furthermore, I will argue that the EU AI Act represents a significant coercive isomorphic force designed to standardize AI governance practices, aiming to ensure safety, ethics, and human rights, thus reinforcing rather than surpassing institutional control mechanisms. What is new are the cultural features and skills necessary to understand and effectively use AI.

I anticipate finding that institutional theories remain highly relevant, although certain aspects related to employees’ culture and skills, algorithmic specificity, data governance challenges, and the pace of technical change require careful consideration and potentially theoretical refinement or adaptation. The analysis is expected to show that AI will not likely surpass fundamental institutional mechanisms, but may reshape the modalities through which these mechanisms operate, particularly under the influence of targeted regulation.

Key challenges in conducting such a study include the rapid evolution of both AI technology and related regulatory landscapes, potentially outpacing the analysis. The lack of extensive, updated empirical data on real AI implementation projects across diverse PA contexts poses a limitation for drawing definitive conclusions, necessitating a focus on conceptual exploration and framing future empirical research agendas.

Motivation letter

Dear Chair of the Symposium,

Let me express my interest in discussing some of my ideas with my peers and with more senior scholars. As I approach the final stages of my doctoral program in public administration at the University of Milan-Bicocca, I am keenly focused on transitioning into the next phase of my academic career.

Throughout my academic journey, I’ve been fuelled not only by the pursuit of knowledge but also by the invaluable exchange of ideas with fellow researchers. I am particularly eager for opportunities to connect with peers, engage in stimulating discussions about current research challenges and future directions, and build collaborative relationships. I am excited by the prospect of contributing my own perspectives while learning from others. I am confident that my mixed background, combined with experience in field research in various public administrations, can be of value while discussing others’ work.

Having dedicated the past seven years (even before starting the doctoral program) to assisting in research and academic training, and having gradually shifted to more prominent roles, I am now poised to take the next step in my career. My doctoral work on AI in public administration has matured enough to be stressed and tested against different perspectives. I am particularly drawn to the prospect of participating in cross-cultural collaborations: I have already established one bridge with Portugal through my visiting semester, and I am looking to expand.

Thank you for considering my application. I have also submitted a proposal (co-authored by my tutor) in the Open track; independently of how that goes, I wish to have the opportunity to participate in the Symposium.

Sincerely,

Alessandro Grassi