Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference.

Session Overview
Session: PSG 1 - e-Government
Time: Wednesday, 27/Aug/2025, 8:30am - 10:30am

Session Chair: Prof. C. William WEBSTER, University of Stirling

"AI adoption choices"

Presentations

Open to open-source AI: Navigating AI model choice in the public sector

Nicholas ROBINSON

Hertie School of Governance, Germany

The public sector is increasingly adopting artificial intelligence (AI) tools. High-quality open-source AI (OSAI) options are available, but much of the current attention in government is on proprietary options such as Copilot and ChatGPT. There are parallels with the discourse around open-source software (OSS) versus proprietary software: OSS, while used for certain functions in Agencies, has not seen widespread adoption despite backing from technical and political spheres. Proponents of open source argue that this has potentially increased costs while limiting competition and broad-based innovation.

Drawing on the frameworks and evidence used to study OSS uptake, I analyse interviews with 31 decision-makers on AI adoption in Australian, Canadian and German Agencies to identify the key factors shaping the feasibility of open-source technologies in general, and OSAI in particular, compared with their proprietary counterparts. I find that organisational factors are highly influential in the acceptance of open source, while technological attributes such as usability and environmental factors such as the availability of support are also important.

While these factors are also relevant for OSAI, technological characteristics such as fit, control and the availability of hardware infrastructure appear more critical to the decision-making process. As AI models are relatively easy to benchmark and switch between, fears of lock-in were not as strong an influence. Furthermore, organisational considerations such as digital sovereignty and data protection, which are not prominent in OSS debates, appear more relevant to current decisions. Agencies were also seeking central government guidance on AI model choice, whereas open-source communities were less relevant, although this may change as the sector evolves. Although AI is a fast-evolving technology, choosing between proprietary AI and OSAI requires significant decisions to be made today, such as investment in hardware and how to ensure sovereignty, that will echo into the future.



Investigating municipalities’ legitimacy considerations when deciding to make or buy public sector AI applications

Marissa HOEKSTRA (1,2), Alex Ingrams (2)

1: TNO; 2: Leiden University

In the debate on the legitimacy of public sector AI applications there is a strong focus on judgements of the output of the AI application. However, focusing only on the output is not sufficient. The development process should also be assessed in terms of legitimacy, as development choices can threaten democratic procedures and thus affect the legitimacy of the AI application. One such choice is who is involved in how the AI application gets built: organizations can make the AI application in-house, collaborate with another organization to build it, or buy it from another party. The literature currently lacks a concept that describes this aspect, and this research therefore proposes to conceptualize this choice as configurations in building public sector AI applications.

It is currently unclear if and how public professionals in public sector organizations deliberately and strategically think about the choice to make, collaborate on or buy public sector AI. The aim of this study is therefore to answer the following research question: which types of legitimacy considerations are taken into account in the decision for a certain configuration in building public sector AI applications? To answer this question, the study conducts a qualitative multiple case study examining eight cases of municipal chatbots for public service delivery.



A clash of logics: Westminster budgeting for public sector AI adoption

Chloe Chadwick (1), Nicholas Robinson (2), Nathan Davies (1)

1: University of Oxford, United Kingdom; 2: Hertie School, Germany

There is growing interest among governments and policymakers in the potential of artificial intelligence (AI) to improve service delivery, productivity, and management (Mergel et al., 2023; Bright et al., 2024). Yet despite the proliferation of Generative AI, commoditisation of foundation models, and the popularisation of open-source options, public sector adoption continues to lag, with most efforts confined to pilot or trial phases (OECD & UNESCO, 2024).

While much academic attention has been paid to how AI could enhance public budgeting (Valle-Cruz et al., 2022), far less has been given to how existing budgeting and public financial management systems may constrain AI adoption, even when adapted for digital projects. This paper argues that current budgeting rules, shaped by legacy infrastructure funding models and New Public Management reforms, are often poorly aligned with the financial demands of AI implementation. Successful adoption requires not only upfront and sustained investment, but also strengthened internal capabilities and cross-departmental coordination, all of which challenge traditional public sector budgeting practices.

Drawing on documentary analysis and in-depth interviews with fiscal and technology decision-makers in three Westminster systems – Australia, Canada, and the United Kingdom – this empirical study identifies a deeper institutional contradiction between entrenched public budgeting logics and the iterative, uncertain nature of AI development. We find that these tensions give rise to four archetypal organisational responses, each reflecting distinct attempts to resolve these conflicts.

Building on institutional logics literature, and extending recent work in public financial management, the paper demonstrates that budgeting should not be viewed as a neutral constraint but as a critical institutional lever. Addressing current barriers requires not just alternative funding mechanisms such as tranched investment or innovation funds, but more fundamentally, institutional innovation capable of reconciling competing logics of budgeting and AI implementation.