This Time It’s Different? AI and the future nature of public administration
Frank Edward BANNISTER1, Regina CONNOLLY2
1Trinity College Dublin, Ireland; 2Dublin City University, Ireland
“The real problem of humanity is the following: We have palaeolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”
Edward O. Wilson
The history of e-government has, paradoxically, been marked both by excessive optimism about new technology and by failures to anticipate or predict accurately its effects on public administration. The former we experience in advance; the latter in retrospect.
However, for at least two reasons, trying to anticipate the impact of AI on public administration carries a different imperative from that of earlier technologies.
The first is the sheer scale of its potential to eliminate humans and human decision-making from much of the core process of public governance. This is already happening in what might be called the foothills of public administration, where street-level bureaucrats have traditionally held sway. It is now percolating upwards into higher and broader parts of the public sphere, and in particular into high-level decision-making and the generation of the information on which high-level decisions are based.
The second is that this time the techno-optimism is accompanied by a wave of techno-pessimism: dystopian views that range from the abuse of AI by powerful actors to, at the extreme, the complete displacement of humans by what will, if we reach Artificial Super Intelligence, arguably be a superior life form. There is a real risk that the speed of AI development will overwhelm our ability to put in place guardrails to protect societies and individuals against such possibilities. The same is true of public administration. At the AI summit in Paris in February 2025, the USA and the UK refused to sign a declaration that steps should be taken to ensure that AI was “safe, secure and trustworthy” (Financial Times, 11th February 2025). It would appear that countries may throw caution to the wind in the race to develop AI first - a prospect that should alarm anybody concerned with governance in our society.
Drawing on decision theory and the ideas of John Searle and others, this paper is a form of thought experiment. It will seek to explore the possible impacts of AI on the very essence of public governance and what the reaction(s) of the public to such a change might be. It is an exploration that will incorporate, but go beyond, the questions of trust, ethics, reliability, security, accountability and control that are already the subject of much debate, many proposals and considerable fear, and will seek to address two questions:
Whether it will be possible to retain traditional and critical features of public administration in this brave new world; and
Whether, without a holistic vision of what we want from public administration, we are condemned to be swept along by technology and by politically and economically driven imperatives that take us where we may not want to go.
A Delphi study into the interplay between Artificial Intelligence (AI) and democracy: Mapping opportunities and threats
Jasper KARS
Utrecht University, Netherlands
In the early 21st century, democracies are increasingly confronted with the widespread development and application of Artificial Intelligence (AI). This technological advancement presents both opportunities and challenges for democratic governance, with the potential to either enhance or undermine democratic processes. Drawing on historical parallels, such as the influence of the printing press and digital media, AI is expected to transform politics, often benefiting certain actors or groups adept at leveraging these tools.
This paper explores the fragmented academic discourse surrounding AI and democracy, which spans legal, ethical, and societal implications. While some scholars warn of the risks posed by AI in manipulating political content and eroding democratic accountability, others argue that AI can strengthen democratic innovation by improving deliberative forums and enabling more responsive policy-making.
The complexity of this debate is further influenced by significant uncertainty due to the rapid pace of advancements in AI technology and its equally swift adoption and implementation across various sectors and domains. From a political standpoint, this aligns with the Collingridge dilemma, which underlines that emerging technologies are easier to regulate at an early stage, although their broader implications remain unclear, making the formulation of appropriate policies challenging. As technology matures and its integration into society becomes more widespread, policymakers encounter greater difficulties in developing and adopting effective regulations.
To structure this ongoing debate, this paper introduces a framework centered around the "four Ps"—polity, politics, policy, and polis—each representing a crucial dimension of AI's interaction with democratic governance. To explore future developments in this domain, the study employs the Delphi method, leveraging a variety of expert (academic) opinions to anticipate the most pressing issues in the interaction between AI and democracy over the next five to ten years. The results of the Delphi survey are mapped onto the four dimensions of the interplay between AI and democracy.
The first P, polity, examines the institutional frameworks within which AI is applied, focusing on the democratic structures that are either reinforced or weakened by AI. Politics, the second dimension, investigates how AI shapes and is shaped by power dynamics and political competition. The third P, policy, concerns the outcomes of the political process, which may include policy programs or legislation. Finally, the last dimension, the polis, refers to the broader civic community and public discourse that extend beyond formal institutional arrangements. This dimension explores how AI shapes civic engagement, public opinion, and the overall condition of the democratic public sphere.
By utilizing the four Ps framework, this paper offers a comprehensive approach for analyzing how AI influences—and is influenced by—democracy and democratic innovation at multiple levels. It also outlines both the positive and negative implications that may arise in this intersection in the coming years, based on expert opinions.
The Political Economy of AI
Victor BEKKERS
Erasmus University Rotterdam, Netherlands
If we want to understand what artificial intelligence (AI) implies for public administration, we can distinguish two perspectives. The first focusses on the interplay between the characteristics of AI and the course, content and effects of policy processes. The second zooms out, focussing on the opportunity structure that AI provides: who benefits? AI influences access to, and the distribution of, power in society. This also affects the role of politics. I will address the latter question, using a political economy approach to AI. Such an approach emphasizes that there are different actors in society who have divergent interests and unequal access to resources and power. A discussion about regulating AI should start with a more in-depth analysis of the power relations that lie behind the desire to regulate, thereby sketching the wider ‘power landscape’. The literature shows an emerging interest in adopting a political economy perspective (Nayak & Walton, 2024; Trajtenberg, 2018; Kasy, 2023).
A political economy perspective raises several analytical questions (Weingast et al., 2008; Bekkers & Moody, 2015). The first concerns the nature of power. AI is seen not only as a set of technologies but is defined as a technological system. Characteristic of a systems approach to technology is that it consists of three interdependent elements: a. the technological artefact; b. the activities and resources that are necessary to produce this artefact; and c. the knowledge that is needed to produce and apply this artefact (Bijker et al., 1987; Hughes, 1987). What are the power resources related to applying AI? For instance, in relation to AI not only is specific AI knowledge important, but so too is access to data and energy. The second question relates to understanding the power relations between the relevant actors that exploit the power potential of these AI technologies, in terms of how these technologies are developed, exploited and distributed.
How asymmetric are these relations? Nowadays we witness a concentration of power in the hands of ‘Big Tech’, with serious implications. The third question, understanding the power relations regarding the production and distribution of AI, gives rise to political concerns that address, for example, access to these technologies and democratic control over them. Is politics able to address these issues in such a way that the specific interests of AI companies can be balanced, in a binding way, against the broader public values that relate to society as a whole? Is politics able to create a fair and proper balance? This is the last question. Its urgency becomes visible if we look at the close relationship between the Trump administration and Big Tech. Who controls the governance of AI? What are the consequences of democratic control over AI, and what are the relevant conditions? But also: what are the consequences of a more authoritarian control of AI, and what are the relevant conditions?