Conference Agenda

Overview and details of the sessions of this conference.

Please note that all times are shown in the time zone of the conference.

Session Overview

Session: PSG 5 - The Politics and Management of Policing and Public Safety
Time: Wednesday, 27/Aug/2025, 1:30pm - 3:30pm

Session Chair: Prof. Eckhard SCHROETER, German University of the Police

"Policing and technologies"


Presentations

From legal to social legitimacy: An exploratory study of citizen acceptance of online surveillance by police

Willem BANTEMA

NHL University of Applied Sciences, The Netherlands

As online surveillance tools become increasingly integrated into public safety strategies, the role of police in digital spaces raises pressing questions about legitimacy, authority, and public trust. While legal frameworks for surveillance are often discussed, less is known about how citizens perceive and evaluate these practices—particularly when surveillance occurs in spaces they consider private, such as closed social media groups or chat apps. This paper investigates the conditions under which citizens accept or reject online monitoring by police, and explores the conceptual boundaries between public and private digital domains.

The study draws on a mixed-method design combining two surveys (N=261), 20 semi-structured interviews with civilians, and exploratory conversations with police professionals. In addition, citizen perspectives expressed on online forums and within political debates on privacy and surveillance are analysed. This multi-source approach allows for a layered understanding of public sentiment and the tensions between public safety and individual rights.

Preliminary findings show that citizen acceptance is shaped by transparency, perceived necessity, the identity of the surveilling authority, and the purpose of the monitoring (e.g., targeted prevention vs. broad oversight). Acceptance increases when monitoring is seen as essential for public safety, but concerns around misuse of data, loss of autonomy, and ambiguity about digital “publicness” remain strong. Notably, many respondents draw a clear distinction between open platforms and semi-private or private online spaces, where surveillance is more often perceived as intrusive, even when conducted for preventive purposes.

The paper also engages with the conceptual ambiguity surrounding the term “private” in the context of digital platforms. Citizens’ expectations of privacy in closed groups do not always align with legal or institutional definitions, creating friction between perceived and formal legitimacy.

These findings contribute to broader debates on policing and digital governance, highlighting the need for transparency, clear purposes, and citizen rights to be balanced in developing socially legitimate policies for online police surveillance. By situating citizen perceptions within the context of public order governance, this study offers practical and theoretical insights into how police and public institutions can navigate the blurred boundaries of digital authority in a democratic society.



Exploring the ethics and risks of using Artificial Intelligence to improve policing productivity

Paul WALLEY, Helen GLASSPOOLE-BIRD

The Open University, United Kingdom

Introduction

The current UK government is placing considerable emphasis on the use of Artificial Intelligence (AI) to improve productivity in the public sector, including policing, as set out in its AI strategy (OAI, 2021). The National Audit Office (NAO, 2024) also stated policy aims including:

1. The UK public sector will be world-leading in safe, responsible and transparent use of AI to improve public services and outcomes.

2. The public will benefit from services that have been transformed by AI.

3. Public and civil servants will have the tools, information and skills they need to use AI.

4. All public organisations will be more efficient and productive through AI adoption and have the foundations in place to innovate with the next wave of technologies.

These policies have generated considerable interest in applying AI to policing processes as a potential source of productivity improvement. Existing results have been mixed (Stanley, 2024), pointing towards AI hallucinations and some loss of discipline in policing.

This paper reports on the findings of a case-based study that reviewed a prototype AI application for writing witness reports. The application converts the audio of witness interviews into a series of required documents, including the “MG11” witness statement.

Research Questions

This paper focuses on the risks and ethics raised during the study. The research questions we ask here are:

1. What are the risks, if any, of the use of AI in policing applications to replace some of the functions of police officers in tasks such as witness statement authorship?

2. Are there any ethical issues raised by using AI to assist witnesses and officers in producing these reports?

Methodology

A mixed-methods approach was used in the study:

1. Semi-structured interviews were held with users, police stakeholders and developers of the AI to obtain an understanding of their perceptions of the utility of the application.

2. The quality of a sample of 26 reports (AI- or human-written) was assessed using established methods for analysing report readability and linguistic style (Friginal and Biber, 2016).
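The readability side of such an assessment can be illustrated with a standard formula. The sketch below computes the Flesch reading-ease score using a crude vowel-group syllable counter; it is an illustrative stand-in, not the instrument actually used in the study.

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch reading-ease score: higher values mean easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        raise ValueError("text must contain at least one sentence and word")

    def syllables(word: str) -> int:
        # Approximate syllables as runs of consecutive vowels (minimum 1).
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (total_syllables / len(words)))

# Hypothetical witness-statement fragment for illustration only.
statement = ("I saw the car stop at the gate. "
             "The driver got out and ran towards the shop.")
print(round(flesch_reading_ease(statement), 1))
```

Short sentences and monosyllabic words push the score up; dense, polysyllabic prose pushes it down, which is one simple way to compare AI-written and human-written reports on a common scale.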

Findings

Key findings to be debated are:

• The risks in the output concern subtle errors, omissions or hallucinations that may be difficult to detect

• There is a loss of the voice of the witness when using AI

• The impact on output quality, with a move towards standardisation of work rather than high-quality output

• The impact on police roles when using AI

References

Friginal, E. and Biber, D. (2016) Multi-dimensional analysis. In: Baker, P. and Egbert, J. (eds.), Triangulating Methodological Approaches in Corpus Linguistic Research. Routledge, Abingdon, pp. 73–89.

NAO (2024) Use of Artificial Intelligence in Government. National Audit Office, 12 March, 56 pp.

OAI (2021) National AI Strategy. UK Office for Artificial Intelligence, September 2021.

Stanley, J. (2024) AI Generated Police Reports Raise Concerns Around Transparency, Bias. ACLU Report, December.



Artificial Intelligence and Migration Control: New Challenges and Perspectives for Public Administration in Europe

Alberto MESSINA

Università degli Studi di Palermo, Italy

The growing integration of Artificial Intelligence (AI) into European migration governance is reshaping the role and responsibilities of public administrations. This paper critically investigates how AI-based systems are increasingly employed in the surveillance and management of external borders, and explores their implications for fundamental rights, institutional accountability, and the evolving mandate of public authorities across Europe.

Starting from an analysis of the current legal and policy framework, the study maps the adoption of technologies such as biometric recognition systems, risk profiling algorithms, and automated decision-making tools—particularly within EU platforms like ETIAS, EES, SIS, and EUROSUR. These infrastructures allow for the preemptive screening of third-country nationals and the cross-border circulation of personal data, reinforcing a model of digitalized migration management deeply embedded in public administrative practice.

While such systems are often presented as tools for enhancing efficiency and security, the paper highlights the risks they pose to core legal guarantees, including the right to asylum, the principle of non-refoulement, and the right to an effective remedy. The opacity of algorithmic logic, the potential for discriminatory outcomes, and the erosion of procedural safeguards raise crucial questions for European public administrations tasked with implementing and overseeing these technologies.

Drawing on recent jurisprudence from the European Court of Human Rights and the Court of Justice of the European Union, the paper underlines the growing importance of human rights impact assessments and judicial review mechanisms in mitigating the risks associated with high-risk AI systems. Nonetheless, the analysis of the final version of the AI Act (Regulation EU 2024/1689) reveals persistent normative gaps. Despite the regulation’s ambitions, it fails to impose sufficient safeguards on AI applications in migration control, granting broad discretionary powers to public authorities without clear transparency obligations or effective oversight.

The paper argues that this regulatory shortfall reflects deeper tensions within the European public administration landscape—between innovation and rights protection, security logic and legal certainty, efficiency and democratic accountability. It also points to the urgent need for public administrations to develop the institutional capacity, normative awareness, and technical competence necessary to govern AI systems responsibly, especially when they affect vulnerable populations.

By combining legal analysis with public policy reflection, this research contributes to the broader debate on the future of public administration in Europe. It invites policymakers, scholars, and practitioners to critically engage with the digital transformation of migration governance and to shape a regulatory environment that balances technological progress with fundamental rights and the principles of good administration.

This paper aligns with the goals of the EGPA 2025 Conference by offering a forward-looking perspective on the challenges and responsibilities of European public administrations in the age of AI, and by proposing concrete avenues for ensuring legal compliance, ethical use, and institutional resilience in the face of technological change.