Introduction
The current UK government places considerable emphasis on the use of Artificial Intelligence (AI) to improve productivity in the public sector, including policing, as set out in its AI strategy (OAI, 2021). The National Audit Office (NAO, 2024) also stated policy aims including:
1. The UK public sector will be world-leading in safe, responsible and transparent use of AI to improve public services and outcomes.
2. The public will benefit from services that have been transformed by AI.
3. Public and civil servants will have the tools, information and skills they need to use AI.
4. All public organisations will be more efficient and productive through AI adoption and have the foundations in place to innovate with the next wave of technologies.
These policies have generated considerable interest in applying AI to policing processes as a potential source of productivity improvement. Results to date have been mixed (Stanley, 2024), pointing to AI hallucinations and some loss of discipline in policing.
This paper reports the findings of a case-based study that reviewed a prototype AI application for writing witness reports. The application converts the audio of witness interviews into a series of required documents, including the “MG11” witness statement.
Research Questions
This paper focuses on the risks and ethics raised during the study. The research questions we ask here are:
1. What are the risks, if any, of the use of AI in policing applications to replace some of the functions of police officers in tasks such as witness statement authorship?
2. Are there any ethical issues raised by using AI to assist witnesses and officers to produce these reports?
Methodology
A mixed-methods approach was used in the study:
1. Semi-structured interviews were held with users, police stakeholders and developers of the AI to obtain an understanding of their perceptions of the utility of the application.
2. The quality of a sample of 26 reports (AI- or human-written) was assessed using established methods for analysing report readability and linguistic style (Friginal and Biber, 2016).
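The readability side of step 2 can be illustrated with a minimal, dependency-free sketch. The Flesch Reading Ease formula used here is a standard readability measure, but the syllable counter is a crude heuristic and the sample statement is invented for illustration; the study itself relied on the richer multi-dimensional analysis of Friginal and Biber (2016).

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per group of consecutive vowels.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words).
    Higher scores indicate easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Hypothetical witness-statement fragment for illustration only.
statement = (
    "I saw the suspect leave the building at approximately nine o'clock. "
    "He was carrying a dark holdall and walked quickly towards the car park."
)
print(round(flesch_reading_ease(statement), 1))
```

Comparing such scores across AI-generated and human-written statements gives one simple, quantitative axis for the quality comparison, alongside the stylistic dimensions the study examined.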
Findings
Key findings to be debated are:
• The main output risk concerns subtle errors, omissions or hallucinations that may be difficult to detect
• There is a loss of the voice of the witness when using AI
• The impact on output quality, with a move towards standardisation of work rather than high-quality output
• The impact on police roles when using AI
References
Friginal, E. and Biber, D. (2016) 'Multi-dimensional analysis', in Baker, P. and Egbert, J. (eds.) Triangulating Methodological Approaches in Corpus Linguistic Research. Abingdon: Routledge, pp. 73–89.
NAO (2024) Use of artificial intelligence in government. National Audit Office, 12 March.
OAI (2021) National AI Strategy. UK Office for Artificial Intelligence, September 2021.
Stanley, J. (2024) 'AI Generated Police Reports Raise Concerns Around Transparency, Bias'. ACLU, December.