
Artificial intelligence and policing: it’s a matter of trust

Posted on August 30, 2022

From Robocop to Minority Report, the intersection between policing and artificial intelligence has long captured attention in the realm of high-concept science fiction. However, only over the past decade or so have academic research and government policy begun to focus on it.

Teagan Westendorf’s ASPI report, Artificial intelligence and policing in Australia [1], is one recent example. Westendorf argues that Australian government policy and regulatory frameworks don’t sufficiently capture the current limitations of AI technology, and that these limitations may ‘compromise [the] principles of ethical, safe and explainable AI’ in the context of policing.

My aim in this article is to expand on Westendorf’s analysis of the potential challenges in policing’s use of AI and offer some solutions.

Westendorf focuses primarily on a particular police use of AI, namely statistical inferencing used to make (or inform) decisions—in other words, technology that falls broadly into the category of ‘predictive policing’.

While predictive policing applications pose the thorniest ethical and legal questions and therefore warrant serious consideration, it’s important to also highlight other applications of AI in policing. For example, AI can assist investigations [2] by expediting the transcription of interviews and the analysis of CCTV footage. Image-recognition algorithms can also help detect and process [3] child-exploitation material, helping to limit human exposure. Drawing attention to these applications can help prevent the conversation from becoming too focused on a small but controversial set of uses. Such a focus could risk poisoning the well for the application of AI technology to the sometimes dull and difficult (but equally important) areas of day-to-day police work.

That said, Westendorf’s main concerns are well reasoned and worth discussing. They can be summarised as the problem of bias and the problem of transparency (and its corollary, explainability).

Like all humans, police officers can have both conscious and unconscious biases that may influence decision-making and policing outcomes. Predictive policing algorithms often need to be trained on datasets capturing those outcomes. Yet if algorithms are trained on historical datasets that include the results of biased decision-making, they can unintentionally replicate (and in some cases amplify [4]) the original biases. Efforts to ensure systems are free of bias can also be hampered by ‘tech-washing [5]’, where AI outputs are portrayed (and perceived) as based solely on science and mathematics and therefore inherently free of bias.
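To make that mechanism concrete, here is a minimal sketch in Python using entirely synthetic data; the neighbourhood variable, rates and model are hypothetical and not drawn from any real policing system. It shows how a risk model trained on arrest records shaped by uneven patrolling can score one group as higher risk even when underlying behaviour is identical across groups.

```python
# Hedged, illustrative sketch only: synthetic data, hypothetical 'neighbourhood' groups.
# A model trained on historically biased arrest records reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# True underlying offending behaviour is identical across two neighbourhoods.
neighbourhood = rng.integers(0, 2, n)      # 0 = A, 1 = B (hypothetical labels)
offending = rng.binomial(1, 0.10, n)       # same 10% base rate everywhere

# Historical records: neighbourhood B was patrolled more heavily, so offences
# there were far more likely to end up as recorded arrests.
detection_rate = np.where(neighbourhood == 1, 0.9, 0.3)
recorded_arrest = offending * rng.binomial(1, detection_rate)

# Train a 'risk' model on the recorded (biased) outcomes.
X = np.column_stack([neighbourhood, rng.normal(size=n)])   # second column is noise
model = LogisticRegression().fit(X, recorded_arrest)

# Predicted risk differs sharply by neighbourhood despite identical behaviour.
for group in (0, 1):
    risk = model.predict_proba(X[neighbourhood == group])[:, 1].mean()
    print(f"mean predicted risk, neighbourhood {group}: {risk:.3f}")
```

The point is not the particular model but the data: the disparity in the scores comes entirely from how the training labels were generated.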

Related to these concerns is the problem of transparency and explainability. Some AI systems lack transparency because their algorithms are closed-source proprietary software. But it can be difficult to render even open-source algorithms explainable—particularly those used in machine learning—due to their complexity. After all, a key benefit of AI lies in its ability to analyse large datasets and detect relationships that are too subtle for the human mind to identify [6]. Making models more comprehensible by simplifying them may require trade-offs in sensitivity [6], and therefore also in accuracy. Together these concerns are often referred to as the ‘AI black box’ (inputs and outputs are known, but not what goes on in the middle).
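That trade-off can be illustrated with another small Python sketch on synthetic data (the dataset and models below are illustrative only, not those used by any police agency): a shallow decision tree can be printed and read end to end, while a larger ensemble typically recovers more of the subtle structure but offers no comparably compact explanation.

```python
# Illustrative sketch of the comprehensibility/accuracy trade-off (synthetic data only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A shallow tree can be printed and read in full, but misses subtler patterns.
simple = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# A large ensemble captures more of those patterns but resists plain-language explanation.
complex_model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("shallow tree accuracy:", round(simple.score(X_test, y_test), 3))
print("ensemble accuracy    :", round(complex_model.score(X_test, y_test), 3))
print(export_text(simple))   # the entire 'explanation' fits in a few lines
```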

In short, a lack of transparency and explainability makes the detection of bias and discriminatory outputs more difficult. This is both an ethical concern and a legal one when justice systems require that charging decisions be understood by all parties to avoid discriminatory practices. Indeed, research suggests [7] that when individuals trust the process of decision-making, they are more likely to trust the outcomes in justice settings, even if those outcomes are unfavourable. Explainability and transparency can therefore be important considerations when seeking to enhance public accountability and trust in these systems.

As Westendorf points out, steps can be taken to mitigate bias, such as pre-emptively coding against foreseeable biases and involving human analysts in the processes of building and leveraging AI systems. With these sorts of safeguards in place (as well as deployment reviews and evaluations), use of AI may have the upshot of establishing built-in objectivity for policing decisions [8] by reducing reliance on heuristics and other subjective decision-making practices. Over time, AI use may assist in debiasing policing outcomes.
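One concrete form those reviews and evaluations could take is a routine disparity check on held-out data that escalates to a human analyst whenever group-level outputs diverge too far. The sketch below is a hypothetical illustration only: the threshold, group labels and scores are invented, and a real audit would need far richer fairness measures and legal input.

```python
# Hypothetical pre-deployment bias audit: flag a model for human review when
# mean predicted risk diverges between groups. Threshold and data are invented.
import numpy as np

def audit_group_disparity(predicted_risk: np.ndarray,
                          group: np.ndarray,
                          max_ratio: float = 1.25) -> bool:
    """Return False (escalate to a human analyst) if mean predicted risk
    differs between groups by more than `max_ratio`."""
    rates = {int(g): predicted_risk[group == g].mean() for g in np.unique(group)}
    ratio = max(rates.values()) / max(min(rates.values()), 1e-9)
    print("mean predicted risk per group:", {k: round(v, 3) for k, v in rates.items()})
    print("disparity ratio:", round(ratio, 2))
    return ratio <= max_ratio

# Example with synthetic scores in which group 1 is scored noticeably higher.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)
scores = np.clip(rng.normal(0.3 + 0.2 * group, 0.1), 0, 1)
print("passes audit:", audit_group_disparity(scores, group))
```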

While there’s no silver bullet for enhancing explainability, there are plenty of suggestions [9], particularly when it comes to developing AI tools that help explain other AI systems. Transparency challenges generated by proprietary systems can also be alleviated when AI systems are owned by police and designed in-house.

Yet the need for explainability is only one consideration for enhancing accountability and public trust in the use of AI systems by police, particularly when it comes to predictive policing. Recent research [10] has found that people’s level of trust in the police (which is relatively high in Australia [11]) correlates with their level of acceptance of changes in the tools and technology used by police. In another study [12], participants exposed to purportedly successful policing applications of AI technology were more likely to support wider police use of such technologies than those exposed to unsuccessful uses, or not exposed to examples of AI application at all. In fact, participants exposed to purportedly successful applications even judged the decision-making process involved to be trustworthy.

This suggests that broader public trust in policing itself will be vital to sustaining confidence in police use of AI, regardless of the degree of algorithmic transparency and explainability. The goal of transparent and explainable AI shouldn’t neglect this broader context.




URLs in this post:

[1] Artificial intelligence and policing in Australia: https://www.aspi.org.au/report/ai_policing_australia

[2] can assist investigations: https://www.zdnet.com/article/nsw-police-using-artificial-intelligence-to-analyse-cctv-footage/

[3] can also help detect and process: https://www.monash.edu/it/futurist/explore/case-studies/articles/over-exposed-monash-and-federal-police-launch-lab-to-tackle-abhorrent-materials

[4] amplification: https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/#footnote-6

[5] tech-washing: https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/

[6] too subtle for the human mind to identify: https://ieeexplore.ieee.org/document/8821442

[7] research suggests: https://www.jstor.org/stable/10.7758/9781610445429

[8] establishing built-in objectivity for policing decisions: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4029138

[9] there are plenty of suggestions: https://nvlpubs.nist.gov/nistpubs/ir/2021/NIST.IR.8312.pdf

[10] Recent research: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7454338/

[11] which is relatively high in Australia: https://www.abs.gov.au/statistics/people/people-and-communities/general-social-survey-summary-results-australia/2020

[12] In another study: https://link.springer.com/article/10.1007/s11292-021-09484-9
