By Samuel Hoar
Predictive Policing
Derek Chauvin’s murder of George Floyd on 25 May 2020 sparked a wave of global protests against police racial discrimination and broader racial inequalities. In this light, concerns about the practice of predictive policing have gathered momentum and received greater public attention. This piece evaluates the supposed benefits of predictive policing and its discriminatory side-effects, and asks if a future exists in which it can be used to help create safer societies that are unaffected by police bias.
What is Predictive Policing?
Predictive policing involves the collection and analysis of data about previous crimes to identify and predict individuals or geospatial areas with an increased probability of criminal activity (Meijer and Wessels, 2019). In other words, it tries to predict when and where crimes will occur in the future by using algorithms that analyse previously committed crimes. Two broad types of predictive policing exist. The first is location-based; these tools use algorithms to forecast when and where crimes are more or less likely to happen by analysing historical data about dates, places, crime rates, weather, and large events (Heaven, 2020). The algorithm then generates geographical ‘hot spots’ where crime is deemed more likely to occur, and these areas are targeted by police patrols. The second type is individual-based; these tools evaluate data on people, such as their age, gender, marital status, history of substance abuse, and criminal record, to predict the chance of someone becoming involved in criminal activity (Ibid.).
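To make the location-based approach concrete, the sketch below buckets historical incidents into fixed-size grid cells and ranks the busiest cells as ‘hot spots’. It is a minimal illustration only: the cell size, coordinates, and ranking rule are assumptions, not a reconstruction of any vendor’s actual algorithm.

```python
# A minimal sketch (not any vendor's algorithm): bucket historical incidents
# into fixed-size grid cells and rank the busiest cells as 'hot spots'.
from collections import Counter

CELL_SIZE_M = 150  # assumed cell size, in the spirit of PredPol's 150 m boxes


def cell_of(x_m, y_m):
    """Map a location (metres on a local grid) to a grid-cell index."""
    return (int(x_m // CELL_SIZE_M), int(y_m // CELL_SIZE_M))


def hot_spots(incidents, top_k=3):
    """Return the top_k grid cells containing the most recorded incidents."""
    counts = Counter(cell_of(x, y) for x, y in incidents)
    return [cell for cell, _ in counts.most_common(top_k)]


# Hypothetical history: three incidents cluster in one cell, one lies elsewhere.
history = [(10, 20), (40, 90), (120, 60), (800, 400)]
print(hot_spots(history, top_k=2))  # [(0, 0), (5, 2)]
```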
A Brief History
Statistical and geospatial crime forecasting dates back to the 1940s, when the criminologists Shaw and McKay linked juvenile delinquency to, amongst other things, inadequate housing, poverty and mental disorders (Strikwerda, 2020). The 1990s saw the first uses of computerised predictive policing; Compstat provided early versions of ‘heat maps’ that depicted geographical areas of expected high and low crime rates (Ibid.). It was not, however, until the late 2000s that these tools gained widespread usage. In the wake of the 2008 economic crisis, predictive policing technology increased in appeal because it promised greater efficiency at a time of limited police resources (Sassaman, 2020). The Los Angeles Police Department (LAPD) was an early adopter of these tools, using a variety of different programs from 2008, including LASER and PredPol; PredPol is a major predictive policing technology company whose software splits maps into 150-by-150-metre boxes and uses historical data to calculate likely crime ‘hot spots’ in these geospatial areas (Lau, 2020; Lorinc, 2021). The New York Police Department (NYPD) began to use PredPol (amongst other software) soon after, before developing its own predictive policing algorithms in 2013. The Chicago Police Department (CPD) followed by using person-based predictive policing software to create a ‘heat list’ of ‘strategic subjects’ likely to commit gun crime based on demographics, arrest history, and social media details (Lau, 2020). By 2019, this system had assigned ‘high-risk’ scores to over 400,000 people and was viewed by the CPD as its primary tactic to prevent violent crime (Lorinc, 2021).
Predictive policing has since spread far beyond US borders. CAS, a geographical predictive policing tool, has been used in the Netherlands since 2019, marking the first time a country has used predictive policing on a nationwide scale (Strikwerda, 2020). In similar style to PredPol, CAS splits areas on maps into grids, and uses the ‘near-repeat’ concept – the idea that crime is likely to occur where it has recently happened – to predict where future crime is more or less likely to occur (Ibid.). At least fourteen police forces in the UK are using predictive policing tools (as of February 2019), as are Canadian police forces in Vancouver, Edmonton and Saskatoon, and Australian police forces in several states and territories (Strikwerda, 2020; Lorinc, 2021).
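The ‘near-repeat’ idea can likewise be sketched in a few lines: each recorded incident raises the predicted risk of nearby grid cells, with the contribution decaying as the incident becomes older and more distant. The decay constants and data below are illustrative assumptions, not the parameters of CAS or any other deployed system.

```python
# An illustrative near-repeat score: past incidents contribute risk that decays
# exponentially with distance and with age. Scales below are assumptions only.
import math


def near_repeat_risk(cell_xy, incidents, now,
                     dist_scale_m=300.0, time_scale_days=14.0):
    """Sum decaying contributions from past incidents given as (x_m, y_m, t_days)."""
    cx, cy = cell_xy
    risk = 0.0
    for x, y, t in incidents:
        distance = math.hypot(x - cx, y - cy)
        age_days = now - t
        risk += math.exp(-distance / dist_scale_m) * math.exp(-age_days / time_scale_days)
    return risk


# A burglary two days ago, 100 m away, contributes far more risk than one
# recorded a year ago on the other side of town.
print(near_repeat_risk((0, 0), [(100, 0, 8), (5000, 0, -355)], now=10))
```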
The Appeal of Predictive Policing
Police forces have always attempted to predict when and where crimes will happen, and who may commit the crimes; proactive policing (or ‘upstream prevention’) is clearly preferable to reacting to crimes once they have happened (Meijer and Wessels, 2019). Using predictive tools arguably allows police forces to prevent crimes more effectively. NYPD Commissioner William J. Bratton was brash about this when he lauded the newest release of Compstat in 2015, comparing the technology to the semi-human precogs in the Tom Cruise film Minority Report (Harvard University Press Blog, 2020).
There are specific crimes for which predictive policing works particularly well. Burglary is a good example; it is highly likely that one burglary in a neighbourhood will be closely followed by several more, and sophisticated prediction tools can therefore accurately predict when and where burglars will be active in the near future (Stilgherrian, 2021). Similarly, police officers have long tried to surveil gang members and map the connections between them, but thanks to predictive policing, according to sociologist Roderick Graham, they can now do so using ‘more precise statistical methods’ (Ibid.). Individual profiling can thereby identify members of criminal groups and potential future offenders, enabling police forces to prevent crimes from happening (Van Brakel and de Hert, 2011). Moreover, it has been demonstrated that the risk of an individual committing a crime increases for a period of time if they are socially connected to a convicted offender, further indicating that individual-based profiling may help to prevent crime (Kump et al., 2016).
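A crude illustration of this network effect is sketched below: given a hypothetical social graph and a set of convicted offenders, a person-based score simply counts how many of someone’s direct contacts are offenders. The graph, names, and scoring rule are invented for illustration and are not the model used by Kump et al. or any police force.

```python
# A crude person-based score: count how many of someone's direct contacts are
# convicted offenders. Graph, names, and rule are hypothetical illustrations.

def offender_contact_score(person, social_graph, convicted):
    """Number of a person's direct contacts who are convicted offenders."""
    return sum(1 for contact in social_graph.get(person, set()) if contact in convicted)


social_graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
}
convicted = {"dave"}

for person in social_graph:
    print(person, offender_contact_score(person, social_graph, convicted))
# bob scores 1 (connected to dave); alice and carol score 0.
```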
Police officers will inevitably hold both conscious and unconscious biases. Advocates of predictive policing argue that using neutral, quantitative data will prevent these biases from interfering with policing: Bratton describes predictive policing as ‘discriminating, not discriminatory… precise, not prejudiced’ (Harvard University Press Blog, 2020). Supporters of predictive policing also believe that it allows police departments to work more accurately, decrease crime rates, and become more cost efficient (Strikwerda, 2020). There is some evidence to support this. Metro21: Smart Cities Institute at Carnegie Mellon University, which developed the predictive policing program used by the Pittsburgh Bureau of Police, claimed that violent crime had decreased by 34% in the crime ‘hot spots’ its software had identified (Coleman, 2020). Moreover, in controlled trials conducted in Los Angeles and Kent, Mohler et al. found that aligning police patrols with predictive policing forecasts reduced crime, as a function of patrol time, by 7.4% (Meijer and Wessels, 2019). In another study, it was demonstrated that ‘disorder-related’ Twitter posts were reflective of crime incidents in London and could be used to help predict crime (Lorinc, 2021). Such findings are cited by advocates of predictive policing as proof of its value.
The Downside: ‘Garbage In, Garbage Out’
The primary issue with predictive policing is that it perpetuates discriminatory policing practices whilst wearing a veneer of technological neutrality. For predictive policing tools to reduce police bias and accurately predict crime, the quantitative data on which the algorithms run must be accurate and neutral. Unfortunately, it is not.
A 2019 study by the AI Now Institute described how the predictive policing tools used by some police departments ran on data that was ‘derived from or influenced by corrupt, biased, and unlawful practices’, including both discriminatory policing and manipulation of crime statistics (Lau, 2020). One explanation for this is that arrest data is often used to train predictive policing tools. For example, the CPD’s infamous ‘heat list’ system was generated solely on the basis of arrest records (Lorinc, 2021). Arrest records do not give an accurate depiction of criminal activity; black Americans are five times as likely to be stopped by the police without just cause as white Americans, and are twice as likely to be arrested, according to US Department of Justice figures (Heaven, 2020). The UK faces similar issues. BAME people are stopped 2.4 times as frequently as white people by the Metropolitan Police, a ratio which rises to over 4 times in Gloucestershire and over 5 times in Suffolk (Wright, 2021). Arrest data that trains predictive policing tools is not neutral; it instead encodes racist policing into algorithms and allows a discriminatory past to shape the future.
Victim report data, as used by some tools, creates discriminatory and inaccurate predictions for similar reasons (Heaven, 2021). As demonstrated by Nil-Jana Akpinar, Alexandra Chouldechova, and Maria De-Arteaga, who built their own predictive policing tool that closely resembled the algorithm used by PredPol, predictive policing tools make significant crime prediction errors because black people are significantly more likely to be reported for a crime (or even non-criminal activity) than white people are (Ibid.). Some predictive policing tools also incorporate data on where calls to the police have been made. This is problematic because black people are much more likely than white people to have the police called on them. Think of the instance of white Amy Cooper unjustifiably calling the police on black bird-watcher Christian Cooper in New York’s Central Park; of a white Yale student calling the police on a black student sleeping in a common area; or of the white woman who called the police on a black family having a barbecue in a park that allowed barbecuing (Heaven, 2020; Weaver, 2018). These examples demonstrate that the data that feeds these tools is not neutral, and therefore that predictive policing actually increases discriminatory policing.
Predictive policing tools used in the US are prevented by law from using race as a predictor, but variables that they do consider, such as socioeconomic background, education, and zip code, create de facto racial consideration (Heaven, 2020). As Roderick Graham states, ‘because racist police practices overpoliced black and brown neighborhoods in the past, this appears to mean these are high crime areas, and even more police are placed there’ (Stilgherrian, 2021). Historical discriminatory police data thereby ‘replicates and supercharges bias in policing by sending police to places that they’ve policed before’, increasing the overpolicing of non-white neighbourhoods (Sturgill, 2020, quoting Matt Cagle, technology and civil liberties attorney with the American Civil Liberties Union of Northern California). Increasing police presence in these areas is not only unjustified but also increases the number of instances of discriminatory policing; identifying an area as a ‘hot spot’ readies patrol officers for trouble, and preconditions them to expect crime rather than to respond to realities (Babuta and Oswald, 2019). Hamid Khan, founder of the Stop LAPD Spying Coalition, explains this point powerfully: ‘location-based policing is a proxy for racism. They’re not there to police potholes and trees. They are there to police people in the location. So the location gets criminalized, people get criminalized, and it’s only a few seconds before the gun comes out and somebody gets shot’ (Lorinc, 2021). Legal scholars such as Andrew Ferguson also worry that predictive policing engenders illegitimate suspicion in patrolling officers, and that the use of this technology is therefore potentially unconstitutional (Lau, 2020).
By inputting ‘dirty data’ into predictive policing tools, individual officer bias is merely replaced (or exacerbated) by data-driven, structural bias (Ibid.). As Tarik Aougab (one of the mathematicians who collectively called for fellow researchers to boycott all work related to predictive policing) explains, predictive policing creates ‘huge structural bias’ whereby the types of crime, the location of crime, and the perpetrators of crime are predicted by misleading and racially skewed data (Linder, 2020). This issue transcends America’s borders. If you were to search for offensive language crimes in Australian police databases, it would appear as if it is only Indigenous people who swear, because it is only Indigenous people who are charged by the police for swearing (Stilgherrian, 2021, quoting Bennett Moses, director of the Allens Hub for Technology, Law and Innovation at the University of New South Wales). Unsurprisingly, when the New South Wales Bureau of Crime Statistics and Research evaluated the performance of New South Wales Police’s predictive policing tool (STMP-II), it found that the roughly 10,100 individuals subject to STMP-II since 2005 were ‘disproportionately Aboriginal’ (Ibid.). This example neatly demonstrates the ‘Garbage In, Garbage Out’ problem that currently corrupts predictive tools; inputting low quality or misleading data into a system produces low quality or misleading results (Garvie, 2019). Even the types of crime that predictive policing tools predict will occur are warped by the input data. Whilst some crimes, such as domestic abuse, are typically underreported and are therefore not fed into predictive policing algorithms, crimes for which black people are disproportionately accused, such as using counterfeit bills, are more likely to be flagged by predictive policing systems (Strikwerda, 2020; USSC, 2014). Predictive policing tools are thus corrupted by a history of discriminatory policing that prevents them from acting with neutrality.
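The feedback loop this creates can be illustrated with a toy simulation (written in the spirit of such critiques, not as a reconstruction of any deployed tool): two neighbourhoods with identical true crime rates, one of which starts with more recorded incidents because it was historically overpoliced. If patrols follow the recorded data, and recorded crime follows the patrols, the initial bias never corrects itself.

```python
# Toy feedback-loop simulation: two neighbourhoods with the SAME true crime
# rate, but A starts with more recorded incidents because it was historically
# overpoliced. Patrols follow recorded crime; recorded crime follows patrols.
# All numbers are invented for illustration.
TRUE_CRIME_RATE = {"A": 0.1, "B": 0.1}   # identical underlying rates (assumed)
recorded = {"A": 60.0, "B": 40.0}        # biased historical record (assumed)
TOTAL_PATROLS = 100

for year in range(5):
    total = sum(recorded.values())
    for hood in recorded:
        patrols = TOTAL_PATROLS * recorded[hood] / total    # police go where crime was recorded
        recorded[hood] += patrols * TRUE_CRIME_RATE[hood]   # more patrols -> more recorded crime
    print(year, {h: round(v, 1) for h, v in recorded.items()})

# A keeps receiving 60% of the patrols and generating 60% of the recorded crime
# every year, even though the true rates are identical, and the absolute gap
# between the neighbourhoods' recorded incidents widens each year.
```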
Predictive Policing is Largely Ineffective
Predictive policing tools often have a negligible impact on crime rates, making it even more difficult to justify their usage. Numerous reports and evaluations of different predictive policing models have argued that the tools provide few tangible results. Perhaps the most extensive independent evaluation of predictive policing so far, the RAND report, concluded that the most sophisticated predictive policing technologies produce an ‘incremental at best’ advantage over other policing methods, and that they tended to ‘show diminishing returns’ (Hvistendahl, 2016). The RAND Corporation also conducted a study of the CPD ‘Strategic Subjects List’ (SSL) and found it to be ineffective (Strikwerda, 2020). Similarly, a review of predictive policing by the Institute for International Research on Criminal Policy found that whilst in some places it helped to decrease crime rates, in others it made a negligible difference (Lorinc, 2021). Moreover, an evaluation of CAS by the Dutch Police Academy concluded that a decrease in Amsterdam burglaries did not correlate with the predictions CAS had made (Strikwerda, 2020). It is very difficult to defend something that enhances discriminatory policing whilst generating few substantive results.
Transparency Concerns
Police forces across the globe are refusing to be transparent about how their predictive policing tools work. Details of how the LAPD’s predictive policing tools functioned were only revealed after many years of activism demanding greater transparency (Lau, 2020). Similarly, it took years of legal efforts, led by the Brennan Center, to obtain documents under the Freedom of Information Law that revealed some information about the NYPD’s in-house predictive policing algorithm (Ibid.). It took a journalistic investigation by The Verge to reveal that the secretive data-mining firm Palantir had developed the predictive tool used by police in New Orleans; this company also received a $2.5 million payment from the NYPD that the police department refuses to explain (Heaven, 2020). Another journalistic investigation, alongside a data breach, revealed that police forces in Queensland, Victoria, and South Australia all used tools provided by Clearview AI, the controversial company with links to Palantir (Stilgherrian, 2021). Victoria Police still refuses to reveal how its predictive policing algorithm works, and has not even revealed its name (Ibid.).
There are explanations for this secrecy. The common reluctance to reveal how the algorithms work is explained by the desire of police forces to protect the efficacy of the tools; police forces do not want potential criminals to understand how the technology works, however unlikely it is that they will have the technical literacy required to comprehend such sophisticated algorithms. The companies that produce the technology defend their secrecy by stating that they cannot share information about their algorithms because it would reveal confidential information about the people that the tools have assessed, and that it would also infringe upon intellectual property rights (Lorinc, 2021). This allows these algorithms to become ‘black boxes’ which private companies sell to police forces with no technical explanation required; predictive policing tools are thus not always properly understood by those using them, and are protected from being audited or evaluated by anyone on the outside (Ibid.). For predictive policing tools to become more palatable, there must be greater transparency about how they work.
How Can Predictive Policing Be Improved?
There is growing international awareness that predictive policing is currently unfit for purpose. The United Nations Committee on the Elimination of Racial Discrimination has concluded that predictive policing systems that rely on historical data ‘can easily produce discriminatory outcomes’ (Stilgherrian, 2021). The Vice-President of the EU Commission has meanwhile stated that the use of predictive policing tools is ‘not acceptable’ (Ibid.). In the US, the LAPD stopped using PredPol technology after the Stop LAPD Spying Coalition discovered that flaws in the algorithm unjustly labelled many black and Latino citizens as ‘high risk’ (Lorinc, 2021). The Pittsburgh Bureau of Police has suspended its ‘hot spot’ program, and the CPD has also abandoned its controversial predictive policing tool. Santa Cruz, one of the first cities to adopt predictive technology, banned its usage in 2020, with the city’s police chief Andy Mills describing the discriminatory impact of the technology as a ‘blind spot he didn’t see’ (Sassaman, 2020).
A useful way to extract value from these tools may be to repurpose them, or to combine them with other policing approaches. For example, the Chicago Violence Reduction Strategy identifies individuals at risk of becoming either violent offenders or victims and provides them with access to social services and employment assistance (Hvistendahl, 2016). Some of these individuals appear on the SSL, but most are selected through classic observation techniques. In another example, Pittsburgh’s predictive policing tool was discontinued following the realisation that it was enhancing systemic inequalities and disproportionately increasing police presence in black and Latino areas (Coleman, 2020). In response, Mayor Bill Peduto recommended that identified ‘hot spots’ could be managed by the Office of Community Health and Safety to allow ‘public safety to step back and determine what kind of support an individual or family needs…hot spots may benefit from the aid of a social worker, service provider or outreach team, not traditional policing’ (Ibid.). As Christopher Deluzio, one of those involved in the repurposing of Pittsburgh’s predictive policing tools, explains, ‘if there are tools that can tell us where things are happening that require interventions, why not send services?’ (Ibid.). Similarly, Erin Dalton, the deputy director for the Office of Analytics, Technology and Planning at the Allegheny County Department of Health and Human Services, has developed an algorithm, comparable to predictive policing tools in many ways, that identifies which homeless people should be prioritised for rapid rehousing or permanent supportive housing (Ibid.). Altering the purpose of predictive policing technology could prove to be an inspired solution to a myriad of social issues.
A more radical solution is to incorporate affirmative action into predictive policing tools. This approach was taken in a study by Skeem and Lowenkamp, who examined different ways in which the predictor bias in these tools can be removed (Skeem and Lowenkamp, 2015). They concluded that optimal results were produced when algorithms assigned black people a higher threshold than white people for being deemed high risk. This conclusion is easy to understand. Police racial discrimination against black people pervades the existing input data, so to counteract black people and neighbourhoods being unfairly labelled as high risk, the level at which they are deemed high risk should be higher than for white people and neighbourhoods. Existing tools are currently legally prohibited from doing this, and it is both a legally and morally controversial idea.
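A hedged sketch of how group-specific thresholds could work is given below. The risk scores, groups, and the assumed ten-point inflation are invented for illustration; the point is only that if biased inputs inflate one group’s scores, a single threshold flags that group far more often, while an offsetting threshold equalises the flagging rates.

```python
# Invented risk scores: group A's scores are inflated by ~10 points by biased
# inputs; the underlying risk is assumed comparable across the two groups.

def high_risk_rate(scores, threshold):
    """Fraction of people whose score meets or exceeds the threshold."""
    return sum(s >= threshold for s in scores) / len(scores)


group_a_scores = [55, 62, 68, 71, 74, 80]   # inflated by biased input data (assumed)
group_b_scores = [45, 52, 58, 61, 64, 70]   # same underlying risk (assumed)

single_threshold = 65
print(high_risk_rate(group_a_scores, single_threshold))   # ~0.67 of group A flagged
print(high_risk_rate(group_b_scores, single_threshold))   # ~0.17 of group B flagged

# A group-specific threshold that offsets the known inflation equalises the rates.
print(high_risk_rate(group_a_scores, single_threshold + 10))  # ~0.17
print(high_risk_rate(group_b_scores, single_threshold))       # ~0.17
```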
Conclusion
Predictive policing tools are currently failing. The desire of police forces to decrease crime, racial bias, and costs is both admirable and necessary, but these tools are currently creating unintentional negative feedback loops of prejudice and overpolicing. This is caused by the tools’ reliance on historical data that is tainted with police racial bias and prejudice. This data fuels these tools, further encouraging police officers to expect to find crime where there is none, or to unreasonably predict that an individual will commit a crime in the future. Although some predictive policing tools have decreased crime rates, many have not, and it is thus difficult to support a concept that currently enhances discriminatory policing whilst generating few substantive results. This is not to say that there is no place for such algorithms in the future, if they are adapted or repurposed. The veils of secrecy that shroud these tools must also be lifted, to enable both police forces and external auditors to fully understand their algorithms. If these things happen, predictive policing may yet have a future role contributing to fairer and safer societies.
Bibliography
Babuta, Alexander and Oswald, Marion (2019), ‘Data Analytics and Algorithmic Bias in Policing’, Royal United Services Institute, September.
Coleman, Emma (2020), ‘One City Rejected a Policing Algorithm. Could It Be Used For Other Purposes?’, Route Fifty, July 16.
Garvie, Claire (2019), ‘Garbage In, Garbage Out: Face recognition on flawed data’, Georgetown Law Center on Privacy and Technology, May 16.
Grierson, Jamie (2019), ‘Predictive policing poses discrimination risk, thinktank warns’, The Guardian, September 16.
Harvard University Press Blog (2020), extract from Khalil Gibran Muhammad, The Condemnation of Blackness: Race, Crime, and the Making of Modern Urban America, June 26.
Heaven, Will Douglas (2020), ‘Predictive policing algorithms are racist. They need to be dismantled’, MIT Technology Review, July 17.
Heaven, Will Douglas (2021), ‘Predictive policing is still racist—whatever data it uses’, MIT Technology Review, February 5.
Hunt, Joel (2019), ‘From Crime Mapping to Crime Forecasting: The Evolution of Place-Based Policing’, National Institute of Justice, July 10.
Hvistendahl, Maria (2016), ‘Can ‘predictive policing’ prevent crime before it happens?’, American Association for the Advancement of Science, September 26.
Kump, Paul, Alonso, David, Yang, Yongyi, Candella, Jamie, Lewin, Jonathan, and Wernick, Miles (2016), ‘Measurement of repeat effects in Chicago’s criminal social network’, Applied Computing and Informatics (Volume 12, Issue 2).
Lamb, Evelyn (2016), ‘Review: Weapons of Math Destruction’, Scientific American, August 31.
Lau, Tim (2020), ‘Predictive Policing Explained’, Brennan Center for Justice, April 1.
Linder, Courtney (2020), ‘Why Hundreds of Mathematicians Are Boycotting Predictive Policing’, Popular Mechanics, July 20.
Lorinc, John (2021), ‘From facial recognition, to predictive technologies, big data policing is rife with technical, ethical and political landmines’, Toronto Star, January 12.
Malamed, Samantha (2018), ‘Pa. officials spent 8 years developing an algorithm for sentencing. Now, lawmakers want to scrap it’, The Philadelphia Inquirer, December 12.
Meijer, Albert and Wessels, Martijn (2019), ‘Predictive Policing: Review of Benefits and Drawbacks’, International Journal of Public Administration (Volume 42, Issue 12).
Sassaman, Hannah (2020), ‘Covid-19 Proves It’s Time to Abolish ‘Predictive’ Policing Algorithms’, Wired, August 27.
Skeem, Jennifer and Lowenkamp, Christopher (2015), ‘Risk, Race, & Recidivism: Predictive Bias and Disparate Impact’, Government of the United States of America – Administrative Office of the U.S. Courts, November 8.
Stilgherrian (2021), ‘Predictive policing is just racist 21st century cyberphrenology’, The Full Tilt, January 27.
Strikwerda, Litska (2020), ‘Predictive policing: The risks associated with risk assessment’, The Police Journal, August 6.
Stroud, Matt (2014), ‘The minority report: Chicago’s new police computer predicts crimes, but is it racist?’, The Verge, February 19.
Sturgill, Kristi (2020), ‘Santa Cruz becomes the first U.S. city to ban predictive policing’, Los Angeles Times, June 26.
United States Sentencing Commission (2014), ‘Quick Facts Counterfeiting Offences’, United States Sentencing Commission Datafiles, 2010 through 2014.
Van Brakel, Rosamunde and de Hert, Paul (2011), ‘Policing, surveillance and law in a pre-crime society: Understanding the consequences of technology based strategies’, In E. De Pauw, P. Ponsaers, W. Bruggeman, P. Deelman, & K. Van der Vijver (Eds.), Technology-led policing, January.
Weaver, Vesla Mae (2018), ‘Why white people keep calling the cops on black Americans’, Vox, May 29.
Wright, Henry (2021), ‘Inequality in Stop and Search’, accessed 12 May 2021, [https://www.henrydwright.co.uk/unequal/].