Predictive Criminology / Policing

Predictive policing refers to the use of mathematical models, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity. Predictive policing methods fall into four general categories: methods for predicting crimes, methods for predicting offenders, methods for predicting perpetrators' identities, and methods for predicting victims of crime.

The technology has been described in the media as a revolutionary innovation capable of "stopping crime before it starts". However, a RAND Corporation report on implementing predictive policing technology describes its role in more modest terms:

Predictive policing methods are not a crystal ball: they cannot foretell the future. They can only identify people and locations at increased risk of crime ... the most effective predictive policing approaches are elements of larger proactive strategies that build strong relationships between police departments and their communities to solve crime problems.

In November 2011, TIME Magazine named predictive policing as one of the 50 best inventions of 2011, using the term "pre-emptive policing". In the United States, the practice of predictive policing has been implemented by police departments in several states such as California, Washington, South Carolina, Alabama, Arizona, Tennessee, New York, and Illinois.

Predictive policing uses data on the times, locations, and nature of past crimes to give police strategists insight into where and when patrols should be deployed, or a presence maintained, in order to make the best use of resources and have the greatest chance of deterring or preventing future crimes. This kind of policing detects signals and patterns in crime reports to anticipate whether crime will spike, when a shooting may occur, where the next car will be broken into, and who the next crime victim will be. Algorithms built from these factors can quickly analyze large amounts of data, weighing many variables at once to produce an automated forecast, which makes the process faster than manual analysis.

The predictions an algorithm generates are then coupled with a prevention strategy, which typically sends an officer to the predicted time and place of the crime. Automated predictive policing offers a more accurate and efficient process when looking at future crimes, because there is data to back up decisions rather than just the instincts of police officers. With this information, police can anticipate the concerns of communities, allocate resources wisely across times and places, and work to prevent victimization. Predictive policing is an extension of hot spot policing, an approach that has proved effective and promising in decreasing crime and offenses, and that tends to focus on urban locations, or small areas in general, where crime is high.
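To make the mechanics concrete, here is a minimal sketch of the counting logic behind basic hot spot mapping, assuming incident records that carry only a timestamp and a grid-cell location. The `CrimeRecord` type and `hotspot_scores` function are illustrative names, not part of any deployed system:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import datetime

# Illustrative record type; real systems ingest far richer incident data.
@dataclass
class CrimeRecord:
    timestamp: datetime
    cell: tuple  # (row, col) index of a grid cell laid over the city

def hotspot_scores(records, hour_block=4):
    """Count past incidents per (grid cell, time-of-day block).

    The highest-scoring pairs suggest where and when to schedule patrols;
    this is the simple counting logic behind basic hot spot mapping.
    """
    scores = Counter()
    for r in records:
        block = r.timestamp.hour // hour_block  # e.g. block 5 = 20:00-23:59
        scores[(r.cell, block)] += 1
    return scores.most_common()

# Example: two burglaries in the same cell during the same evening block.
records = [
    CrimeRecord(datetime(2015, 3, 1, 21, 30), (4, 7)),
    CrimeRecord(datetime(2015, 3, 8, 22, 10), (4, 7)),
    CrimeRecord(datetime(2015, 3, 2, 9, 5), (1, 2)),
]
print(hotspot_scores(records)[0])  # (((4, 7), 5), 2): cell (4, 7), evening
```

Real deployments layer far more on top of this (decay over time, covariates, spatial smoothing), but the input is the same: where and when past crimes occurred.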

Police may also use data accumulated on shootings and the sounds of gunfire to identify the locations of shootings. The city of Chicago blends data from population mapping, crime statistics, and weather to improve monitoring and identify patterns. PredPol, founded in 2012 by a UCLA professor, is one of the market leaders among predictive policing software companies. Its algorithm is built on the near-repeat model, which holds that if a crime occurs in a specific location, the properties and land surrounding it are at elevated risk of subsequent crime. The algorithm takes into account crime type, crime location, and the date and time of the crime in order to calculate predictions of future crime occurrences. Another software program used for predictive policing is Operation LASER, deployed in Los Angeles in an attempt to reduce gun violence. LASER was discontinued in 2019 for a number of reasons, most notably inconsistencies in how it labeled people. Some other police departments have likewise discontinued their use of such programs, citing the racial biases and ineffective methods associated with them. While the idea behind the predictive policing model can be helpful in some ways, it has always had the potential to technologically reiterate social biases, inevitably reinforcing pre-existing patterns of inequality.
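A rough illustration of the near-repeat idea follows. This is a toy sketch, not PredPol's actual algorithm: it assumes each past crime contributes risk to nearby locations, decaying exponentially in both elapsed time and distance, and the scale parameters are invented for illustration:

```python
import math
from datetime import datetime

def near_repeat_risk(location, now, past_crimes,
                     time_scale_days=7.0, dist_scale=0.2):
    """Toy near-repeat risk score: every past crime contributes risk
    that decays exponentially with elapsed time and with distance from
    the original site.

    `past_crimes` is a list of (x, y, datetime) tuples; the scale
    parameters are illustrative, not calibrated values from any vendor.
    """
    x, y = location
    risk = 0.0
    for cx, cy, t in past_crimes:
        days = (now - t).total_seconds() / 86400.0
        if days < 0:
            continue  # ignore events after the evaluation time
        dist = math.hypot(x - cx, y - cy)
        risk += math.exp(-days / time_scale_days) * math.exp(-dist / dist_scale)
    return risk

# A burglary two days ago, half a distance unit away, still weighs heavily:
past = [(0.0, 0.0, datetime(2015, 3, 1))]
print(near_repeat_risk((0.1, 0.0), datetime(2015, 3, 3), past))  # ~0.46
```

Scoring every grid cell this way and flagging the top few produces the kind of daily "boxes on a map" output these systems hand to patrol officers.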

In 2008, Police Chief William Bratton at the Los Angeles Police Department (LAPD) began working with the acting directors of the Bureau of Justice Assistance (BJA) and the National Institute of Justice (NIJ) to explore the concept of predictive policing in crime prevention. In 2010, researchers proposed that it was possible to predict certain crimes, much like scientists forecast earthquake aftershocks.
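The aftershock analogy has a concrete mathematical form. A minimal sketch, using the self-exciting point process framework from the seismology literature that this line of research drew on; the symbols below are standard notation, not values from any specific deployment:

```latex
% Self-exciting ("aftershock") model of event rates: the conditional
% intensity lambda(t) is a background rate mu plus a contribution from
% every past event at time t_i that decays as time passes.
\[
  \lambda(t) \;=\; \mu \;+\; \sum_{t_i < t} g(t - t_i),
  \qquad g(s) = \alpha\,\omega e^{-\omega s}
\]
% mu:    background rate of events (crimes) in an area
% g:     triggering kernel -- each event temporarily raises the rate,
%        with alpha the expected number of follow-on events it spawns
%        and omega controlling how quickly the excess risk decays.
```

Under this reading, a burglary plays the role of a mainshock: it does not just mark a risky place, it temporarily raises the expected rate of further events nearby, exactly the near-repeat pattern described above.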

In 2009, the NIJ held its first predictive policing symposium. At the event, Kristina Rose, acting director of the NIJ, said that the Shreveport, Los Angeles, D.C. Metropolitan, New York, Chicago, and Boston police departments were interested in implementing a predictive policing program. Today, predictive policing programs are used by police departments in several U.S. states, including California, Washington, South Carolina, Arizona, Tennessee, New York, and Illinois. Predictive policing programs have also been implemented in the UK and Europe, for example by Kent County Police and in the Netherlands.

From 2012, the New Orleans Police Department (NOPD) engaged in a secretive collaboration with Palantir Technologies in the field of predictive policing. According to James Carville, he was the impetus for the project and "no one in New Orleans even knows about this".

In China, the Suzhou Police Bureau has used predictive policing since 2013. Between 2015 and 2018, several other Chinese cities adopted predictive policing. China has used predictive policing to identify and target people to be sent to Xinjiang re-education camps.

In 2020, the Fourth Circuit Court of Appeals handed down a decision finding predictive policing to be a law-enforcement tool that amounted to nothing more than reinforcement of a racist status quo. The court also held that granting the government an exigent-circumstances exemption in the case would be a broad rebuke of the landmark Terry v. Ohio decision, which set the standard for lawful stops and searches. Predictive policing, typically applied to so-called "high crime areas", "relies on biased input to make biased decisions about where police should focus their proactive efforts", and without it police are still able to fight crime adequately in minority communities.

Effectiveness

The effectiveness of predictive policing has been tested through multiple studies with varying findings. In 2015, the New York Times published an article that analyzed predictive policing's effectiveness, citing numerous studies and explaining their results.

A study conducted by the RAND Corporation found no statistical evidence that crime was reduced when predictive policing was implemented. The study notes that prediction is only half of the picture; carefully executed human action is the other half. Both the prediction and the response depend heavily on the reliability of the input data: if the data is unreliable, the effectiveness of predictive policing can be disputed.

Another study, conducted by the Los Angeles Police Department (LAPD) in 2010, found the accuracy of predictive policing to be twice that of the department's existing practices. In Santa Cruz, California, the implementation of predictive policing over a six-month period resulted in a 19 percent drop in the number of burglaries. In Kent, 8.5 percent of all street crime occurred in locations predicted by PredPol, beating the 5 percent rate achieved by police analysts.
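Figures like the Kent comparison (8.5 percent versus 5 percent) are hit rates: the share of crimes that fell inside the locations a method flagged in advance. A minimal sketch of that computation, with invented data for illustration:

```python
def hit_rate(crime_cells, flagged_cells):
    """Fraction of recorded crimes that occurred inside the flagged cells.

    `crime_cells` lists the grid cell of each recorded crime;
    `flagged_cells` is the set of cells a method marked as high risk
    for the same period.
    """
    flagged = set(flagged_cells)
    hits = sum(1 for cell in crime_cells if cell in flagged)
    return hits / len(crime_cells)

# Toy comparison in the spirit of the Kent figures: the same crimes
# scored against two different sets of predicted locations.
crimes = [(0, 0), (0, 1), (2, 3), (4, 4), (0, 0), (1, 1)]
print(hit_rate(crimes, {(0, 0), (1, 1)}))  # 0.5 on this toy data
print(hit_rate(crimes, {(2, 3)}))          # ~0.17 for a weaker prediction
```

A higher hit rate only shows that flagged areas captured more crime; it says nothing by itself about whether sending patrols there prevented any.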

A study from the Max Planck Institute for Foreign and International Criminal Law, evaluating a three-year pilot of the Precobs (pre-crime observation system) software, concluded that no definite statements could be made about the software's efficacy. The pilot project was scheduled to enter a second phase in 2018.

A particular strategy of predictive policing, hot spot policing, has had a positive effect on crime. Evidence provided by the National Institute of Justice shows that this method has decreased the frequency of multiple offense types, including violent and drug- and alcohol-related offenses. However, without careful execution and sufficient data, the method can perpetuate implicit bias and racial profiling.

According to the RAND Corporation study, the quality of data used for predictive policing can be severely compromised by data censoring, systematic bias, and lack of relevance. Data censoring is the omission of crime data from certain areas. Systematic bias can arise when the data records that a certain number of crimes occurred but not when they took place. Relevance refers to how useful the data actually is for driving predictions.

Documentation of these deficiencies has been linked to ineffective and discriminatory policing. One data collection effort, reported in "The Disproportionate Risks of Driving While Black", showed that black drivers were significantly more likely to be stopped and searched while driving. Such biases can be fed into the algorithms used to implement predictive policing and lead to higher levels of racial profiling and disproportionate arrests.
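The feedback mechanism can be illustrated with a toy simulation, much simpler than the published critiques it echoes: two areas with identical true crime rates, where patrols are allocated in proportion to previously recorded crime. All rates and counts below are invented for illustration:

```python
import random

random.seed(0)

# Assumption for illustration: areas A and B have IDENTICAL true crime
# rates, but A starts with more *recorded* crime (e.g. past over-policing).
TRUE_RATE = 0.3   # chance a crime actually happens in an area on a given day
DETECTION = 0.9   # chance a patrol that is present records a crime

recorded = {"A": 5, "B": 1}   # seed counts: the "historical data"

for day in range(1000):
    # Feedback step: patrols are allocated in proportion to recorded crime.
    total = recorded["A"] + recorded["B"]
    patrol_prob = {area: count / total for area, count in recorded.items()}
    for area in ("A", "B"):
        crime_happens = random.random() < TRUE_RATE
        patrolled = random.random() < patrol_prob[area]
        if crime_happens and patrolled and random.random() < DETECTION:
            recorded[area] += 1

# Recorded counts diverge sharply even though the true rates are equal:
# the system keeps "confirming" the bias it was trained on.
print(recorded)
```

Because crimes are only recorded where patrols are sent, the model never observes the evidence that would correct its initial skew; this is the feedback loop critics of biased training data describe.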

According to the RAND study, the effectiveness of predictive policing depends on input data that is high in both quality and quantity. Without sufficient data, predictive policing produces inaccurate and harmful outcomes. The study also notes that predictive policing is inaccurately billed as the "end of crime"; in reality, its effectiveness depends fundamentally on the tangible action taken in response to predictions.

Criticisms

A coalition of civil rights groups, including the American Civil Liberties Union and the Electronic Frontier Foundation, issued a statement criticizing the tendency of predictive policing to proliferate racial profiling. The ACLU's Ezekiel Edwards argues that such software is more accurate at predicting policing practices than at predicting crimes.

Some recent research is also critical of predictive policing. Kristian Lum and William Isaac examined the consequences of training such systems with biased datasets in 'To predict and serve?'. Saunders, Hunt, and Hollywood demonstrate that in practice the statistical significance of the predictions verges on negligible.

In a comparison of predictive policing methods and their pitfalls, Logan Koepke concludes that it is not yet the future of policing but 'just the policing status quo, cast in a new name'.

In a testimony made to the NYC Automated Decision Systems Task Force, Janai Nelson, of the NAACP Legal Defense and Educational Fund, urged NYC to ban the use of data derived from discriminatory or biased enforcement policies. She also called for NYC to commit to full transparency on how the NYPD uses automated decision systems, as well as how they operate.

According to an article published by the Royal Statistical Society, 'the algorithms were behaving exactly as expected – they reproduced the patterns in the data used to train them', and 'even the best machine learning algorithms trained on police data will reproduce the patterns and unknown biases in police data'.

In 2020, following protests against police brutality, a group of mathematicians published a letter in Notices of the American Mathematical Society urging colleagues to stop work on predictive policing. Over 1,500 other mathematicians joined the proposed boycott.

Some applications of predictive policing have targeted minority neighborhoods and lack feedback loops.

Cities throughout the United States are enacting legislation to restrict the use of predictive policing technologies and other “invasive” intelligence-gathering techniques within their jurisdictions.

Following the introduction of predictive policing as a crime-reduction strategy, using the results of an algorithm created with the PredPol software, the city of Santa Cruz, California experienced a decline in burglaries of almost 20 percent during the program's first six months. Despite this, in late June 2020, in the aftermath of the killing of George Floyd in Minneapolis, Minnesota and amid growing calls for increased accountability among police departments, the Santa Cruz City Council voted in favor of a complete ban on the use of predictive policing technology.

Accompanying the ban on predictive policing was a similar prohibition of facial recognition technology. Facial recognition technology has been criticized for its reduced accuracy on darker skin tones, which can contribute to cases of mistaken identity and, potentially, wrongful convictions.

In 2019, Michael Oliver of Detroit, Michigan was wrongfully accused of larceny when the DataWorks Plus software registered his face as a "match" to the suspect identified in a video taken by the victim of the alleged crime. Oliver spent months going to court to argue his innocence; once the judge supervising the case viewed the video footage of the crime, it was clear that Oliver was not the perpetrator. In fact, the perpetrator and Oliver did not resemble each other at all, except that both are African-American, a group for which facial recognition technology is more likely to make identification errors.

With regard to predictive policing technology, the mayor of Santa Cruz, Justin Cummings, is quoted as saying, "this is something that targets people who are like me", referencing the patterns of racial bias and discrimination that predictive policing can perpetuate rather than stop.

As Dorothy Roberts explains in her academic journal article "Digitizing the Carceral State", the data entered into predictive policing algorithms to predict where crimes will occur, or who is likely to commit criminal activity, tends to contain information that has been shaped by racism. For example, the inclusion of arrest or incarceration history, neighborhood of residence, level of education, membership in gangs or organized crime groups, and 911 call records, among other features, can produce algorithms that drive the over-policing of minority and low-income communities.