Law enforcement in America is at a crossroads. It is under heavy scrutiny for brutality, budgeting, systemic racism, and the data-driven prevention tools it deploys. Experts consider the last point a matter of grave concern because such tools can exacerbate existing racial and economic biases and quietly erode constitutional rights.

The debate

In the ongoing debate over the effectiveness of predictive policing, experts point out that the technology, though already widely deployed by police departments, has not been scientifically validated or peer-reviewed. Worse, the data it generates often puts officers in conflict with the communities they serve rather than in partnership with them. When first launched, the technology was supposed to save lives, but it has worked out otherwise, leading cities to reassess their policing contracts and policies.

Mathematicians, academics, and activists have been vocal about the inefficacy of this untested software. Many no longer want to collaborate with police departments, pointing out how the public fell for marketing hype from Silicon Valley tech companies. Some officers are beginning to agree, admitting they used the tools without understanding or questioning the skewed data behind them.

Understanding the technology

Predictive policing has been defined as the use of mathematical analytics and data to identify and deter potential criminal activity. When it was first rolled out, many departments that adopted the technology claimed lower crime statistics. Over time, however, the civil and constitutional risks became glaring, just as with automated facial recognition.

There are three primary types of predictive policing:

  • Place-based prediction
  • Person-based prediction
  • Group-based prediction

How does it work?

Predictive policing uses computer modeling and data to forecast crime and manage officer deployment by area. Algorithms analyze historical crime data to predict where certain crimes may occur. In principle, one would expect such data to be objective.
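To make the place-based idea concrete, here is a minimal, hypothetical sketch: rank city grid cells by their historical incident counts and flag the busiest ones for patrol. The data, cell names, and function are invented for illustration; commercial systems are far more elaborate, but the core logic of "forecast from past records" is the same.

```python
from collections import Counter

def hotspot_cells(incidents, top_n=3):
    """Rank grid cells by historical incident count (place-based sketch)."""
    counts = Counter(cell for cell, _ in incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

# Hypothetical historical records: (grid_cell, crime_type)
history = [
    ("A1", "burglary"), ("A1", "theft"), ("B2", "assault"),
    ("A1", "theft"), ("C3", "burglary"), ("B2", "theft"),
]

print(hotspot_cells(history, top_n=2))  # cells with the most past incidents
```

Note that the forecast is nothing more than a tally of what was previously recorded, which is exactly why critics say the output inherits whatever biases shaped the records.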

Unfortunately, because of the historical data they run on, all three types of predictive policing have been used to target low-income neighborhoods, poor people, minorities, and people involved in lower-level crimes. Continued use of predictive policing would amplify existing law-enforcement problems, a realization that has led some places to ban it.

Too early for comfort?

In a 2019 study, researchers from New York University examined this common fallacy. They showed that police data actually reflect the policies, practices, and biases of a given department, along with its political and financial accounting needs. In other words, biased policing skews the data, so the models trained on that data will be biased as well.
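The feedback loop described above can be sketched in a few lines: if patrols go wherever past incidents were recorded, and patrolling itself produces more recorded incidents, the model's "hotspots" harden regardless of the underlying crime rate. All numbers and names below are invented for illustration.

```python
# Feedback-loop sketch: two neighborhoods with the SAME true crime rate;
# one starts with more recorded incidents because it was historically
# over-patrolled. Patrols follow the records, and patrolling inflates them.
records = {"north": 10, "south": 20}  # historical recorded incidents
TRUE_RATE = 5  # actual offenses per period, identical in both areas

for period in range(10):
    # Deploy to whichever area has more recorded incidents.
    target = max(records, key=records.get)
    for area in records:
        # Patrolled area: most offenses get recorded; the other: few do.
        detection = 0.8 if area == target else 0.2
        records[area] += round(TRUE_RATE * detection)

print(records)  # the initially over-patrolled area pulls further ahead
```

Despite identical true crime rates, the gap between the two areas only widens, which is the self-reinforcing bias the NYU researchers warned about.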

Several studies and audits have found that data-driven policing solutions like predictive policing have not delivered the expected results or produced any measurable benefit. Outcomes depend heavily on how each department manages its record-keeping, data collection, and communication.

Intellectual boycott

Mathematicians at universities across the country have worked with police departments to build algorithms, analyze data, and conduct modeling that helps police monitor neighborhoods with higher crime rates. The probabilistic technology uses historical data to customize neighborhood coverage and streamline the use of police resources.

But according to a letter submitted to the American Mathematical Society on June 15, many now want to sever ties with police departments and boycott predictive-policing software projects.

Enacting the ban

After almost a decade of practice, California police departments appear to be backing away from predictive policing. They were among the first to adopt the technology but are now the first to enact bans on both predictive policing and facial recognition software.

Some are calling it the “George Floyd effect,” but the process began three years ago. Other cities like New York and Chicago are following suit.

The intelligence-driven technology is not itself harmful, but the way it is used now is. Mathematicians say that with proper implementation guidelines, transparency, public audits, and community buy-in, it could lead to better policing.