Innovation is key to modern policing. By leveraging technology, law enforcement can keep communities safer. But a pressing question is whether this technology is being used responsibly.
Calls for police reform are now prompting companies and institutions to reconsider this high-tech infrastructure. Civil liberties groups and activists argue that some of this technology perpetuates police brutality and racial injustice.
Big tech companies are pulling back: IBM has announced it is getting out of the facial recognition business, Amazon has placed a one-year moratorium on police use of its facial recognition service, and Microsoft has said it will not sell the technology to police departments until federal regulation is in place.
Revisiting police tech
Many statisticians and tech companies are backing away from assisting law enforcement amid calls to end predictive policing, and some police departments are retiring these technologies themselves. One of the first such moves came in California, where the city of Santa Cruz banned both facial recognition software and predictive policing after George Floyd’s killing.
Predictive policing uses artificial intelligence and machine learning algorithms trained on past crime data to identify potential offenders and victims and to forecast which areas should be policed more heavily. Given the systemic racism and history of brutality in U.S. policing, critics have warned about the racial disparities and biases in the police records and crime data that form the basis of this software.
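To make the critics’ feedback-loop argument concrete, here is a minimal sketch in Python. The areas, arrest counts, and patrol model are entirely hypothetical and deliberately simplified, but they illustrate the dynamic: a model that forecasts “hot spots” from past arrests keeps sending patrols to wherever the most arrests were recorded, which in turn generates more arrests there.

```python
# Hypothetical simulation of the feedback loop critics describe: if
# historical arrest counts reflect past over-policing rather than true
# crime rates, forecasting from those counts reinforces the bias.

import random

random.seed(42)

# Two areas with an IDENTICAL true offense rate; area "B" starts with
# more recorded arrests only because it was patrolled more in the past.
TRUE_OFFENSE_RATE = 0.10
arrest_history = {"A": 20, "B": 60}  # biased starting data

def forecast_hotspot(history):
    """Naive 'predictive policing': patrol wherever past arrests are highest."""
    return max(history, key=history.get)

def simulate_patrols(history, rounds=50, patrols_per_round=100):
    for _ in range(rounds):
        hotspot = forecast_hotspot(history)
        # Arrests are only recorded where officers are actually present,
        # so the predicted hotspot accumulates nearly all new data points.
        for area in history:
            presence = 0.9 if area == hotspot else 0.1
            patrols = int(patrols_per_round * presence)
            history[area] += sum(
                1 for _ in range(patrols) if random.random() < TRUE_OFFENSE_RATE
            )
    return history

print(simulate_patrols(dict(arrest_history)))
# Typical output: {'A': ~70, 'B': ~510} -- the gap widens every round
# even though both areas offend at the same underlying rate.
```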
Controversy and unease
U.S. tech firms that serve law enforcement thus find themselves in a difficult position. The spotlight on racial inequities is burning brighter than ever, and police technology has landed right in the middle of it. Under this scrutiny, data and police tech companies are evaluating ways to root out the biases and oversights in their technologies. Many admit that, though their goal was to solve problems and help the police, their technology has created problems for communities.
The formation of Axon’s ethics board in 2018 was an early example of the unease these technologies and their uses created. Facial recognition systems developed on datasets of mostly white faces produced racial disparities in accuracy, along with privacy concerns for everyone.
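Disparities like this only surface when results are broken out by demographic group rather than averaged into a single score. Below is a minimal sketch of that kind of disaggregated evaluation; the match results are entirely hypothetical, standing in for what would, in practice, come from a labeled benchmark dataset annotated with demographic groups.

```python
# Hypothetical per-group accuracy report: the aggregate number hides
# exactly the disparity that disaggregated evaluation exposes.

from collections import defaultdict

# (demographic_group, was_match_correct) pairs -- hypothetical results
results = [
    ("group_1", True), ("group_1", True), ("group_1", True), ("group_1", False),
    ("group_2", True), ("group_2", False), ("group_2", False), ("group_2", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, ok in results:
    total[group] += 1
    correct[group] += ok

for group in sorted(total):
    print(f"{group}: accuracy = {correct[group] / total[group]:.0%}")
# Aggregate accuracy here is 50%, which conceals that group_1 sees 75%
# accuracy while group_2 sees only 25%.
```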
Axon has focused on gathering diverse, objective input from civil rights advocates and privacy lawyers as well as law enforcement. Understanding how the biases or oversights in a product play out in practice is vital for the next generation of police tech.
Yet some advocates of police reform are wary of defunding the police outright. Instead, they want to revisit police tech that currently bases its predictions on “historic decisions” made by police and others. Just as important is continuing to question the efficacy of police technology to ensure accountability and oversight.
Ushering in change
It’s crucial for police tech innovators to be open and transparent about what they build and how it works; their responsibility starts at the design phase. DataMade recently admitted that its work had contributed to irresponsible policing, acknowledging that it had presented biased historical data uncritically in the past. It has since announced that it will not build tools or technology that support policing or incarceration.
Other tech entrepreneurs recognize that changing demographics and the protests have cast artificial intelligence in predictive policing in a different light. They aim to take this into account and develop new technology that engages with the broader public rather than only with police. From drones and facial recognition to license plate readers and predictive algorithms, policing technology has infringed on privacy and constitutional protections.
Now, tech companies are focusing on building front-end accountability for police and reducing threats to civil liberties. Lawmakers, too, are looking to data and technology to identify officers with patterns of misconduct and to improve overall accountability.
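As a minimal sketch of what such an analysis might look like, the snippet below flags officers whose complaint counts far exceed their department’s average. The data and the three-times-average cutoff are illustrative assumptions, not an established standard or any agency’s actual method.

```python
# Hypothetical early-warning analysis: flag officers whose sustained
# misconduct complaints are well above the department average.

from statistics import mean

# officer_id -> number of sustained complaints (hypothetical data)
complaints = {"officer_01": 1, "officer_02": 0, "officer_03": 2,
              "officer_04": 1, "officer_05": 9, "officer_06": 0}

avg = mean(complaints.values())  # ~2.17 for this data

# The 3x-average threshold is an illustrative assumption, not a standard.
flagged = {officer: count for officer, count in complaints.items()
           if count > 3 * avg}
print(flagged)  # {'officer_05': 9}
```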