You may have seen “Minority Report,” the 2002 film based on the Philip K. Dick story, which anticipates governmental use of technology to stop murders before they happen. Today this is known as pre-crime, and it didn’t take until 2054, the year the film imagines, for contemporary technology to become embedded in unconstitutional policing in the form of facial recognition (FR) technology.

Now, the Electronic Privacy Information Center (EPIC) and 40 supporting organizations have called for swift action banning FR before it becomes a banal aspect of our daily lives.

FR facilitates a pre-crime mentality. The latest tech is marketed to U.S. law enforcement agencies by companies like Clearview AI, which a recent New York Times report exposed. The company has scraped upwards of 3 billion photos from social media to build a lightning-fast searchable database that presumably helps police solve crimes.

Six hundred police departments currently use the FR service.

The software compares faces captured in real time against stored images to identify people. This serves police goals: a murderer caught on camera, for example, can be apprehended more swiftly.
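To make the mechanics concrete, here is a minimal illustrative sketch of how this kind of matching generally works: a face is reduced to a numeric “embedding” vector and compared against a gallery of stored embeddings, with a tunable threshold deciding what counts as a match. The names and numbers below are hypothetical placeholders, not Clearview’s or any agency’s actual system.

```python
# Illustrative sketch only: generic embedding-based face matching.
# The embeddings are assumed to come from some face-embedding model;
# nothing here reflects any vendor's real API or parameters.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray,
               gallery: dict[str, np.ndarray],
               threshold: float = 0.6):
    """Return the best-matching identity from a stored gallery, or None.

    `gallery` maps identities to precomputed embeddings of stored images
    (mugshots, scraped photos); `threshold` controls how readily the
    system declares a "match" -- and therefore how often it errs.
    """
    best_id, best_score = None, -1.0
    for identity, stored in gallery.items():
        score = cosine_similarity(probe, stored)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else None
```

The threshold is where the trade-off lives: set it low and innocent people get flagged as matches; set it high and the system misses, which is exactly the error margin the civil liberties concerns below turn on.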

EPIC supports a global moratorium on FR, and the EU is considering a temporary ban, even as London police and U.S. federal agencies explore its applications. The federal Privacy and Civil Liberties Oversight Board could recommend suspending FR, a move that would influence how police and intelligence agencies engage the technology.

One of the more egregious FR uses comes from Customs and Border Protection's (CBP) Biometric Entry-Exit program. EPIC’s July 2019 memo to DHS outlines CBP’s use of FR to monitor, surveil, harass, arrest, and deport travelers at international airports. This is done using CBP and private-vendor software that has not escaped “technical and operational challenges.”

Mistaken identity due to technical failure is a massive civil liberties concern, only exacerbated by existing discrimination against women and people with darker skin tones. Gender and racial profiling are issues here, since an algorithm can approximate, but not reliably determine, a suspect’s gender or racial identity. Imagine an already intimidating arrest made worse by a misidentification, because algorithms cannot adequately capture the substance behind an image.

Amazon licenses its own Rekognition software, which gained early negative press for falsely matching 28 members of Congress against mugshot photos. Training data skewed toward white faces, assembled by largely white technical teams, is an ongoing problem that could be improved but will persist alongside discrimination.

Clare Garvie, of Georgetown Law School's Center on Privacy and Technology, explains that computers make the same mistakes people do: “...people have a harder time recognizing faces of another race and that ‘cross-race bias’ could be spilling into artificial intelligence. Then there are challenges dealing with the lack of color contrast on darker skin, or with women using makeup to hide wrinkles or wearing their hair differently…”

In the wrong hands, profiling computer systems that share police agency biases can amount to false arrests, incarceration, and deportation — all because of shiny new technology.

Pro-civil liberties groups oppose perfecting an imperfect-by-design technology. If recognition algorithms become more accurate, with a smaller margin of ID errors for targeted populations, then what? Is a fair version of FR even possible? Even then, there is a built-in bias toward rearresting people with prior records, since their mugshots are already in the databases being searched.

Under the FR regime, suspicion trumps social engagement, and the world becomes a stage set for innocent and guilty people.

As if racial profiling, police brutality, and the post-9/11 surveillance state weren’t already thick with violations, FR extends police control of public space to your front doorstep, too. Ring is a camera and a doorbell, paired with apps that encourage sharing footage with police.

Imagine getting pressured to share footage with police because your doorstep camera caught a carjacking. Imagine saying no to their request for your Ring footage. Imagine company employees sharing your data with police without asking, trading it with them for favors. A seemingly innocent security measure quickly becomes a surveillance state weapon of choice.

FR is a Big Tech-sponsored backlash against growing anti-mass incarceration forces that have been rolling back drug war era policies, changing felony charges to misdemeanors, establishing early release protocols, eliminating cash bail, and funding pretrial diversion programs with rehabilitation and social service support.

When FR targets protesters, they respond by wearing masks. Under these new technologies, daily life itself becomes a protest in defense of the Fourth Amendment’s prohibition of unreasonable search and seizure.

FR creates a surveillance and harassment climate, even when AI bias is corrected. The EU’s General Data Protection Regulation (GDPR) reins in egregious harvesting, but FR should be abolished, since regulations will favor police department usage and companies will continue to secretly sell and use it.

Meanwhile, a newly proposed Senate FR regulation bill is being criticized for its soft, FR-friendly approach. Containment is the name of the game here, as a presumably democratic process is used to set boundaries around the violations rather than stop them.

Police in Gainesville, Florida; Chicago; and New York City face lawsuits over their use of the software, while New Jersey’s attorney general has advised state police to cease using FR.

Maybe it’s time to abolish FR before it permanently abolishes our individualized sense of public space.