Peer 2 Peer

Clearview AI is Very Problematic


Clearview AI is a facial recognition database commonly leveraged by law enforcement and governments to make tracking down wanted persons much easier. The massive, multi-billion-photo collection and surveillance tool is a ready-made, hacker-assembled catalog of global identities, designed to track down anyone from shoplifters to murderers. Sounds useful, right? If only the story ended there…

While law enforcement, from the FBI down to local police, finds this tool wildly helpful, Clearview AI is also an open gateway to massive racial profiling, and in that way it entrenches the already deep-rooted systemic racism that exists everywhere. Here is another problem: the founder of Clearview AI has strong ties to white supremacists. Shouldn't that give law enforcement some pause about using it?

For law enforcement, it’s a balance of what they regard as the greater evil. They consider the database provided by Clearview AI to be extremely useful, and in their eagerness to leverage it, they are likely to look the other way about its inconvenient origin. The racist origins of the system thus get buried and become part of the new, functional system; in other words, racism is baked into the recipe. It is also worth noting that the people who compile the photo databases and provide the service, alt-right extremist groups and phishers, are able to surveil the very investigations the database is used for. There is an obvious, fundamental error in law enforcement using a tool run by criminals.

“There is no clear line between who is permitted access to this incredibly powerful and incredibly risky tool and who doesn’t have access. There is not a clear line between law enforcement and non-law enforcement.” – Clare Garvie, senior associate at the Center on Privacy and Technology at Georgetown Law School

By definition, extremists and criminals operate outside the bounds of the law. Access to Clearview AI makes them aware of when they are being watched, and if they know they are being watched, they can find better ways to hide, making it easier to skirt authorities. Those who are not privy to this information have no such convenient advantage. If those who run Clearview AI have ways of evading it, then its targets will be those who do not, and in a lot of cases those targets (deservedly or not) end up being minorities.

Clearview AI is clearly a rampant violation of privacy; it should be banned everywhere and its database purged. Thankfully, following a privacy probe by Canadian officials, Clearview AI is no longer available in Canada, and the Royal Canadian Mounted Police are being investigated for utilizing it. It is very concerning that the police show so little concern about the privacy of the public, the very people they are paid to serve.

Police work is daunting; nobody denies it. Police, especially those in big cities, can't keep up with the influx of crime. One of the fastest ways to catch up is to remove procedural delays like identity checks and warrants from the process, and Clearview AI provides police with exactly that kind of cheat sheet in its Holy Grail of a database. So where does all this detailed information come from? It comes from hackers scraping websites and social media channels like Twitter and Facebook. Why are hackers scraping these sites for photos? Well, they're not, exactly: they are seeking identity information and credit card numbers. That information is tougher to obtain, but publicly viewable photos, most of them tagged with identifying names, are readily and easily available. It's almost surprising it took so long for someone to take advantage of them.
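To see how little effort harvesting name-tagged public photos takes, here is a minimal sketch using only Python's standard library. It pulls image URLs and their accompanying names out of a public HTML page; the markup and names below are made up for illustration, standing in for the kind of tagged photos social platforms expose.

```python
from html.parser import HTMLParser

class TaggedPhotoScraper(HTMLParser):
    """Collect (image URL, tagged name) pairs from <img src=... alt=...> tags."""

    def __init__(self):
        super().__init__()
        self.photos = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            # On many platforms the alt text carries the tagged person's name.
            if "src" in a and a.get("alt"):
                self.photos.append((a["src"], a["alt"]))

# A stand-in for a publicly viewable profile page (hypothetical markup).
page = """
<html><body>
  <img src="https://example.com/photos/1234.jpg" alt="Jane Doe">
  <img src="https://example.com/photos/5678.jpg" alt="John Smith">
  <img src="https://example.com/logo.png" alt="">
</body></html>
"""

scraper = TaggedPhotoScraper()
scraper.feed(page)
for url, name in scraper.photos:
    print(name, "->", url)
```

A real crawler would fetch pages over HTTP and follow links at scale, but the point stands: no special access, credentials, or exploit is needed to pair a face with a name when both sit on a public page.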

“Clearview’s database reportedly includes more than 3 billion images taken from around the web.”

Why aren’t social media platforms alerting their users to this privacy issue via their Terms of Service? Unfortunately, that isn’t what Terms of Service are for: they actually exist to protect social media outlets from liability for displaying users’ data publicly, not to protect our privacy. This means that if your information is scraped by hackers, leading to a false arrest that upends your life, the social media channel it came from is not liable, because you signed its Terms of Service by clicking that ‘I Agree’ button all those years ago.

This corrupt system is a finely tuned machine that needs to be stopped, and our best recourse is to decentralize social networks. Decentralization would mean that each social network connection is encrypted, leaving no large-scale, publicly available trove of content or photos for hackers to scrape. No one, aside from the people you decide to share with, would be able to see any of your personal content. When there is nothing to acquire by scraping, Clearview AI will have nothing to operate with.

Hate the idea of tech companies having access to your pictures and identity? Decentralize now!
