Checklist 176: Cloudiness Around Clearview AI
This week on the Checklist, we’ll take a closer look at a facial recognition company that’s making headlines for all the wrong reasons. Checklist 176 is all about privacy, security, and the little startup that may have stolen your face.
Clearview’s class action nightmare
Clearview AI is now facing a class action lawsuit filed on behalf of Illinois citizens — and as Facebook recently discovered, that can get expensive.
Clearview, in case you haven’t heard of it, is a startup that uses facial recognition technology to offer a face-matching service to private companies and law enforcement agencies.
The ethics of facial recognition are certainly debatable, but what has really rubbed people the wrong way is Clearview AI’s data collection practices. The company built its database of more than 3 billion images by scraping people’s photos from social media sites like Facebook, Twitter, and YouTube. The idea is that if police or private investigators have surveillance footage of a suspected criminal, they can check those images against Clearview AI’s database and try to find a match.
If that sounds creepy and wrong to you, you’re not alone. Facebook, Google, Twitter, and other companies have sent cease and desist letters to Clearview AI, and the startup has been the subject of numerous critical pieces in the press.
This week, however, things escalated for Clearview — in court. Lawyers behind the class action suit are arguing that the company’s actions violate the Illinois Biometric Information Privacy Act (BIPA). Among other things, the law states that companies must obtain an individual’s consent before collecting and storing their biometric data.
Clearview AI says that it is only exercising its First Amendment rights by collecting publicly available photos and sharing them with third parties — but the courts may disagree with this interpretation of the Constitution. Just a little while ago, an Illinois court ruled against Facebook in a similar case — one also based on the BIPA — with the proposed settlement set to cost the social media giant in excess of $500 million.
While this court case is still in its early stages, the story is important for anyone concerned with data privacy and the law, and definitely worth keeping an eye on.
So who’s buying all those faces?
Clearview AI suffered another public relations black eye last week, when it came out that the company had experienced a data breach.
The good news — if you can call it that — is that the breach didn’t involve any of the biometric data scraped and stored by the company. But while the general public wasn’t impacted, Clearview’s customers weren’t so lucky: someone was able to access the company’s client list, which has since been leaked to journalistic outlets.
The details of the client list are, to say the least, interesting. Perhaps unsurprisingly, over 600 law enforcement agencies seem to have had some relationship with Clearview in the past (including the Chicago Police Department, which apparently didn’t get the memo about the BIPA). Also making the list were agencies associated with the US Department of Justice, Immigration and Customs Enforcement (ICE), INTERPOL, and foreign governments. But Clearview’s clientele wasn’t limited to government bodies and law enforcement: a large number of private companies had used the service as well, including fitness centers, big-box electronics retailers, grocery chains, pharmacies, and department stores.
Evidently, there’s a market for what Clearview is selling — which is all the more reason to scrutinize the company’s security practices. But the company’s response to the incident doesn’t exactly inspire confidence.
Not much is known about what caused the breach, but a Clearview AI spokesperson released a statement saying that the company’s servers were never accessed and that the relevant vulnerability had already been patched, and remarked that “data breaches are part of life in the 21st century”.
This explanation has left many observers unsatisfied. For one thing, implying that data breaches are an unavoidable facet of modern life seems a bit fatalistic, especially coming from a company that has collected so many people’s personal data. Secondly, claiming both that the company’s servers were unaffected and that the company was able to patch the vulnerability seems just a little bit suspect. After all, if the servers were never accessed, where exactly was the vulnerability that got patched? This has led some to wonder whether the “vulnerability” in question may have been misconfigured cloud storage — which would indicate a serious lapse in cybersecurity practices at Clearview.
In the absence of more information, however, this is still just speculation. What is certain is that Clearview AI — and its clients — are worth watching.
Apple brings the banhammer
In yet another bit of bad news for Clearview AI, the company’s iPhone app was blocked by Apple last week. To be clear, the app wasn’t removed from the App Store — it was never in the App Store to begin with, which, as it turns out, was part of the problem.
Apple has an Enterprise Developer program that allows companies to create employees-only iPhone apps that can be distributed without going through the App Store. This can be useful for work-related iOS apps that a company wouldn’t need (or want) listed in the App Store for the general public to download.
But Apple’s Enterprise program is not intended to be a way around the App Store’s review process, and there are strict rules about how it can be used — rules which Apple takes very, very seriously. For example, using an Enterprise Developer certificate to distribute these special non-App Store apps to customers, rather than employees, is absolutely prohibited. When Facebook tried this, Apple temporarily revoked its certificate. This caused all of Facebook’s legitimate, employees-only apps to stop working, leading to chaos at Facebook HQ (apparently, people couldn’t even order their lunches, because the app they used to do so was out of service).
Clearview was found to have violated Apple’s Enterprise Developer policies, and thus their app was blocked. Journalists investigating the story found a beta version of the app in a publicly available cloud storage directory, with instructions for how to install it on an iPhone … which gives us a pretty good idea of what Apple took issue with. Clearview is said to be in discussions with Apple to resolve the matter.
All in all, it’s been a bad couple of weeks for Clearview AI. Time will tell what’s next for the company — and we’ll keep you posted on these stories as they develop.
That’s it for this week’s Checklist, but if you’d like to learn more about security and privacy issues as you wait for the next show, take a look into our vault! All past shows are archived at SecureMac.com/Checklist — and each show has full written notes if you prefer reading to listening. If you want to reach out with a question or suggestion, feel free to write us at Checklist@SecureMac.com.