SecureMac, Inc.

Checklist 242: Expanded Protections and Pushback

August 12, 2021

Apple is introducing CSAM detection in iOS to help protect children. But is on-device scanning a threat to our privacy?

Apple wants to keep children safe with on-device CSAM detection and other safety features. But are they opening up a Pandora’s box of privacy threats? We’ll discuss what happens on your device, the law of unintended consequences, and the pushback Apple is getting from privacy advocates.

What happens on your device

Apple is making three changes designed to fight child sexual exploitation and limit the spread of Child Sexual Abuse Material (CSAM). These changes will soon roll out across Apple’s entire ecosystem.

Here’s a brief explanation of each one:

Guidance in Siri and Search

The first change provides “expanded guidance” in Siri and Search. Once the update rolls out, Apple’s search tools will offer detailed help to people who want to report CSAM to the authorities.

In addition, Search and Siri will intervene if a user tries to look for CSAM. They won’t allow the search to proceed, and will instead tell the user why their interest in CSAM is harmful, and offer them resources to get help.

Communication tools

The second change has to do with communication. Messages will now warn children and parents when sexually explicit images are sent or received.

If a child is about to receive an inappropriate image, Messages will blur it out and give the child a warning along with resources to get help. The app will also let the child know that it’s OK not to view the image, and tell them that their parents will be informed if they do view it.

If a child attempts to send a sexually explicit image, they’ll receive a warning, and their parents will get an alert if the child does decide to send it.

OK, so how does Messages know which images are sexually explicit and which aren’t? Apple says:

Messages uses on-device machine learning to analyze image attachments and determine if a photo is sexually explicit. The feature is designed so that Apple does not get access to the messages.

According to Apple, this feature will only be offered to accounts that have been set up as families in iCloud on newer OS versions (iOS 15, iPadOS 15, and macOS Monterey). In addition, it is opt-in, and has limits based on a child’s age: parental notifications are only available for children aged 12 and under, so parents of children between the ages of 13 and 17 can’t ask to receive notifications about inappropriate images.
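For readers who think in code, here’s a minimal Swift sketch of the decision flow described above. It’s purely illustrative — the types and functions (ExplicitImageClassifier, ChildAccount, handleIncomingImage, and so on) are invented for this example and are not Apple APIs — but it captures the essentials: an on-device classifier flags the image, the child sees a blurred preview with a warning, and a parental notification is only in play when the family has opted in and the child is 12 or under.

```swift
// Illustrative sketch only -- these types and functions are NOT Apple APIs.
// They model the decision flow described above: on-device classification,
// a blurred preview with a warning, and parental notification only when
// the feature is opted in and the child is 12 or under.

import Foundation

struct ChildAccount {
    let age: Int
    let parentOptedIntoNotifications: Bool
}

enum MessageImageAction {
    case showNormally
    case blurWithWarning(notifyParentsIfViewed: Bool)
}

/// Stand-in for the on-device machine learning model; Apple does not
/// publish this interface, so the signature here is hypothetical.
protocol ExplicitImageClassifier {
    func isSexuallyExplicit(_ imageData: Data) -> Bool
}

func handleIncomingImage(_ imageData: Data,
                         for child: ChildAccount,
                         using classifier: ExplicitImageClassifier) -> MessageImageAction {
    // Classification happens entirely on the device; nothing is sent to Apple.
    guard classifier.isSexuallyExplicit(imageData) else {
        return .showNormally
    }

    // Parental notifications are only available for younger children whose
    // parents have opted in; 13- to 17-year-olds never trigger a notification.
    let notifyParents = child.parentOptedIntoNotifications && child.age <= 12
    return .blurWithWarning(notifyParentsIfViewed: notifyParents)
}
```

The important design point, as Apple describes it, is that both the classification and the decision happen entirely on the device; Apple never sees the message or the image.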

CSAM detection on iOS and iPadOS

This final upcoming change is the one that’s receiving all the attention in the media — and the one that has the privacy experts worried.

This is how Apple describes it:

[N]ew technology in iOS and iPadOS will allow Apple to detect known CSAM images stored in iCloud Photos. This will enable Apple to report these instances to the National Center for Missing and Exploited Children (NCMEC). [The Center] acts as a comprehensive reporting center for CSAM and works in collaboration with law enforcement agencies across the United States.

To be clear, this CSAM detection feature doesn’t mean that Apple is looking at your iCloud photos. Instead, iOS and iPadOS will use a new system of “on-device matching” to check whether photos match known CSAM before they can be uploaded to iCloud. Here’s how it works.

Apple starts with a database of hashes of known CSAM images provided by NCMEC — numeric fingerprints derived from the images that can’t be used to reconstruct them. Apple transforms that database into an unreadable set of hashes, which is then stored on a user’s device.

Before a photo is uploaded to iCloud Photos, the device computes a hash of that photo and compares it against the on-device database of CSAM hashes. The device then creates a “cryptographic safety voucher” that encodes the result of the matching test along with other data about your image. All of that is uploaded to iCloud together with the photo.

If the system determines that a user’s photos match known CSAM hashes from the NCMEC database, Apple takes action. The matches are double-checked by an actual human being, and if they are confirmed, Apple disables the user’s account and files a report with NCMEC.
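To make that flow concrete, here’s a heavily simplified Swift sketch of the matching step. Apple’s real system uses a perceptual hash (so that visually similar images match) plus additional cryptography that this sketch doesn’t attempt to reproduce; the plain SHA-256 lookup and the KnownHashDatabase and SafetyVoucher types below are stand-ins invented purely for illustration.

```swift
// Heavily simplified sketch of the on-device matching flow described above.
// Apple's real system uses perceptual hashing plus additional cryptography;
// the plain SHA-256 lookup and the types below are invented for illustration.

import CryptoKit
import Foundation

/// Hypothetical stand-in for the hashed CSAM database stored on the device.
struct KnownHashDatabase {
    let hashes: Set<Data>

    func contains(_ photoHash: Data) -> Bool {
        hashes.contains(photoHash)
    }
}

/// Hypothetical stand-in for the "cryptographic safety voucher" that is
/// uploaded to iCloud alongside each photo.
struct SafetyVoucher {
    let matchedKnownCSAM: Bool
    let photoMetadata: [String: String]
}

/// Computes a fingerprint of the photo and records the match result.
/// (A real perceptual hash would match visually similar images, not just
/// byte-identical ones as SHA-256 does.)
func prepareForUpload(photoData: Data,
                      metadata: [String: String],
                      database: KnownHashDatabase) -> SafetyVoucher {
    let photoHash = Data(SHA256.hash(data: photoData))
    return SafetyVoucher(matchedKnownCSAM: database.contains(photoHash),
                         photoMetadata: metadata)
}
```

According to Apple’s technical documentation, the real system goes further than this sketch: the match result is encrypted inside the voucher so that the device itself doesn’t learn the outcome, and Apple can only read the vouchers once an account crosses a threshold of matches — at which point the human review described above comes into play.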

This feature is not opt-in: it’s enabled for everyone who uses iCloud Photos. However, a MacRumors piece has Apple on record as saying that “it cannot detect known CSAM images if the iCloud Photos feature is turned off”.

The law of unintended consequences

Apple’s new child safety features seem like a good thing. We all want to protect children, and stop child predators from spreading CSAM.

So why would anyone object to what Apple is doing?

The simple answer is this: If Apple’s new cryptographic detection tool can be used to detect CSAM on a device, it can be used to detect other things too. As privacy watchdog Electronic Frontier Foundation (EFF) puts it in a recent blog post:

All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan … That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change.

In other words, while the current system is only focused on CSAM images, what’s to stop it from being used to flag objectionable political imagery on a device, or text expressing anti-government sentiments?

The other significant thing about Apple’s system is that it’s different from almost everything else that’s out there. As some media observers have noted, tech companies like Facebook, Twitter, Microsoft, and Google have been scanning images for years in an attempt to stop the spread of CSAM. But Apple’s system is different, in that it performs the scanning directly on your device instead of in the cloud.

“We will refuse”

Privacy advocates are worried that Apple’s well-intentioned tech may one day be turned against ordinary users who have nothing to do with CSAM. They’re voicing that concern in the form of an open letter to Apple — and asking the company to stop its rollout of on-device content monitoring.

The letter quotes a number of security and privacy experts. Sarah Jamie Lewis of the Open Privacy Research Society sums up her concerns over Apple’s new CSAM detection tool as follows:

If Apple are successful in introducing this, how long do you think it will be before the same is expected of other providers? Before [the] walled-garden prohibit[s] apps that don’t do it? Before it is enshrined in law? How long do you think it will be before the database is expanded to include “terrorist” content? “harmful-but-legal” content? state-specific censorship?

Sound farfetched? Security and privacy researcher Nadim Kobeissi doesn’t think so — and points to Apple’s own track record in some countries as evidence:

Apple sells iPhones without FaceTime in Saudi Arabia, because local regulation prohibits encrypted phone calls. That’s just one example of many where Apple’s bent to local pressure. What happens when local regulations in Saudi Arabia mandate that messages be scanned not for child sexual abuse, but for homosexuality or for offenses against the monarchy?

Apple, however, says that this will never happen. In an apparent acknowledgement of its critics’ concerns, the company posted an FAQ in which it answered the question “Could governments force Apple to add non-CSAM images to the hash list?”

Apple’s answer is worth printing in full here:

Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

That sounds good … for the time being. But governments around the world now know that it’s technically possible for Apple to scan a device for illicit content. What happens if one of those governments decides to change its laws, and Apple is legally obligated to comply with a request to scan for other types of content? Will they really refuse, and risk having to exit that market, and give up millions or billions of dollars in sales? Skeptics scoff at the thought, citing Apple’s problematic history in China, or its willingness to help U.S. law enforcement access users’ iMessage backups.

On the other hand, Apple has successfully balanced user privacy with its legal obligations for many years now — and has demonstrated its willingness to fight back in court against government overreach. It’s unclear how this change will impact user privacy in the long term, but optimists may be willing to give Apple the benefit of the doubt.

No matter what you think of Apple’s latest move, this story is an important one to watch. We’ll be doing just that — and bringing you additional developments as they come up.

Do you have a topic you’d like to see discussed on The Checklist? Write to us and let us know!

For past show audio and notes, visit our archive.
