Apple to Roll Out Photo Scanning to Detect Child Abuse Images
Apple announced it will begin checking photos for images of child sexual abuse before they are uploaded to its iCloud storage service. If such images are detected, the company will report the material.
Apple said that the ability to detect child sexual abuse material (CSAM) is one of several features it is implementing to protect children from online harm in its iOS 15, iPadOS 15, watchOS 8, and macOS Monterey updates later this year. Other features will include blocking potentially sexually explicit images sent and received by children’s iMessage accounts, and interventions when users attempt to search for CSAM-related terms using Apple’s Siri and Search functions.
“At Apple, our goal is to create technology that empowers people and enriches their lives—while helping them stay safe,” Apple said in a press release. “We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of CSAM.”
Many cloud services, including Dropbox, Google, and Microsoft, “already scan user files for content that might violate their terms of service or be potentially illegal, like CSAM,” TechCrunch reported. “But Apple has long resisted scanning users’ files in the cloud by giving users the option to encrypt their data before it ever reaches Apple’s iCloud servers.
“Apple said its new CSAM detection technology—NeuralHash—instead works on a user’s device, and can identify if a user uploads known child abuse imagery to iCloud without decrypting the images until a threshold is met and a sequence of checks to verify the content are cleared.”
Law enforcement agencies and the National Center for Missing and Exploited Children (NCMEC) maintain databases of known CSAM images that are converted into hashes—codes that can be used to identify the image but not reconstruct it.
“When a user uploads an image to Apple’s iCloud storage service, the iPhone will create a hash of the image to be uploaded and compare it against the database,” according to Reuters. “Photos stored only on the phone are not checked, Apple said.”
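The matching step described above can be sketched in a few lines of Python. This is an illustration only: Apple's NeuralHash is a perceptual hash whose details are not fully public, and the real protocol compares hashes cryptographically rather than in the clear, so SHA-256 and the sample database below are stand-ins.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for NeuralHash: the real system uses a perceptual hash,
    # which produces similar codes for visually similar images.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of hashes of known CSAM images, as maintained by
# NCMEC. The hashes identify an image but cannot reconstruct it.
known_hashes = {image_hash(b"known-image-1"), image_hash(b"known-image-2")}

def check_before_upload(image_bytes: bytes) -> bool:
    """Hash the image on-device and compare against the known database."""
    return image_hash(image_bytes) in known_hashes

print(check_before_upload(b"known-image-1"))   # matches the database -> True
print(check_before_upload(b"vacation-photo"))  # no match -> False
```

Only images bound for iCloud pass through this check; per the article, photos stored solely on the phone are not hashed or compared.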
Apple said it will conduct a manual review of each report of CSAM detected by its NeuralHash method; if the review confirms the report, Apple will disable the user’s account and send a report to the NCMEC and potentially law enforcement. Users who feel that there has been a mistake will be able to file an appeal to have their account reinstated.
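The reporting pipeline the article describes (threshold of matches, then human review, then account action and a report to the NCMEC) might be modeled as below. The threshold value is hypothetical; Apple said only that "a threshold" must be met before any content is decrypted and reviewed.

```python
from dataclasses import dataclass

MATCH_THRESHOLD = 30  # hypothetical figure; the article does not state one

@dataclass
class Account:
    match_count: int = 0
    disabled: bool = False

def record_match(account: Account) -> str:
    """Count a hash match; escalate only once the threshold is crossed."""
    account.match_count += 1
    if account.match_count < MATCH_THRESHOLD:
        return "no action"       # below threshold: nothing is decrypted
    return "manual review"       # a human verifies before any report

def manual_review(account: Account, confirmed: bool) -> str:
    """Model the human-review outcome described in the article."""
    if not confirmed:
        return "false positive: no report"
    account.disabled = True      # account disabled on confirmation
    return "report to NCMEC"     # and potentially law enforcement
```

The threshold plus manual review is what keeps an isolated false hash match from triggering a report, and the appeal process gives users a path to reinstatement if the review itself errs.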
Along with rolling out NeuralHash, Apple is also releasing an update that will warn children and their parents when they receive or send sexually explicit photos. Apple said it will use on-device machine learning to analyze image attachments to determine if the image is sexually explicit.
“When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo,” Apple said in a press release. “As an additional precaution, the child can also be told that, to make sure they are safe, their parents will get a message if they do view it. Similar protections are available if a child attempts to send sexually explicit photos. The child will be warned before the photo is sent, and the parents can receive a message if the child chooses to send it.”
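The iMessage flow Apple describes can be summarized as a small decision function. The classifier here is a placeholder for Apple's unpublished on-device model; the action names are paraphrases of the press release, not API calls.

```python
from typing import Callable, List, Union

def handle_incoming_attachment(
    image: bytes,
    is_explicit: Callable[[bytes], bool],  # stand-in for the on-device model
    parental_alerts_enabled: bool,
) -> Union[str, List[str]]:
    """Sketch of the child-safety flow for a received iMessage attachment."""
    if not is_explicit(image):
        return "display"  # non-explicit images are shown normally
    # Explicit content: blur it and warn the child, who still chooses
    # whether to view it; analysis happens entirely on the device.
    actions = ["blur photo", "warn child", "offer helpful resources"]
    if parental_alerts_enabled:
        actions.append("tell child that parents will be notified if viewed")
    return actions
```

A symmetric check applies when a child attempts to send an explicit photo: the warning comes before sending, and parents can be messaged if the child proceeds.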
The new Apple feature has received praise from the NCMEC, with NBC News reporting that NCMEC President and CEO John Clark called the new provisions a game-changer.
“With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material,” Clark said.
Apple’s moves also received endorsement from some cryptographers, including Benny Pinkas, a cryptographer at Israel’s Bar-Ilan University, who reviewed the system and said it solves the very challenging problem of detecting photos with CSAM content while keeping all other photos encrypted and private.
"I believe that the Apple PSI system provides an excellent balance between privacy and utility, and will be extremely helpful in identifying CSAM content while maintaining a high level of user privacy and keeping false positives to a minimum," Pinkas wrote.
Seems like Apple's idea of doing iCloud abuse detection with this partially-on-device check only makes sense in two scenarios: 1) Apple is going to expand it to non-iCloud data stored on your devices or 2) Apple is going to finally E2E encrypt iCloud? https://t.co/zQeVbAekOo — Andy Greenberg (@a_greenberg), August 5, 2021
Others, however, have been less enthusiastic—calling Apple’s decision to adopt this new measure a “slippery slope” that could lead to further privacy violations through surveillance.
“I’m not defending child abuse. But this whole idea that your personal device is constantly locally scanning and monitoring you based on some criteria for objectionable content and conditionally reporting it to the authorities is a very, very slippery slope,” said Nadim Kobeissi, a cryptographer and founder of software firm Symbolic Software, in an interview with WIRED. “I definitely will be switching to an Android phone if this continues.”
In a Twitter thread, cryptographer and associate professor at Johns Hopkins Information Security Institute Matthew Green wrote that with the update Apple is sending a “very clear signal” that it is “safe to build systems that scan users’ phones for prohibited content. That’s the message they’re sending to governments, competing services, China, you.”
So I wrote this previous thread in a hurry and didn’t take time to spell out what it means, and what the background is. So let me try again. https://t.co/OkCgSrApXk — Matthew Green (@matthew_d_green), August 5, 2021
“Whether they turn out to be right or wrong on the point hardly matters,” Green added. “This will break the dam—governments will demand it from everyone. And by the time we find out it was a mistake, it will be way too late.”