Apple to scan images for child abuse

  • Thread starter: LiE
Surely this is only a step away from people having their doors booted in at silly o'clock in the morning because some div at Apple deemed the holiday photos of their kids playing on the beach to be 'dodgy'.

Apple don’t decide on the photos. The database of hash codes is provided by child protection agencies.
 
Please point out where Apple have told people WHO will be making the checks, because nothing Apple have released says ANYTHING about using a mystery specialist.

Exactly, so let’s wait for details. They are already working closely with NCMEC so I wouldn’t be surprised if these “Apple” employees are no different in training and vetting to NCMEC employees. But let’s wait and see.
 
So this means Apple employees will get to see people's random photos which get picked up by accident by this scan.

There’s a threshold, so nothing happens unless you somehow get pinged for a number of false positives, and Apple says the chance of even one false positive is 1 in a trillion.
 
Apple says a lot of things, and most of it is BS. There is no way they know how many false positives there will be. It WILL be far higher than that, though.

The first time they switch it on they will likely get flooded with them.

So Apple don’t know, but you somehow do? Maybe they should hire you :p
 
The AI is instructed to look for "visually similar" images, so a tot in a bath vs a tot in a bath - one has been abused, one hasn't - the AI can't tell the difference so it'll flag the image. How many "tots in a bath" images do you think parents have across the world?

Where does it say it’s doing an AI context search on photos?

From everything I’ve read, it’s done by comparing hashes of your photos against a database of known CSAM hashes. There is some AI involved so that known CSAM images that have been tweaked (cropped, colours adjusted, etc.) still produce a matching hash.
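For what it's worth, here is a minimal sketch of what that kind of comparison could look like, assuming a perceptual hash and a small Hamming-distance tolerance for tweaked copies. The hash values, bit width and threshold below are all invented for illustration; Apple's actual NeuralHash matching hasn't been published in this form.

```python
# Rough sketch of hash-list matching with a small tolerance for tweaked
# images. The hash values, the bit width and the distance threshold are
# all made up for illustration; Apple's NeuralHash and its matching rules
# are not public in this form.

KNOWN_HASHES = {0b1011011001011100, 0b0110100111110000}  # placeholder 16-bit hashes
MAX_DISTANCE = 2  # how many bits may differ and still count as a match

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hash values."""
    return bin(a ^ b).count("1")

def matches_known_hash(photo_hash: int) -> bool:
    """True if the photo's hash is identical or very close to a known hash."""
    return any(hamming_distance(photo_hash, known) <= MAX_DISTANCE
               for known in KNOWN_HASHES)

print(matches_known_hash(0b1011011001011110))  # True: one bit away from a known hash
print(matches_known_hash(0b0000000000000000))  # False: nothing remotely similar
```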
 
How easy is it to fake an image hash?

Like, can I make a picture of something (or just random static) that has a similar hash to a known abuse image?


Could be an ace way to get access to anyone’s phone, or to discredit them: just send them an innocuous image that flags the system.

Have a read of this: https://inhope.org/EN/articles/what-is-image-hashing

Image hashing is the process of using an algorithm to assign a unique hash value to an image. Duplicate copies of the image all have the exact same hash value. For this reason, it is sometimes referred to as a 'digital fingerprint'.
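To make the "digital fingerprint" idea concrete, here's a minimal sketch using an ordinary cryptographic hash: byte-for-byte copies of a file always get the same value, and the value can't be turned back into the image. SHA-256 is used purely for illustration; the hash sets used against CSAM, such as PhotoDNA, are perceptual hashes rather than plain file hashes.

```python
# Minimal sketch of the "digital fingerprint" idea: identical copies of a
# file always produce the same hash, and the hash cannot be reversed back
# into the image. SHA-256 is used purely for illustration; CSAM hash sets
# such as PhotoDNA use perceptual hashes rather than plain file hashes.

import hashlib

def file_fingerprint(path: str) -> str:
    """Return a hex digest that uniquely identifies this exact file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Two byte-for-byte copies give the same fingerprint; changing even one
# pixel (or one byte of metadata) gives a completely different value.
```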


How is image hashing used in the fight against Child Sexual Abuse Material?
Hashing is a powerful tool used by hotlines, Law Enforcement, Industry and other child protection organisations in the removal of Child Sexual Abuse Material (CSAM). This is because it enables known items of CSAM to be detected and removed without requiring them to be assessed again by an analyst.

Because we know that once CSAM exists online it is often shared thousands of times, using hashing technology has an enormous impact. It reduces the workload and emotional stress for analysts and law enforcement of reviewing the same content repeatedly, and reduces the harm to the victim by minimizing the number of people who witness the abuse.


What about if the image is edited?
In earlier versions of hashing technology, if an image underwent very minor alterations, such as being cropped or changed to black and white, then each edited version of the image would be assigned a different hash value. This made using the technology to help remove known CSAM much less effective.

However, in 2009, Microsoft in collaboration with Dartmouth College developed PhotoDNA. PhotoDNA uses hash technology but with the added ability that it 'recognises' when an image has been edited so still assigns it the same hash value. Learn more about PhotoDNA here.
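PhotoDNA itself is proprietary, but a generic "difference hash" gives a feel for how a perceptual hash can survive minor edits. The sketch below is an illustration in that spirit, not PhotoDNA: the hash only looks at brightness gradients on a shrunken greyscale copy, so converting the image to black and white leaves it unchanged and small edits only flip a few bits, which a Hamming-distance comparison then tolerates.

```python
# Illustrative "difference hash" (dHash), not PhotoDNA itself, which is
# proprietary. The hash is built from brightness gradients on a shrunken
# greyscale copy, so converting the image to black and white leaves it
# unchanged and small edits only flip a few bits. Requires Pillow.

from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at `path`."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits  # compare two hashes with a Hamming distance, not equality
```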


Does image hashing affect my privacy?
No. Many platforms use hash technology to detect known CSAM in order to remove it from their platforms. This does not violate users' privacy because the technology only detects matching hashes and does not 'see' any images which don't match the hash. Hash values are also not reversible, so cannot be used to recreate an image.

Learn more about how Artificial Intelligence is used in the fight against CSAM here.
 
Context - there are hundreds of images on Google Images of ... never mind, I don't think I'd be allowed to discuss it here.

I take an image from google of said practice and say 'this is bad and needs to be banned'.

And apparently I get called a pedo.

The images that would flag are those in the NCMEC CSAM hash database. I would be surprised if images found on Google were in this database, especially since the majority of images online are already being scanned using this technique (PhotoDNA).
 
A parent takes a photo of their underage child; it triggers a hash match and is reviewed by an Apple employee.

the employee quickly goes “nope just a family photo not abuse” and dismisses it.

Does that child, when they become an adult, have the right to sue Apple and the employee for viewing them naked without their consent?

That isn't how the technology works though. The hash is a unique digital fingerprint for an exact photo that NCMEC have added to their hash database.
 
Apple has published an FAQ around this.

https://www.macrumors.com/2021/08/09/apple-faq-csam-detection-messages-scanning/

From the document, on CSAM detection:

CSAM detection

Does this mean Apple is going to scan all the photos stored on my iPhone?

No. By design, this feature only applies to photos that the user chooses to upload to iCloud Photos, and even then Apple only learns about accounts that are storing collections of known CSAM images, and only the images that match to known CSAM. The system does not work for users who have iCloud Photos disabled. This feature does not work on your private iPhone photo library on the device.

Will this download CSAM images to my iPhone to compare against my photos?

No. CSAM images are not stored on or sent to the device. Instead of actual images, Apple uses unreadable hashes that are stored on device. These hashes are strings of numbers that represent known CSAM images, but it isn’t possible to read or convert those hashes into the CSAM images they are based on. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations. Using new applications of cryptography, Apple is able to use these hashes to learn only about iCloud Photos accounts that are storing collections of photos that match to these known CSAM images, and is then only able to learn about photos that are known CSAM, without learning about or seeing any other photos.
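Reading that answer as pseudocode, the claimed behaviour is roughly the sketch below. Every name and number is invented for illustration; Apple's real design uses NeuralHash, blinded hashes and threshold secret sharing, which a plain set lookup like this does not reproduce.

```python
# Hypothetical sketch of the flow the FAQ answer describes: only photos the
# user chooses to upload to iCloud Photos are checked against the on-device
# set of opaque hash strings, and matches are only surfaced for human review
# once their count reaches a threshold. Every name and value is invented;
# Apple's real design uses blinded hashes and threshold secret sharing,
# which this plain lookup does not reproduce.

KNOWN_HASHES = {"a3f9", "77bd", "c01e"}  # placeholder opaque hash strings
MATCH_THRESHOLD = 30                     # placeholder threshold

def hashes_for_review(photo_hashes: list[str], icloud_photos_enabled: bool) -> list[str]:
    """Return matched hashes only if iCloud Photos is on and the threshold is met."""
    if not icloud_photos_enabled:
        return []  # iCloud Photos disabled: the system does not run at all
    matched = [h for h in photo_hashes if h in KNOWN_HASHES]
    # Below the threshold, nothing about the account or its photos is learned.
    return matched if len(matched) >= MATCH_THRESHOLD else []
```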

Why is Apple doing this now?

One of the significant challenges in this space is protecting children while also preserving the privacy of users. With this new technology, Apple will learn about known CSAM photos being stored in iCloud Photos where the account is storing a collection of known CSAM. Apple will not learn anything about other data stored solely on device.

Existing techniques as implemented by other companies scan all user photos stored in the cloud. This creates privacy risk for all users. CSAM detection in iCloud Photos provides significant privacy benefits over those techniques by preventing Apple from learning about photos unless they both match to known CSAM images and are included in an iCloud Photos account that includes a collection of known CSAM.


Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?

Our process is designed to prevent that from happening. CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations. There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. As a result, the system is only designed to report photos that are known CSAM in iCloud Photos. In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities.

Could governments force Apple to add non-CSAM images to the hash list?

Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

Can non-CSAM images be “injected” into the system to flag accounts for things other than CSAM?

Our process is designed to prevent that from happening. The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by child safety organizations. Apple does not add to the set of known CSAM image hashes. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under our design. Finally, there is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. In the unlikely event of the system flagging images that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

Will CSAM detection in iCloud Photos falsely flag innocent people to law enforcement?

No. The system is designed to be very accurate, and the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year. In addition, any time an account is flagged by the system, Apple conducts human review before making a report to NCMEC. As a result, system errors or attacks will not result in innocent people being reported to NCMEC.
https://www.apple.com/child-safety/...s_for_Children_Frequently_Asked_Questions.pdf
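Apple haven't published the maths behind the "one in one trillion" figure, but a back-of-the-envelope calculation shows how a match threshold drives the account-level number down. In the sketch below the per-photo false-match rate, the library size and the threshold are all assumptions, and a Poisson approximation stands in for the exact binomial.

```python
# Back-of-the-envelope illustration (not Apple's published analysis) of why
# requiring many matches before review makes an accidental account flag
# astronomically unlikely. The per-photo false-match rate, library size and
# threshold are assumptions; a Poisson approximation is used for the count
# of false matches across the library.

from math import exp, factorial

def prob_account_flagged(n_photos: int, p_false: float, threshold: int) -> float:
    """P(at least `threshold` false matches), Poisson approximation."""
    lam = n_photos * p_false                       # expected false matches
    return sum(exp(-lam) * lam**k / factorial(k)   # truncated upper tail
               for k in range(threshold, threshold + 50))

# e.g. 10,000 photos, a 1-in-a-million false match per photo, threshold of 30:
print(prob_account_flagged(10_000, 1e-6, 30))      # ~4e-93, far below 1 in a trillion
```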
 
Just out of interest, how does Google handle images uploaded to Google Photos? We know they analyse the entire image and not just a hash. Do they scan for known CSAM?

Google, MS etc. are already doing this with their cloud services. MS provide an API for it called PhotoDNA.

Apple have taken their time and implemented a number of additional layers to protect user privacy.
 