Apple to scan images for child abuse

  • Thread starter: LiE
The association comes from the fact that, in gay men, it's more likely to be anal fisting than vaginal fisting.

It carries increased infection risks due to the risk of perforation, which is why it's asked about at sexual health screenings for gay men but not for straight men and women.

Straight people probably fist each other in higher absolute numbers than gay people, simply because of the difference in population size.


Like how white people are more likely to be “anything” in the UK than black people.

It's a man's vagina.
 
So again you want private employees looking at innocent pictures of people’s children.

Why would they need to do that? According to the technical document from Apple, an account requires multiple hits against the CSAM database before anything is flagged for human intervention.
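For what it's worth, here's a minimal sketch of that thresholding idea. The hash values, names and threshold number below are invented for illustration; as I understand Apple's published design, the threshold is actually enforced cryptographically on the server rather than with a plain counter like this.

```python
# Minimal sketch of threshold-gated flagging: nothing is surfaced for human
# review until an account has accumulated more matches against the known-CSAM
# hash set than the threshold allows. Hash values, names and the threshold
# number are invented, not Apple's.

KNOWN_CSAM_HASHES = frozenset({"9f3a1c", "b7c144", "04de9b"})  # stand-in hash set
MATCH_THRESHOLD = 30  # illustrative only

def count_matches(account_photo_hashes):
    """Count how many of an account's photo hashes appear in the known set."""
    return sum(1 for h in account_photo_hashes if h in KNOWN_CSAM_HASHES)

def needs_human_review(account_photo_hashes):
    """Only accounts that exceed the match threshold get human review."""
    return count_matches(account_photo_hashes) > MATCH_THRESHOLD
```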
 
Oh wow! Really?! :eek:

Not like legally.

I posted 'religious' circumcision pics, found via a Google search, on one of those 'free speech' sites, calling the practice out as a bad thing (i.e. the type that causes babies to die from STDs). The pics came from normal news articles calling out the practice, and the whole post was advocating for stopping it.

So I got banned for posting 'pedophilia'.
 
That isn't how the technology works though. The hash is a unique digital fingerprint for an exact photo that NCMEC have added to their hash database.

Matching a "unique digital fingerprint for an exact photo" is not what's being proposed. That's a misdirection at best and a lie at worst. I've no doubt a lot of people will fall for it. You probably have.

Note that Apple themselves have carefully not explicitly made the claim you have made about Apple's plan. They've strongly implied it (i.e. misdirection) but not stated it (i.e. a lie). I've no doubt Apple's lawyers are very well aware of the legal difference between misdirection and lying.

Here's one example of what Apple intend their plan to do:

Person A is 17. Person B is 17. A and B are both citizens of the UK, live in the UK and are in a relationship in the UK. A and B both have iPhones. A sends B a picture of themself in their underwear. A new picture. One they've just taken. One which they intend to be private between them and their partner. Under Apple's plan, Apple's algorithms would be scanning B's phone, would detect that photo, would detect it as sexual and would block it. They might inform B's parents. They might inform the police, since strictly speaking that would be child porn under UK law and probably also under US law (i.e. where Apple are based).

Another example of what Apple intend their plan to do:

Person A is 17. Person B is 17. A and B are friends. A is on holiday with their family somewhere hot. They're on a beautiful beach in bright sunshine in a hot place, enjoying their holiday. They're dressed appropriately for the circumstances - swimwear/beachwear and enough sunscreen taking into account how much melanin is in their skin. They take a selfie and send it to their friend, B. "Having a great time, look at this fabulous beach!" Under Apple's plan, Apple's algorithms would be scanning B's phone, would detect that photo, would detect it as sexual and would block it. They might inform B's parents. They would probably release the photo weeks or months later when an Apple employee was available to evaluate it and decided it wasn't sexual. Or maybe never, given how many images would be flagged under Apple's plans. There's no way Apple would employ enough people to evaluate the flagged images in a timely way.

That's under what Apple has publicly stated as being the current intention and scope of the plan. It doesn't even touch on intentions not currently made public or on future function creep. Just what Apple are currently publicly stating to be their intentions.
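To make the technical disagreement concrete: a cryptographic hash only matches a byte-for-byte identical file, whereas a perceptual hash, which is what Apple's technical summary describes under the name NeuralHash, is designed to keep matching after resizing, recompression or small edits. Here's a toy sketch of the difference; the "perceptual hash" below is a deliberately crude stand-in, not NeuralHash, and the pixel values are invented.

```python
import hashlib

def cryptographic_fingerprint(image_bytes):
    """Exact-file fingerprint: changes completely if even one byte changes."""
    return hashlib.sha256(image_bytes).hexdigest()

def toy_perceptual_hash(pixels):
    """Crude stand-in for a perceptual hash (NOT NeuralHash): each bit records
    whether a pixel is brighter than the image's average, so resizing,
    recompression or small edits tend to leave most bits the same."""
    average = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > average else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits; a small distance means 'looks like the same image'."""
    return bin(a ^ b).count("1")

# A re-saved or lightly edited copy gets a completely different cryptographic
# hash but a nearby perceptual hash, which is why "exact photo" is disputed.
original = [10, 200, 30, 180, 25, 190, 15, 210]   # invented pixel values
resaved  = [11, 198, 31, 182, 25, 191, 14, 209]

print(cryptographic_fingerprint(bytes(original)) ==
      cryptographic_fingerprint(bytes(resaved)))            # False
print(hamming_distance(toy_perceptual_hash(original),
                       toy_perceptual_hash(resaved)))       # 0 (or very small)
```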
 
Matching a "unique digital fingerprint for an exact photo" is not what's being proposed.

That’s what Apple are proposing. Can you point me to where they are lying and actually doing something different?
 
That’s what Apple are proposing. Can you point me to where they are lying and actually doing something different?

That's not what Apple are proposing. Can you point me to a statement from Apple saying that's what they are proposing and all that they're proposing? Because such a statement would be a lie and would be legally actionable. Other people might misunderstand Apple's proposals and thus not be lying about them, but Apple couldn't claim that.

Both the examples I gave are accurate under Apple's public statements about their intentions.
 
The association comes from the fact that, in gay men, it's more likely to be anal fisting than vaginal fisting.

It carries increased infection risks due to the risk of perforation, which is why it's asked about at sexual health screenings for gay men but not for straight men and women.

Where is the statistical data showing actual harm, though? Where are the hospital admissions? The legislation is supposed to prevent porn depicting serious injury, for which there appears to be no clinical evidence.

I don't know what clinics you have been to, but I have never been asked such a question at sexual health screenings.

This legislation is a blight on liberal democracy and is a tragic legacy of the last Labour government.
 
Not like legally.

I posted 'religious' circumcision pics, found via a Google search, on one of those 'free speech' sites, calling the practice out as a bad thing (i.e. the type that causes babies to die from STDs). The pics came from normal news articles calling out the practice, and the whole post was advocating for stopping it.

So I got banned for posting 'pedophilia'.
Wow!
 
I know, the true child abusers are the Jews and Muslims practicing such horrific acts! How can such religious fanatic groups expect to be accepted into modern western society when they brutalise children?

Shows you the priorities of these corporations.
 

If you're curious, do a Google search for 'metzitzah b'peh'.

You might end up on a list if you do though, I don't know.

Also while I was banned on one such site, I am aware of two others also banning users for it, claiming that it classifies as CP.

FYI, people can already freely post about this, including images of it, on Reddit, Facebook and Twitter (a Google search brings up images from all three), but you pretty much can't say much of anything else there.

Oh and ok, I just remembered my first permaban on Reddit was for writing 'We need to bring (naughty German man) back' in a thread showing an image of and complaining about this practice.

It's funny: the left-wing sites allow the pics but not discussion against the practice, while the right-wing sites claim to allow any discussion, but users who post the pics for the same reason, to rally people against it, get banned.

In summary, the internet is ****.
 
I know, the true child abusers are the Jews and Muslims practicing such horrific acts! How can such religious fanatic groups expect to be accepted into modern western society when they brutalise children?

Shows you the priorities of these corporations.


And ordinary Americans, who are routinely circumcised for appearance.
 
Apple has published an FAQ around this.

https://www.macrumors.com/2021/08/09/apple-faq-csam-detection-messages-scanning/

From the document, regarding CSAM:

CSAM detection

Does this mean Apple is going to scan all the photos stored on my iPhone?

No. By design, this feature only applies to photos that the user chooses to upload to iCloud Photos, and even then Apple only learns about accounts that are storing collections of known CSAM images, and only the images that match to known CSAM. The system does not work for users who have iCloud Photos disabled. This feature does not work on your private iPhone photo library on the device.

Will this download CSAM images to my iPhone to compare against my photos?

No. CSAM images are not stored on or sent to the device. Instead of actual images, Apple uses unreadable hashes that are stored on device. These hashes are strings of numbers that represent known CSAM images, but it isn’t possible to read or convert those hashes into the CSAM images they are based on. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations. Using new applications of cryptography, Apple is able to use these hashes to learn only about iCloud Photos accounts that are storing collections of photos that match to these known CSAM images, and is then only able to learn about photos that are known CSAM, without learning about or seeing any other photos.

Why is Apple doing this now?

One of the significant challenges in this space is protecting children while also preserving the privacy of users. With this new technology, Apple will learn about known CSAM photos being stored in iCloud Photos where the account is storing a collection of known CSAM. Apple will not learn anything about other data stored solely on device.

Existing techniques as implemented by other companies scan all user photos stored in the cloud. This creates privacy risk for all users. CSAM detection in iCloud Photos provides significant privacy benefits over those techniques by preventing Apple from learning about photos unless they both match to known CSAM images and are included in an iCloud Photos account that includes a collection of known CSAM.


Can the CSAM detection system in iCloud Photos be used to detect things other than CSAM?

Our process is designed to prevent that from happening. CSAM detection for iCloud Photos is built so that the system only works with CSAM image hashes provided by NCMEC and other child safety organizations. This set of image hashes is based on images acquired and validated to be CSAM by child safety organizations. There is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. As a result, the system is only designed to report photos that are known CSAM in iCloud Photos. In most countries, including the United States, simply possessing these images is a crime and Apple is obligated to report any instances we learn of to the appropriate authorities.

Could governments force Apple to add non-CSAM images to the hash list?

Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups. We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future. Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it. Furthermore, Apple conducts human review before making a report to NCMEC. In a case where the system flags photos that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

Can non-CSAM images be “injected” into the system to flag accounts for things other than CSAM?

Our process is designed to prevent that from happening. The set of image hashes used for matching are from known, existing images of CSAM that have been acquired and validated by child safety organizations. Apple does not add to the set of known CSAM image hashes. The same set of hashes is stored in the operating system of every iPhone and iPad user, so targeted attacks against only specific individuals are not possible under our design. Finally, there is no automated reporting to law enforcement, and Apple conducts human review before making a report to NCMEC. In the unlikely event of the system flagging images that do not match known CSAM images, the account would not be disabled and no report would be filed to NCMEC.

Will CSAM detection in iCloud Photos falsely flag innocent people to law enforcement?

No. The system is designed to be very accurate, and the likelihood that the system would incorrectly flag any given account is less than one in one trillion per year. In addition, any time an account is flagged by the system, Apple conducts human review before making a report to NCMEC. As a result, system errors or attacks will not result in innocent people being reported to NCMEC.
https://www.apple.com/child-safety/...s_for_Children_Frequently_Asked_Questions.pdf
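Reading the FAQ back, the on-device part boils down to a set-membership test on the iCloud Photos upload path, against one hash set shipped identically to every device. A minimal sketch of that flow, with invented hash values and function names; the private set intersection and threshold secret sharing that Apple's technical summary describes (so that individual matches below the reporting threshold aren't revealed to either the device or the server) are left out here.

```python
# Rough sketch of the flow the FAQ describes, not Apple's implementation:
# matching only happens when a photo is uploaded to iCloud Photos, against a
# hash set shipped with the OS, and the result travels with the upload rather
# than being acted on locally. The blinding cryptography is omitted.

SHIPPED_HASH_SET = frozenset({"9f3a1c", "b7c144", "04de9b"})  # same on every device

def upload_to_icloud_photos(photo_id, photo_hash, icloud_photos_enabled):
    """Return the metadata that would accompany an iCloud Photos upload,
    or None if iCloud Photos is disabled (in which case nothing is checked)."""
    if not icloud_photos_enabled:
        return None
    return {
        "photo_id": photo_id,
        "matched_known_hash": photo_hash in SHIPPED_HASH_SET,
    }
```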
 
I don't know if someone can answer this question, but by all accounts they are taking hashes of known bad images and comparing them to people's photos on iCloud. They are not "scanning" your photos for content; they are simply using an algorithm that has a database of hashes of known bad images and compares the hashes of your images against those. They are not doing any image processing per se, or any learning/guessing.

As someone else pointed out, however, this is in theory a fine idea. You can have whatever smut you want on your phone as long as it's not a known dangerous image. Those photos of yourself or your partner naked won't show up. The problem is that, depending on the country, that database could contain anything. If China decided they wanted to add all photos of Winnie the Pooh to the database, then they could. Anyone with that image in their iCloud would be flagged up and potentially arrested.

If that's the case, it doesn't matter what technology they are using; this should be illegal. Giving anyone the power to decide that any arbitrary image is "dangerous" is very, very bad. I don't know who I would trust to curate that database. Certainly not any government.
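To put that worry in code form: the matching step itself is completely agnostic about what a hash represents, so the behaviour of the whole system is determined by whoever curates the list. A toy illustration with made-up hash labels:

```python
# Toy illustration: the matcher has no notion of *what* the hashes represent.
# Swap the database and the same code flags a completely different category
# of images; the only safeguard is the curation of the hash list.

def flag_matches(photo_hashes, hash_database):
    """Return the subset of an account's photo hashes present in the database."""
    return [h for h in photo_hashes if h in hash_database]

csam_db = {"hash_of_known_csam_1", "hash_of_known_csam_2"}
expanded_db = csam_db | {"hash_of_banned_meme"}   # a curator quietly adds an entry

account_photos = ["hash_of_holiday_pic", "hash_of_banned_meme"]
print(flag_matches(account_photos, csam_db))       # []
print(flag_matches(account_photos, expanded_db))   # ['hash_of_banned_meme']
```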
 