Apple to scan images for child abuse

  • Thread starter: LiE
None, as that’s not how it works. It’s been covered a lot in this thread already.

As Apple themselves said, "if it's a 'nearly identical or visually similar' hash match it'll be checked" - not an identical match, just a nearly identical or visually similar hash match. That means there's an AI doing the work (called NeuralHash), that the vast majority of hits will NOT be identical matches, and that every result will still need a human to check. Speaking of which...

You make it sound like some regular employee will be reviewing them. That definitely won’t be the case.

Please point out where Apple have told people WHO will be making the checks, because nothing Apple have released says ANYTHING about using a mystery specialist, highly trained staff who have a mental health support system behind them ready to deal with the trauma of viewing actual child porn. So where are you getting your info?

All Apple have said is that after manual review by its "employees" the results will be passed to the National Center for Missing and Exploited Children (NCMEC), a US-only private non-profit organisation with no law enforcement powers.

Apple are boasting that their AI will only produce a false positive once in a trillion times per year, and I can absolutely 100% say that'll be proven wrong within a year or two - the real false positive rate will be far higher, at which point Apple will tweak the AI to produce fewer hits, making the whole system virtually worthless anyway. Even Apple, who say "one in a trillion", also provide a method for users to claim that the one-in-a-trillion AI has made a mistake and locked an account in error, and to appeal to get the lock removed - by which point the account has already been passed to the NCMEC.

I really recommend you read this - https://www.apple.com/child-safety/pdf/CSAM_Detection_Technical_Summary.pdf - which describes just how loosely Apple have set their hash targets, so that they include anything an AI thinks looks similar to abuse images.
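
For anyone who wants a feel for what a perceptual hash actually does, here's a toy sketch in Python. To be clear, this is not Apple's NeuralHash (which runs a neural network over the image before hashing) - it's just a crude "average hash" over an 8x8 greyscale grid, invented here to show why two visually similar images can end up with the same, or nearly the same, hash.

```python
# Toy "average hash": NOT Apple's NeuralHash, just an illustration of why
# perceptual hashes give visually similar images the same (or nearby) values.

def average_hash(pixels):
    """pixels: 8x8 list of greyscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)  # 1 bit per pixel: above/below mean
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Two "images": the second is the first with its brightness nudged up slightly.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [[min(255, p + 10) for p in row] for row in original]

h1, h2 = average_hash(original), average_hash(tweaked)
print(hamming_distance(h1, h2))  # tiny (here zero) distance -> treated as the "same" image
```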
 
Please point out where Apple have told people WHO will be making the checks, because nothing Apple have released says ANYTHING about using a mystery specialist

Exactly, so let’s wait for details. They are already working closely with NCMEC so I wouldn’t be surprised if these “Apple” employees are no different in training and vetting to NCMEC employees. But let’s wait and see.
 
So this means Apple employees will get to see people's random photos which get picked up by accident by this scan.

There’s a threshold, so nothing gets reviewed unless you somehow rack up a number of false positives - and Apple says the chance of even one false positive is 1 in a trillion.
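
To put rough numbers on the threshold argument, here's a back-of-the-envelope sketch. The per-image false-match rate and the threshold used below are invented for illustration - Apple hasn't published either figure - the point is only that requiring several independent false matches before anything is reviewed multiplies small probabilities together.

```python
# Binomial back-of-the-envelope: chance an account hits the review threshold
# purely by accident. The per-image false-match rate (p) and the threshold
# are made-up placeholders, not Apple's real numbers.

from math import comb

def prob_flagged(n_photos, p, threshold, tail_terms=50):
    """Approximate P(at least `threshold` false matches among n_photos).

    Truncated binomial tail; later terms are negligible when p is small.
    """
    top = min(n_photos, threshold + tail_terms)
    return sum(
        comb(n_photos, k) * p**k * (1 - p)**(n_photos - k)
        for k in range(threshold, top + 1)
    )

# Assumed numbers, purely for illustration:
print(prob_flagged(n_photos=20_000, p=1e-6, threshold=1))   # one accidental match: roughly a 2% chance
print(prob_flagged(n_photos=20_000, p=1e-6, threshold=10))  # ten accidental matches: vanishingly small
```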
 
There’s a threshold, so nothing gets reviewed unless you somehow rack up a number of false positives - and Apple says the chance of even one false positive is 1 in a trillion.

Apple says a lot of things, most of it is BS. There is no way they know how many false positives there will be. It WILL be far higher than that though.

The first time they switch it on they will likely get flooded with them.
 
Apple says a lot of things, most of it is BS. There is no way they know how many false positives there will be. It WILL be far higher than that though.

The first time they switch it on they will likely get flooded with them.

So Apple don’t know, but you somehow do? Maybe they should hire you :p
 
I thought in the US only the police and the designated reporting centre (NCMEC) have an exemption to view and store CP images.

They should be forwarding suspected images directly to the police to view. They probably won't do that because I suspect millions of images are going to appear from these scans. We already know that when they arrest one of these paedos they usually have thousands of images across their devices. I think the largest collection one person had was over a million images.

I'm just reading the PDF document @ianh posted. It is hilarious how much these businessmen try to stifle a conversation with overly technical language. Images can already be matched without Apple's NeuralHash. Most of the document is aimed at making a case that privacy won't be disturbed. But we know it will be, by the very nature of what they are trying to do.

According to Apple all of its users are suspected pedos until proven otherwise.
 
Surely this is only a step away from people having their doors booted in at silly o'clock in the morning because some div at Apple deemed the holiday photos of their kids playing on the beach to be 'dodgy'.

It literally happened years ago in the UK, when someone had a film developed at Boots and an employee shopped the person to the police because they had photos of their own kids playing out in the back garden in a paddling pool.
 
I assume the chances of an innocuous photo having the same hash value as a known indecent photo will be very low and the reviewer will confirm 2 things - does the uploaded photo really match the known indecent photo? Is the uploaded photo in itself indecent? And then the authorities will review before deciding whether to prosecute.

I support the move and I hope other cloud services follow suit (if they don’t already do this kind of thing).
 
I assume the chances of an innocuous photo having the same hash value as a known indecent photo will be very low and the reviewer will confirm 2 things - does the uploaded photo really match the known indecent photo? Is the uploaded photo in itself indecent? And then the authorities will review before deciding whether to prosecute.

The AI is instructed to look for "visually similar" images, so a tot in a bath vs a tot in a bath - one has been abused, one hasn't - the AI can't tell the difference so it'll flag the image. How many "tots in a bath" images do you think parents have across the world? Even Apple themselves don't believe the "one in a trillion" crap as they've already set up an appeal system to get locked accounts back - why do that if it's just a "one in a trillion" chance of a mistake being made?

I support the move and I hope other cloud services follow suit (if they don’t already do this kind of thing).

So why haven't YOU posted all your photos for everyone to view, just so that we can all check that you're not a paedo, terrorist, criminal, I mean what are you hiding?

If you don't post every image you own then you MUST have something to hide, mustn't you!

That is effectively what your "I hope other cloud services follow suit" means in reality - guilty till proven innocent, no right to privacy, and if you want privacy then you MUST be hiding something. Can't you see how wrong that thought process is? Yet that is exactly what you're asking for.
 
The AI is instructed to look for "visually similar" images, so a tot in a bath vs a tot in a bath - one has been abused, one hasn't - the AI can't tell the difference so it'll flag the image. How many "tots in a bath" images do you think parents have across the world?

Where does it say it’s doing an AI context search on photos?

From everything I’ve read it’s done by comparing a database of CSAM hashes against hashes of your photos. There is some AI involved so that a hash still matches when the image has been tweaked (cropped, colours adjusted etc).
 
It's not AI, it uses CSAM hashes. Have a look at the 2nd link in the OP for some more details.

I foresee hash collisions causing false positives with this method.
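
For what it's worth, here's a rough way to estimate accidental collisions if the hashes behaved like uniformly random bit strings. Real perceptual hashes don't behave like that - visually similar photos are deliberately pushed towards the same hash, which is exactly where false positives would come from - and every number below (hash width, match radius, database size) is a guess for illustration only.

```python
# Birthday-style estimate of purely random hash collisions, assuming every
# hash were an independent uniformly random k-bit string (real perceptual
# hashes are not). All the numbers are guesses for illustration.

from math import comb

def expected_accidental_matches(n_photos, n_database, hash_bits, max_hamming):
    """Expected number of random matches between a photo library and a hash database."""
    # Fraction of the k-bit hash space within `max_hamming` bits of a given hash.
    near_fraction = sum(comb(hash_bits, d) for d in range(max_hamming + 1)) / 2**hash_bits
    return n_photos * n_database * near_fraction

# Exact match only vs. allowing a few bits of slack:
print(expected_accidental_matches(20_000, 300_000, hash_bits=96, max_hamming=0))
print(expected_accidental_matches(20_000, 300_000, hash_bits=96, max_hamming=8))
```

With a wide enough hash the purely random collision rate is tiny either way; the real argument is about how the hash behaves on photos that merely look alike, which this model can't capture.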

Anything I put in the cloud is encrypted, especially after the anti-gay censorship laws Labour introduced in 2008.

The corporations are WAY worse. Governments are mostly interested in the big picture stuff and crime, corporations want all of your data for their own gains.

Apple will pass this off as helping the government catch criminals, but also spy on people for themselves at the same time.

Governments are the bigger threat despite the massive corporate data harvesting: corporations want to use data to make profit, while governments use your data to arrest and imprison you.
 
Where does it say it’s doing an AI context search on photos?

In Apple's document I posted above - specifically "For all images processed by the above system, regardless of resolution and quality, each image must have a unique hash for the content of the image. The main purpose of the hash is to ensure that identical and visually similar images result in the same hash". So the AI creates a hash based on the image it sees, and in this case the image only has to be "visually similar" to produce a hash which may match the hash of an image of an abused kid - hence my "tot in a bath" example of an abuse image which may be "visually similar" to one lots of people have of their own kid. A human will know the difference, but to an AI, if it's "visually similar" enough then it gets the same hash and Bingo - you're a paedo!
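
To make that concrete, here's a minimal sketch of the matching step being described: a photo's hash is compared against a set of known hashes and counted as a hit if it's "close enough". The toy 16-bit values, the tolerance and the helper names are all made up - Apple's real pipeline uses NeuralHash plus a private set intersection protocol, neither of which is shown here - but it shows how loosening the tolerance turns a near-miss into a flag.

```python
# Minimal match-decision sketch: flag a photo if its hash is within a given
# Hamming distance of any hash in a "known" database. Toy values only.

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_flagged(photo_hash, database_hashes, tolerance):
    """True if the photo hash is within `tolerance` bits of any database hash."""
    return any(hamming(photo_hash, h) <= tolerance for h in database_hashes)

database = {0b1011_0110_1100_0011, 0b0001_1111_0000_1010}  # toy 16-bit "known" hashes
photo = 0b1011_0110_1100_1010                              # 2 bits away from the first entry

print(is_flagged(photo, database, tolerance=0))  # strict exact match: False
print(is_flagged(photo, database, tolerance=4))  # looser "visually similar" match: True
```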
 
Exactly, so let’s wait for details. They are already working closely with NCMEC so I wouldn’t be surprised if these “Apple” employees are no different in training and vetting to NCMEC employees. But let’s wait and see.

With something like this, transparency up front is important. Details like who will be doing the vetting and where in the process the on-device scanning sits are critical for privacy reasons, and people should be questioning it before implementation, not after.
 
That sounds like you are hiding homophobic images?

Quite often these laws get used as loopholes to do other things, which is probably intentional. But no one questions them because they'd get called "bigots" or some kind of *ist on social media by society's dumb people.

A bit like what Apple are doing. Oh, we have to scan all of your data, because child protection. If you disagree you must hate children or be a paedo!
 