You make it sound like some regular employee will be reviewing them. That definitely won’t be the case.
These types of images should not be reviewed by any Apple employee.
None, as that’s not how it works. It’s been covered a lot in this thread already.
Please point out where Apple have told people WHO will be making the checks, because nothing Apple have released says ANYTHING about using a mystery specialist.
So this means Apple employees will get to see people's random photos which get picked up by accident by this scan.
There’s a threshold, so nothing happens unless you somehow rack up a number of false positives. Apple puts the odds of an account being incorrectly flagged at one in a trillion per year.
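To put rough numbers on how a threshold drives the account-level error rate down, here is a back-of-the-envelope binomial calculation. The per-image error rate, library size, and threshold below are deliberately made-up figures for illustration; Apple has only published the one-in-a-trillion per-account claim, not the underlying numbers.

```python
# Back-of-the-envelope: odds of an account crossing the review threshold
# purely by chance. All numbers here are hypothetical, not Apple's.
from math import comb

def p_flagged(n_photos: int, p_fp: float, threshold: int) -> float:
    """P(at least `threshold` false matches among `n_photos`): binomial tail."""
    below = sum(comb(n_photos, k) * p_fp**k * (1 - p_fp)**(n_photos - k)
                for k in range(threshold))
    return 1 - below

# Even a pessimistic 1-in-1,000 per-image error rate over 1,000 photos gives
# only ~1-in-10-million odds of reaching a hypothetical threshold of 10.
print(p_flagged(1_000, 1e-3, 10))  # ~1.1e-07
```

The point is only that requiring many independent matches pushes the account-level false-positive rate far below the per-image rate; it says nothing about whether Apple's per-image rate is accurate.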
Apple says a lot of things; most of it is BS. There is no way they know how many false positives there will be. It WILL be far higher than that, though.
The first time they switch it on, they will likely get flooded with them.
The road to hell is paved with good intentions.
Surely this is only a step away from people having their doors booted in at silly o'clock in the morning because some div at Apple deemed the holiday photos of their kids playing on the beach to be 'dodgy'.
I assume the chances of an innocuous photo having the same hash value as a known indecent photo will be very low, and the reviewer will confirm two things: does the uploaded photo really match the known indecent photo, and is the uploaded photo in itself indecent? The authorities will then review before deciding whether to prosecute (roughly the flow in the sketch below).
I support the move and I hope other cloud services follow suit (if they don’t already do this kind of thing).
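For what it's worth, the flow described above amounts to a set-membership test plus a counter, with a human reviewer only involved once the threshold is crossed. A minimal sketch, where the hash function, database, and threshold are all placeholders rather than Apple's implementation (Apple's technical summary additionally uses private set intersection and threshold secret sharing, so the server learns nothing about sub-threshold matches):

```python
# Minimal sketch of match-then-review. Nothing is surfaced to a human
# reviewer until the match count crosses the threshold.
# KNOWN_HASHES, THRESHOLD and hash_fn are hypothetical placeholders.
from typing import Callable, Iterable

KNOWN_HASHES: set[int] = {0x3A7BD3E2, 0x9F1C0B77}  # stand-in database
THRESHOLD = 30                                     # stand-in threshold

def photos_for_review(photos: Iterable[str],
                      hash_fn: Callable[[str], int]) -> list[str]:
    matches = [p for p in photos if hash_fn(p) in KNOWN_HASHES]
    # Below the threshold the matches stay sealed; no reviewer sees anything.
    return matches if len(matches) >= THRESHOLD else []
```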
The AI is instructed to look for "visually similar" images, so a tot in a bath vs a tot in a bath - one has been abused, one hasn't - the AI can't tell the difference so it'll flag the image. How many "tots in a bath" images do you think parents have across the world?
It's not AI; it uses CSAM hashes. Have a look at the 2nd link in the OP for some more details.
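"CSAM hashes" here means perceptual hashes: functions designed so that re-saves, resizes and other transformed copies of the same image hash to nearby values, while unrelated images do not. Below is the classic "average hash" purely as an illustration; Apple's NeuralHash is a different, neural-network-based perceptual hash, and the file names are placeholders.

```python
# Classic "average hash" (aHash) as an illustration of perceptual hashing.
# This is NOT Apple's NeuralHash. Requires Pillow: pip install Pillow
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # One bit per pixel: 1 if brighter than the image's mean brightness.
    return sum(1 << i for i, px in enumerate(pixels) if px > mean)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# Hypothetical files: a known image and a re-saved copy of that same image.
h_known = average_hash("known_image.jpg")
h_copy = average_hash("resaved_copy.jpg")
print(hamming(h_known, h_copy) <= 5)  # small distance => near-duplicate match
```

This is why the "tots in a bath" worry is misplaced in principle: a perceptual hash is built to match transformed copies of a specific known image, not other photos of a similar scene, so an unrelated bath photo produces a hash nowhere near any database entry.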
The corporations are WAY worse. Governments are mostly interested in the big-picture stuff and crime; corporations want all of your data for their own gain.
Apple will pass this off as helping the government catch criminals while also spying on people for their own ends at the same time.
Where does it say it’s doing an AI context search on photos?
How else are they supposed to do it? IIOC exists on all social media platforms; they need people who can review flagged material and take action against it.
Exactly, so let’s wait for details. They are already working closely with NCMEC, so I wouldn’t be surprised if these “Apple” employees are no different in training and vetting from NCMEC employees. But let’s wait and see.
Anything I put in the cloud is encrypted, especially after the anti-gay censorship laws Labour introduced in 2008.
That sounds like you are hiding homophobic images?
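On the encryption point above: encrypting files client-side before upload does mean the provider only ever stores ciphertext. A minimal sketch using the Python cryptography library, with placeholder file names; note it wouldn't bypass Apple's proposal, since that matching runs on-device before upload.

```python
# Client-side encryption before upload: the provider only sees ciphertext.
# File names are placeholders. Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this key safe; without it the data is gone
cipher = Fernet(key)

with open("holiday.jpg", "rb") as fh:
    token = cipher.encrypt(fh.read())

with open("holiday.jpg.enc", "wb") as fh:
    fh.write(token)           # upload this, not the original

# Round-trip check that the key recovers the original bytes:
with open("holiday.jpg", "rb") as fh:
    original = fh.read()
with open("holiday.jpg.enc", "rb") as fh:
    assert cipher.decrypt(fh.read()) == original
```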