Apple to scan images for child abuse

  • Thread starter: LiE

https://www.macrumors.com/2021/08/06/snowden-eff-slam-plan-to-scan-messages-images/

What are people's thoughts on this?

MacRumors is quite biased towards a particular, US-centric view of privacy. Snowden doesn't like it and thinks Apple is going to abuse it.

Personally I'm OK with this. The intentions are good, the technical implementation looks to maintain privacy and I trust Apple to make it work.

More details about Apple's plan:

https://www.macrumors.com/2021/08/05/apple-new-child-safety-features/
 
Details seem sketchy. I'd want to know how they define the boundaries between child porn and just sending pics of your kids to family / friends.

I would hazard a guess there's going to be some kind of AI-based photo recognition going on. How did they train the model, and what happens when something it doesn't like shows up? Are there human moderators for review? At what stage do they involve law enforcement?

It's not AI, it uses CSAM hashes. Have a look at the 2nd link in the OP for some more details.
 
Of course Apple will abuse it. People are sleepwalking into living in the PRC under the guise of safety; the data will be abused by Apple, and child abusers will just use a different platform if they aren't already.

If you take that stance, you could quite easily say they are already abusing your data. If you define the trust level as that low, then you must assume they are already doing shady stuff behind closed doors.

Apple is all about privacy right now, it's a huge part of their business model.
 
It's just an excuse to scan and monitor everything.

"Child abuse" is a get out of jail free card for the companies and government to clamp down the monitoring.

It's not really scanning though, is it? It's checking against specific known image hashes.

Apple said its method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, Apple said the system will perform on-device matching against a database of known CSAM image hashes provided by the NCMEC and other child safety organizations. Apple said it will further transform this database into an unreadable set of hashes that is securely stored on users' devices.
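For anyone wondering what "matching against a database of known hashes" means in practice, here's a minimal sketch. Apple's actual system uses a perceptual hash (NeuralHash) plus a private set intersection protocol, not a plain cryptographic digest, so treat this purely as an illustration of the "compare fingerprints, never look at the picture" idea; the hash values and function names below are made up.

```swift
import Foundation
import CryptoKit

// Illustrative list of "known" hashes (hex strings) - placeholder values only.
let knownHashes: Set<String> = [
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
]

// Hex-encode the SHA-256 digest of some image data.
func hexDigest(of data: Data) -> String {
    SHA256.hash(data: data)
        .map { String(format: "%02x", $0) }
        .joined()
}

// The "match" step: the photo itself is never inspected, only its
// fingerprint is compared against the supplied list.
func matchesKnownHash(_ imageData: Data) -> Bool {
    knownHashes.contains(hexDigest(of: imageData))
}

// An ordinary photo simply doesn't match anything.
let holidaySnap = Data("family picnic".utf8)
print(matchesKnownHash(holidaySnap))   // false
```

The point of this design is that an image which isn't already in the supplied hash list produces no signal at all; nothing in this feature classifies what's actually in your photos.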
 
How does this system work for nude images? In another source I read, it said they will be looking at images and sending out notifications.

I think you are referring to this; it's on-device and only applies to children's accounts.

Communication Safety
First, the Messages app on the iPhone, iPad, and Mac will be getting a new Communication Safety feature to warn children and their parents when receiving or sending sexually explicit photos. Apple said the Messages app will use on-device machine learning to analyze image attachments, and if a photo is determined to be sexually explicit, the photo will be automatically blurred and the child will be warned.

When a child attempts to view a photo flagged as sensitive in the Messages app, they will be alerted that the photo may contain private body parts, and that the photo may be hurtful. Depending on the age of the child, there will also be an option for parents to receive a notification if their child proceeds to view the sensitive photo or if they choose to send a sexually explicit photo to another contact after being warned.

Apple said the new Communication Safety feature will be coming in updates to iOS 15, iPadOS 15 and macOS Monterey later this year for accounts set up as families in iCloud. Apple ensured that iMessage conversations will remain protected with end-to-end encryption, making private communications unreadable by Apple.
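As a rough illustration of the flow described above (and only that; the classifier call, the threshold and the age cut-off below are assumptions for the sketch, not Apple's actual API), the logic amounts to: run an on-device model on the attachment, blur it if the score crosses a threshold, and optionally offer a parent notification for younger children.

```swift
import Foundation

struct IncomingPhoto {
    let data: Data
    let senderID: String
}

enum DisplayDecision {
    case showNormally
    case blurWithWarning(offerParentNotification: Bool)
}

// Stand-in for the on-device ML model; a real implementation would run a
// local classifier. Returns a score in 0...1.
func explicitScore(for photo: IncomingPhoto) -> Double {
    return 0.1   // constant used purely so the sketch runs
}

// Decide how Messages should display the attachment for a child account.
func decideDisplay(for photo: IncomingPhoto, childAge: Int) -> DisplayDecision {
    let threshold = 0.8                       // assumed cut-off
    guard explicitScore(for: photo) >= threshold else {
        return .showNormally
    }
    // Per the description above, the parent-notification option depends on
    // the child's age; 13 is an assumed boundary for illustration.
    return .blurWithWarning(offerParentNotification: childAge < 13)
}

let photo = IncomingPhoto(data: Data(), senderID: "example-contact")
print(decideDisplay(for: photo, childAge: 11))   // showNormally for the stub score
```

The important property, per Apple's description, is that all of this runs on the device; the conversation stays end-to-end encrypted and any notification goes to the parent account, not to Apple.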
 
These things are always initially introduced with good intentions and in a way that no-one can object. Then the scope expands. In this case I expect something along the lines of...

* Scanning cloud images for child abuse. No-one can object to that, right?
* Then extend it to scan for terrorism related issues. No-one can object to that, right?
* Extend functionality to scan images at the phone level itself, because it prevents paedophiles and terrorists from avoiding the tool by not uploading to the cloud. If you have nothing to hide, right?
* Extend it to scan for images for other crimes. It's only a small change. Some people might object but shame on them if they do.
* All images scanned for whatever is deemed public benefit. It's only a small extra step and you're already being scanned so why object?

Apple could do that today; they already have the machine learning.
 
This is a very slippery slope and Apple should not have a backdoor to scan any of my photos. I have nothing to hide but still don't like it.

I thought Apple liked privacy, I guess not. If Apple has the means to identify child abuse images on a user's device, then they have the ability to trace the images back to the original device/account. This is one hell of a big backdoor that can very easily be abused by governments if they force Apple to give them access to it.

As mentioned above, Apple already have the tech to scan images, yet they don't.

CSAM hash matching isn't scanning images; it's literally comparing hash codes, very low-level stuff.
 
Surely the only people worried about this are people with child abuse images on their phones?

I really have no issue with it at all.

Nope, those who are the most vocal are those who think Apple are going to use this as a catalyst to scan their entire library and report them to the police for not shopping at the government approved supermarkets... you get the idea.

They forget they could do that today already if they wanted.
 
That's a fair point and I have no objection to that (my point about vulnerabilities notwithstanding).

If you're a paedo muppet then how likely would you be to upload those images? Surely you'd just store them offline somewhere though.

It's done on device, so not in the cloud.
 
Yep, I know. I'm just messing.

:)

It's consistent with every other safety measure introduced post-9/11(ish), just a further expansion of the total surveillance society, introduced one brick at a time so as not to wake the sheep up. "There's one or two bad apples and to catch/stop them the rest of you have to give up x, y and z." Works every time.

The internet about 20 years ago was free and open but is now nothing more than a tool used for surveilling everyone.

/tinfoil

There will be false positives all over the place. Apple will abuse it and OFC it will get hacked and used as a backdoor for some malware.

That's not how this works at all. https://www.macrumors.com/2021/08/05/apple-new-child-safety-features/
 
Doesn't bother me if they automatically scan cloud data on the cloud side, but as soon as you start enabling such activity on the client side it's a slippery slope: it becomes hard to argue against increasingly wider measures, and once you let that become normal it will eventually be abused, even with good intentions.

Does it though? I see many people stating that if they do A then it will lead to XYZ, but where has this happened in Apple's history?

They are implementing a feature that they themselves have publicised, everything else that it may lead to is conjecture.
 
You seem to be burying your head in the sand and going full steam ahead with rose-tinted glasses on, even when it has been pointed out to you that Apple aren't so forthright in terms of privacy and that history has shown how other tools "for good" have been abused.
You seem to be displaying full fanboyism rather than any common sense :confused:

Where has that been demonstrated?

I'm just not easily convinced that Apple has ill intentions with this. I am not as cynical as some here.
 