Apple to scan images for child abuse

Apple advert in 2019...

[image: Apple privacy billboard advert]

"Until it needs to be repaired. Then it's ours." :cry:
 
Just to add to this, all you need to do is look at YouTube. The AI frequently gets things wrong and causes issues for users and creators.

The 'support' network for getting an actual human to investigate a false positive with YouTube/Facebook/Twitter is a nightmare. I highly doubt Apple will do any better. They make billions, but put very little back into proper support.
 
No, I get the argument, I'm not completely naive. But until they actually do something beyond this feature, it's speculation at best, through to complete FUD or conspiracy theories at worst.

I'm sorry, but I do think you are being naive. When they do start something beyond this feature it will be too late. Look how the police have extended various laws like RIPA. History shows that power will be abused unless constrained. The time to stop this abuse is now, before it gets started.
 
The 'support' network for getting an actual human to investigate a false positive with YouTube/Facebook/Twitter is a nightmare. I highly doubt Apple will do any better. They make billions, but put very little back into proper support.

Most of the time they don't. The system just instantly denies the appeal, unless you're someone famous, as they do support them.

The number of humans they would need to hire to properly moderate these sites is financially unviable; there are hundreds of millions of users, and someone who gets banned can simply go and make another account. Governments bang on at them about properly moderating it, but it's never going to happen.
 
Because it isn't looking at its phone, drunk, or showing off to the lass.

Quite right. It's too busy driving into the sides of buses, mistaking billboards for roads and driving into oncoming traffic at 60mph because the sun was at the wrong angle to have time for all that! :p
 
Quite right. It's too busy driving into the sides of buses, mistaking billboards for roads and driving into oncoming traffic at 60mph because the sun was at the wrong angle to have time for all that! :p


Tbf humans do all of these things too!
 
From what I understand there has been a fairly big push to inform teens of the risks (both legal and blackmail) of "sexting" as part of sex education etc.

If your age is what I think it is, there is little chance of you sexting when you were 15, given the phones at the time were big, bulky, very expensive, and didn't IIRC do texts.

Nokia 3310, free GPRS chat rooms.

And actually nvm, it was when I was 16+ during my A levels, I misremembered.
 
Curiouser and curiouser. The international privacy/rights community is lashing out at Apple over this. Not only has the 'neural hashing algorithm' (which Apple confirm will flag not only exact matches but 'similar' matches; there's no such thing as cryptographically 'similar') been reverse engineered and found to have already been present since iOS 14.3 (and in macOS from around the same time), but the first hash collisions have already been produced. Dire.
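For anyone wondering why 'similar' matters: a cryptographic hash only ever matches byte-identical files, while a perceptual hash is designed so that visually similar images land near each other. Here's a minimal Python sketch of that difference; the `average_hash` below is the classic aHash idea on a toy 3x3 greyscale grid, not Apple's NeuralHash (which runs a neural network over image features), and all names and numbers are purely illustrative.

```python
import hashlib

def crypto_hash(data: bytes) -> str:
    # Cryptographic hash: flipping a single bit changes roughly half the
    # output bits, so only byte-for-byte identical inputs ever match.
    return hashlib.sha256(data).hexdigest()

def average_hash(pixels: list[int]) -> int:
    # Toy perceptual hash: one bit per pixel, recording whether it is
    # brighter than the mean. Visually similar images give similar bits.
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits; a small distance counts as a "similar" match.
    return bin(a ^ b).count("1")

original = [10, 200, 30, 180, 90, 160, 20, 210, 40]  # 3x3 greyscale image
tweaked = [12, 198, 33, 180, 90, 160, 20, 210, 40]   # slightly re-encoded copy

print(crypto_hash(bytes(original)) == crypto_hash(bytes(tweaked)))      # False
print(hamming_distance(average_hash(original), average_hash(tweaked)))  # 0 -> "similar" match
```

Two unrelated images that land on the same (or near enough) perceptual hash are exactly the collisions now being produced against NeuralHash; engineering that against a cryptographic hash is computationally infeasible, but against a perceptual one it's routine.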
 
It looks like Apple are backing off from this:

https://time.com/6095103/apple-child-safety-delay/

“Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features,” Apple said in an update posted at the top of a company webpage detailing the device-scanning plans.
 
What are you talking about?

As I replied to you earlier regarding possible consequences for law-abiding people:

As Apple themselves said, if it's a "nearly identical or visually similar" hash match it'll be checked... not an identical match, just a nearly identical or visually similar one, meaning there is an AI doing the work (called NeuralHash) and the vast majority will NOT be identical matches.

This bit is still relevant even if Apple do tweak the way this works -

Even Apple, who say a false positive is a "one in a trillion" event, also say there's a method for users to claim that the "one in a trillion" AI has made a mistake and locked an account in error, and to appeal to get the lock removed; by which point the account has already been passed to the NCMEC and possibly the police. Even Apple don't believe their own guff!
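To be fair to the "one in a trillion" figure, it's an account-level claim: the system reportedly only escalates an account after some threshold of matches, so the wrongful-flag probability is a binomial tail over many per-image checks. A back-of-envelope Python version, with purely illustrative numbers (these are not Apple's published parameters, and the independence assumption does all the heavy lifting, since a deliberately crafted collision image would match every single time):

```python
from math import comb

def prob_account_flagged(n: int, p: float, threshold: int) -> float:
    # P(at least `threshold` false positives out of n independent image
    # scans), i.e. the binomial upper tail. Terms shrink so fast that we
    # can stop as soon as they underflow to zero.
    term = comb(n, threshold) * p**threshold * (1 - p)**(n - threshold)
    total = 0.0
    for k in range(threshold, n):
        total += term
        term *= (n - k) / (k + 1) * p / (1 - p)  # pmf ratio f(k+1)/f(k)
        if term == 0.0:
            break
    return total

# 10,000 photos, a 1-in-a-million per-image error rate, 30 matches required.
print(prob_account_flagged(n=10_000, p=1e-6, threshold=30))  # astronomically small under these assumptions
```

The takeaway: the headline number only holds if every per-image check really is independent, which an adversarially crafted near-collision defeats.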
 
I thought any matches were manually checked before any action? That should reduce the false positives to close to nil.
 
Apple should have just been forthcoming with the details and the process, rather than skirting around it all shouting "think of the children!!!11", adding to the confusion and making it all appear sketchy, especially around privacy (oh the irony).

I thought any matches were manually checked before any action? That should reduce the false positives to close to nil.

If that is the case then it could be taken as an invasion of privacy.

They'll just change a few words around and sneak it in the backdoor...

It would backfire spectacularly and massively tarnish their privacy-conscious image, considering the number of eyes on their code.

I could see them going for round two, but it'll need to be handled a hell of a lot better than what they did this time.
 