Apple advert in 2019...
"Until it needs to be repaired. Then it's ours."
Just to add to this, all you need to do is look at YouTube. The AI frequently gets things wrong and causes issues for users and creators.
No, I get the argument, I'm not completely naive. But until they actually do something beyond this feature, it's speculation at best, and outright FUD or conspiracy theories at worst.
The 'support' network for getting an actual human to investigate a false positive with youtube/facebook/twitter is a nightmare. I highly doubt Apple will do any better. They make billions, but put very little back into proper support.
AI is actually rubbish, it makes a lot of mistakes. Why people would want an AI driving their car I don't know.
Because it isn't looking at its phone, drunk, or showing off to the lass.
Quite right. It's too busy driving into the sides of buses, mistaking billboards for roads, and driving into oncoming traffic at 60mph because the sun was at the wrong angle to have time for all that!
From what I understand there has been a fairly big push to inform teens of the risks (both legal and blackmail) of "sexting" as part of sex education etc.
If your age is what I think it is, there is little chance of you sexting at 15, given that the phones at the time were big, bulky, very expensive, and didn't (IIRC) do texts.
“Based on feedback from customers, advocacy groups, researchers, and others, we have decided to take additional time over the coming months to collect input and make improvements before releasing these critically important child safety features,” Apple said in an update posted at the top of a company webpage detailing the device-scanning plans.
...and then wonder why, down the line, their experience is less and less ideal.
What are you talking about?
As Apple themselves said, "if it's a 'nearly identical or visually similar' hash match it'll be checked"... not an identical match, just a nearly identical or visually similar hash match, meaning that there is an AI doing the work (called NeuralHash) and that the vast majority will NOT be identical matches.
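NeuralHash itself is proprietary, but the "nearly identical" matching described above is how perceptual hashing in general works: two images hash to similar bit-strings, and a match is declared when the hashes differ by at most some tolerance. A minimal sketch of that idea (the threshold value and function names here are illustrative, not Apple's actual parameters):

```python
def hamming_distance(h1: int, h2: int) -> int:
    """Count the bits that differ between two hash values."""
    return bin(h1 ^ h2).count("1")

def is_near_match(h1: int, h2: int, threshold: int = 4) -> bool:
    """Treat hashes within `threshold` differing bits as
    'nearly identical or visually similar' (threshold is illustrative)."""
    return hamming_distance(h1, h2) <= threshold
```

The point being: an exact match has distance 0, but a perceptual system deliberately accepts non-zero distances, which is exactly where false positives can creep in.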
Even Apple, who say a false positive is a "one in a trillion" event, also say there's a method for users to claim that the "one in a trillion" AI has made a mistake and locked an account in error, and to appeal to get the lock removed, by which point the account has already been passed to the NCMEC and possibly the Police. Even Apple don't believe their own guff!
I thought any matches were manually checked before any action? That should reduce the false positives to close to nil.
They'll just change a few words around and sneak it in the backdoor...