*** The Official iOS 15 Thread ***

How have you found it? I was considering going for it but heard a few horror stories.

I'm not a developer, but I have always run (and risked) the public betas. Compared to previous beta releases, the current public beta seems very stable. If you do install it, make sure everything is backed up first, and remember that some apps may not work. In my case, not being able to use the HSBC mobile banking app is proving troublesome; fortunately I have a second iPhone for work and have just installed it on there until the app is updated. I did anticipate this, though.

Also, you may find things get broken, fixed again, or changed as you work through the releases, and they will become quite frequent as RTM gets closer.

If you need to roll back to iOS 14, it would be through a full restore; unfortunately, there is no in-place downgrade option.
 
It does seem a minor upgrade; that appears to be the approach they're taking across the product software line this year...
 
The upcoming CSAM scanning of iOS devices (US only so far) is a terrible decision from a company that claims to value privacy. "What happens on iPhone stays on iPhone (unless we decide one of our 'approved' moderators needs to check out that image of your toddler in the bath)"

EDIT - where can a privacy-conscious individual turn now for a device that can't be scanned at will by big tech at the demand of government?
 
It looks like it's comparing the hashes of images uploaded to iCloud with the hashes of images known to contain child abuse.

What happens on your iPhone stays on your iPhone. If you have pictures of child abuse on your phone and you upload them to iCloud, they'll get flagged. I don't see a problem with this.

more info
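
Roughly, the idea behind hash matching is something like this. A toy Swift sketch only, NOT Apple's actual system (which uses a perceptual "NeuralHash" rather than a plain cryptographic hash), and all the names here are made up:

```swift
import CryptoKit
import Foundation

// Toy sketch of exact hash matching. `knownBadHashes` stands in for a
// hypothetical database of hashes of known illegal images.
func shouldFlag(imageData: Data, knownBadHashes: Set<String>) -> Bool {
    // SHA-256 digest of the raw image bytes, hex-encoded.
    let digest = SHA256.hash(data: imageData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    // Flag only on an exact match against a known entry. Note that
    // changing a single byte of the file changes the whole hash, which
    // is why real systems use fuzzy/perceptual matching instead.
    return knownBadHashes.contains(hex)
}
```

The catch, as comes up below, is that an exact hash is trivially evaded by re-saving or resizing the image, which is where the "fuzzy" matching comes in.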
 
There is also on-device scanning as far as I'm aware, just using a different method. Regardless, the matching is done by AI, and whatever Apple says about its accuracy, something will go wrong and an innocent party will have their photos viewed (without their knowledge or consent) by someone employed by Apple.

Also, what happens if this is extended to any crime? What happens if you attend a Pride march, or read literature on Chinese dissidents, and then fly to Saudi Arabia or China on holiday, only for the local government to demand from Apple the details of 'matched' photos on your device? I've also read that it's possible to transplant the hash of a flagged image onto a perfectly 'innocent' photo and disseminate it to, for example, people you don't like or want to draw attention to.

I don't see how this is in any way a positive.
 
It's the modern equivalent of someone taking their film to Boots to be developed, the developer spotting a picture of kiddy fiddling and calling the police; or Gary Glitter taking his computer to PC World, where staff spotted child porn and got him arrested. I don't see a problem with it.

You totally ignored all the other points I mentioned in favour of the "won't somebody think of the children" argument.
 
Because "what happens if this is extended to any crime?" is irrelevant here. It's something plucked out of thin air, like all the rumours we see every year about new Apple devices.

The only thing they're checking for is child abuse, so why wouldn't I focus on that? The whole thing is about the "won't somebody think of the children" argument.
 
The local scanning for nude images is kept on the device; it asks the sender/recipient "are you sure you want to do this?". Then, if they are under 13, it also warns that if you view or send this, your parent will be notified (if enabled). I see no problem with this, as the AI-based detection is all local.
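
As a rough illustration of the flow described above (a toy sketch only; the types and names are invented, and the real on-device classifier is obviously far more involved than a boolean):

```swift
// Toy sketch of the Messages nudity-warning flow as described above.
// All names here are invented; this is not Apple's API.
struct MessageImage {
    let isFlaggedByOnDeviceClassifier: Bool  // local AI detection only
}

struct ChildAccount {
    let age: Int
    let parentalNotificationsEnabled: Bool
}

enum Action {
    case deliverNormally
    case warnUser                 // "are you sure you want to do this?"
    case warnUserAndNotifyParent  // under-13s, if the parent opted in
}

func handleIncoming(_ image: MessageImage, for user: ChildAccount) -> Action {
    guard image.isFlaggedByOnDeviceClassifier else { return .deliverNormally }
    if user.age < 13 && user.parentalNotificationsEnabled {
        return .warnUserAndNotifyParent
    }
    return .warnUser
}
```

Note that everything above runs on the device; nothing leaves it except the optional parent notification.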

The other part is scanning photos being uploaded to iCloud, where it compares them against known child abuse images and, if there's a match, alerts appropriately. You can just turn off iCloud Photos to get past this if you don't want big brother scanning your images. Again, only if going to the cloud and against known images. All the big boys do this already: Microsoft, Gmail, Facebook, Instagram, YouTube etc. all scan videos and images you upload to the web, analyse them and report.

Sure, the technology exists to extend this to other crimes in the future, but this technology for scanning against known images has existed for 15+ years and is already widely used. If some party/government/entity wanted to abuse this to cover other things, they could anyway; we can't stop them even if we wanted to.
 
Again only if going to the cloud and against known images.

The scanning has been described as 'fuzzy', i.e. it has to be able to catch modified versions of known images. In other words, the matching isn't exact, and that means it will, eventually, pick up the wrong image and send it to a team at Apple to look at.
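
To make 'fuzzy' concrete: perceptual hashes are compared by distance rather than equality, so near-duplicates still match, but so, eventually, can unrelated images. A toy Swift sketch with made-up 64-bit hashes (Apple's NeuralHash is a different and larger representation, but the idea is similar):

```swift
// Toy sketch of fuzzy matching: two perceptual hashes "match" if they
// differ in at most `threshold` bits (Hamming distance). All values
// here are invented for illustration.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

func isFuzzyMatch(_ photoHash: UInt64, knownHashes: [UInt64],
                  threshold: Int = 4) -> Bool {
    knownHashes.contains { hammingDistance(photoHash, $0) <= threshold }
}

// A resized/recompressed copy of a known image might flip a bit or two
// and still match...
let known: [UInt64] = [0xDEADBEEFCAFEF00D]
let modifiedCopy: UInt64 = 0xDEADBEEFCAFEF00F   // 1 bit different -> match
print(isFuzzyMatch(modifiedCopy, knownHashes: known))  // true
// ...but an unrelated image can also land within the threshold by
// chance, which is the false-positive case being discussed here.
```

Apple's mitigation for false positives is reportedly a threshold number of matches per account before anything is surfaced for human review, but the basic point stands: distance-based matching is never exact.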
 
It's not surprising, and this was the next logical step for Apple (or anyone): to "scan" (read: snoop on) personal devices. They backed down on encrypting iCloud and hand out backups to agencies, were already scanning them for illicit material (same as other providers, to be fair), and they have a lot of pressure from agencies and governments to "do more".
The problem is, it sets a precedent for other providers to go down the same route, so I imagine Google/Microsoft et al. won't be far behind.

Two massive questions that I can see: first, what happens when something is flagged (rightly or wrongly)? Do Apple then remotely download the source material (privacy issues) and manually check it, or do they go straight to alerting the authorities, resulting in people getting knocks on the door?

Secondly, what happens when a country demands that the scope be changed (in the same way countries demand access to servers/infrastructure) to include, for example, propaganda material? Will Apple roll over like they did with iCloud backups?

It's the modern equivalent of someone taking their film to Boots to be developed, the developer spotting a picture of kiddy fiddling and calling the police....

And there have been many, many false accusations (famously Julia Somerville) over completely innocent photos, and this tool will do exactly the same, as it's designed to be a big fishing net rather than to match exact copies (i.e. fuzzy hashing). Plus, AI image recognition has been shown to be easily duped.

Because "what happens if this is extended to any crime?" is irrelevant here.

Wish that were the case, but given the history of tools developed to help governments/agencies fight crime, they've ultimately been used for illegitimate purposes such as snooping on PMs/presidents, heads of state, charity workers and journalists.

If some party/government/entity wanted to abuse this to cover other things, they could anyway; we can't stop them even if we wanted to.

I'm sure they already do; as you say, it's not exactly new technology. The issue is that it gives governments another very powerful tool, one that will unfortunately lead to more harm than good and cause a lot of problems for innocent folk.

where can a privacy-conscious individual turn now for a device that can't be scanned at will by big tech at the demand of government?

AOSP or known "hardened" OSes, I suspect.
Either way (common sense would suggest), criminals will shift to other platforms and will already be encrypting data, putting it out of the scope of the "scanner", so it's all a bit pointless; in the same way as backdooring encryption, it only screws the average user.
 
Installed the beta on my devices last night (iPad, iPhone and Watch).

I love the "notify when left behind" feature in the Find My app, but has anyone else had issues activating it for the Watch? For some reason it just doesn't want to work. Hopefully it'll be ironed out in the next release; it's just a niggle, is all.
 
Latest iOS beta = first direct app now working. I presume this means we're near to the final release. I've run the developer beta since release and have had little to no issues using it on my main phone.
 
So I'm not going to say too much, but I have experience of the authorities picking something up across the Atlantic, notifying our guys here, and somebody being arrested. I am eternally grateful to them for that, and with that in mind I fully back the CSAM scanning. I understand the concerns, though.
 
Installed the public beta on my 12 Pro Max, even though I know I probably shouldn't have.

Bugs so far:
- Screen wakes up by itself randomly when sat on the table. No notifications etc. Seems to be a common issue currently.
- Password AutoFill in Safari occasionally breaks when using BitWarden. It looks like the app is occasionally crashing; something to do with specific app builds/compile methods. Way over my head.
- Occasional app crashes. Twitter is one of them; possibly related to the above issue with BitWarden (same compile method, I've read).
- Siri suggestions widget randomly adds itself back to the widget stack I have on my home screen.
- Safari / WebKit occasionally crashes.

Nothing completely showstopping yet, though the screen-wake issue could turn into a real problem if it tears through battery life. I may try to wait it out for beta 6 to see if things improve.
 