So this post is a bit of a rant, but I'm hoping someone who knows more about optical engineering than I do can offer some insight.
The Portrait mode feature on iPhones and other smartphones looks completely unnatural to me. While it was just a ‘gimmick’ on smartphones over the last few years that was fine, but during the Rugby World Cup I noticed they were clearly using footage with the background artificially blurred. It looked awful. You could literally see parts of a player's top becoming blurred.
When you take a photo with a camera and focus on the subject, the background is blurred to different degrees depending on its depth. A pin-sharp f/1.8 portrait will have the face razor sharp, the horizon completely blurred, and elements closer to the subject only slightly blurred. When you use AI to simply blur the background, it loses that sense of depth. It's a binary IN or OUT of focus.
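To put a rough number on what I mean, here's a toy thin-lens calculation (with made-up example numbers, not anything a phone actually computes): the blur disc a real lens draws grows continuously the further an object sits from the focal plane.

```python
# Rough sketch of real optics: the circle-of-confusion diameter on the sensor
# for an out-of-focus point (thin-lens model). Example numbers are my own:
# an 85 mm lens at f/1.8, focused at 2 m.

def coc_diameter_mm(focal_mm, f_number, focus_dist_mm, subject_dist_mm):
    """Blur-disc diameter on the sensor, in mm, for an object at subject_dist_mm."""
    aperture_mm = focal_mm / f_number
    return (aperture_mm
            * abs(subject_dist_mm - focus_dist_mm) / subject_dist_mm
            * focal_mm / (focus_dist_mm - focal_mm))

focal, f_num, focus = 85.0, 1.8, 2000.0   # 85 mm, f/1.8, focused at 2 m
for dist_m in (1.5, 2.0, 3.0, 5.0, 30.0):
    c = coc_diameter_mm(focal, f_num, focus, dist_m * 1000)
    print(f"{dist_m:>5.1f} m -> blur disc {c:.3f} mm")
```

Run that and something a metre behind the subject is already visibly soft, while the distant background is completely melted. That gradient is exactly what a single flat blur throws away.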
I honestly think it looks horrific. I accept it's a fun gimmick on smartphones, but I'm annoyed to see it now manifesting itself in commercial applications.
The thing is, even the standard lens on my 12 Pro Max takes great photos with natural bokeh WITHOUT the artificial manipulation.
I take delivery of the 15 Pro Max tomorrow, so maybe they've improved it, but AI can't realistically adjust the focus of different elements based on depth from a 2D image, so it's never going to look good… this is where I was wondering if anyone could offer insight?
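Just to make concrete what I mean by binary versus depth-based blur, here's a crude toy sketch (nothing to do with Apple's actual pipeline; `image` and `depth` are placeholder arrays I've assumed are single-channel and normalised 0–1):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# image: 2D float array (single channel for simplicity)
# depth: same shape, 0 = nearest, 1 = farthest, values I've assumed are normalised

def binary_portrait(image, depth, focus_depth, threshold=0.3, sigma=8.0):
    # Everything beyond the threshold gets one fixed blur: the "IN or OUT" look.
    mask = np.abs(depth - focus_depth) > threshold
    blurred = gaussian_filter(image, sigma=sigma)
    return np.where(mask, blurred, image)

def depth_weighted_portrait(image, depth, focus_depth, max_sigma=8.0, steps=8):
    # Blur radius grows with distance from the focal plane, closer to a real lens.
    out = image.copy()
    dist = np.abs(depth - focus_depth)
    for i in range(1, steps + 1):
        band = (dist > (i - 1) / steps) & (dist <= i / steps)
        out[band] = gaussian_filter(image, sigma=max_sigma * i / steps)[band]
    return out
```

Even the crude banded version restores the gradient: a player's arm a step behind the focal plane goes slightly soft while the crowd melts away, which is exactly what a flat cut-out blur can't do.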