Hi - I haven't posted here before and only recently started playing with photography on my Lumix LX2, so forgive the newbie questions.
So, HDR rendering: I take bracketed shots at -1, 0, +1 EV and Photomatix does its thing. Lovely. But why can't you just programmatically over- and under-expose a single, normally exposed image to create the final HDR render? Or better still, why can't HDR algorithms simply work out the final image from a single frame?
I take it there is 'more information' in a set of three distinct images than a single frame can provide if you were to under- and over-expose it in CS2, say, but that suggests to me that the exposure adjustment in CS2 is a hack and doesn't really do what it says on the tin. Am I missing something?
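To make the 'more information' point concrete, here is a tiny illustrative sketch (my own toy model in NumPy, not how any real camera or Photomatix works) of a sensor that clips everything above 1.0. Once a highlight is clipped in a single exposure, darkening that file afterwards cannot recover it, whereas a genuinely darker bracket still records the detail:

```python
import numpy as np

# Two scenes identical except for the highlight brightness (linear light,
# values are purely illustrative; 1.0 is the sensor's clipping point).
scene_a = np.array([0.02, 0.5, 3.0])
scene_b = np.array([0.02, 0.5, 3.5])

def expose(scene, ev):
    """Capture the scene at a given EV offset; the sensor clips at 1.0."""
    return np.clip(scene * 2.0 ** ev, 0.0, 1.0)

# A single normal exposure records both scenes identically -- the
# highlight detail is destroyed at capture time.
print(expose(scene_a, 0))   # [0.02 0.5  1.  ]
print(expose(scene_b, 0))   # [0.02 0.5  1.  ]

# Darkening that single file in software afterwards cannot tell
# the two scenes apart either; it just darkens the clipped value.
print(expose(scene_a, 0) * 0.25)
print(expose(scene_b, 0) * 0.25)

# A genuine -2 EV bracket, taken in-camera, still distinguishes them.
print(expose(scene_a, -2))  # [0.005 0.125 0.75 ]
print(expose(scene_b, -2))  # [0.005 0.125 0.875]
```

So a post-hoc exposure slider can only rescale the values the file already contains; a real bracket samples the scene again with a different clipping window, which is where the extra information comes from.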
Thanks
