A bit of research into how VR headsets work would tell you that it's not wrong in Steam...
Most VR headsets render to a higher-resolution buffer so that barrel distortion correction can be applied and the image still looks correct after it has passed through the lens. (Ironically the G1 was an exception; I believe this was actually a WMR-specific way of doing things previously, where they simply chose a different base reference for 100% that didn't include the multiplier... there may also have been less distortion in the lenses used.) For pretty much every headset, if you compare its Steam resolution to its panel resolution, this works out to roughly 1.4x the linear resolution values.
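To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The ~1.4x figure, the G2's 2160x2160 per-eye panel, and the assumption that SteamVR's slider scales pixel count are all taken as approximations, not exact driver values:

```python
# Rough numbers for the render-target relationship described above.
# The ~1.4x multiplier and the per-eye panel size are approximations.
PANEL_W, PANEL_H = 2160, 2160   # per-eye panel resolution (HP Reverb G2)
DISTORTION_SCALE = 1.4          # approximate linear pre-distortion multiplier

# The 100% render target scales the panel resolution linearly so enough
# samples survive the barrel distortion correction pass.
target_w = round(PANEL_W * DISTORTION_SCALE)
target_h = round(PANEL_H * DISTORTION_SCALE)
print(f"100% render target: {target_w}x{target_h}")  # -> 3024x3024

# SteamVR's slider scales *pixel count*, so the setting whose pixel count
# matches the raw panel is 1 / 1.4^2, i.e. ~51% -- the "native 50ish%".
panel_pct = 100 * (PANEL_W * PANEL_H) / (target_w * target_h)
print(f"panel pixel count ~= {panel_pct:.0f}% in SteamVR")  # -> ~51%
```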
It looks bad at "native" 50ish% because that ends up significantly undersampling the majority of the frame once distortion correction is applied. For optimal visual quality you want to be running at least 100% in Steam if your GPU is capable. It may be necessary to reduce it simply because the number of pixels you are asking your GPU to push is enormous and performance demands it, but it isn't necessary to reduce it because of some mythical bug. That whole bug idea came about from people not understanding that their shiny new VR headset is not a simple monitor, and that notions of "native panel resolution" don't apply in the same way.
As for sources: here it is from Peterson himself, since it doesn't get much more definitive than that:
https://www.reddit.com/r/HPReverb/c...urce=share&utm_medium=ios_app&utm_name=iossmf
If you want a more in-depth understanding of why it is necessary, this is a very thorough explanation:
https://www.youtube.com/watch?v=B7qrgrrHry0&t=654s
Also, to clarify the point, there really isn't such a thing as a "native" rendering resolution in VR with the way things currently work. It's like trying to take an image wrapped around a round ball and stuff it into a flat square without any data loss or empty space left at the end. You either undersample the whole image to varying degrees (under ~50% SS), oversample the whole image to varying degrees (over ~100% SS), or get some combination of undersampling and supersampling depending on which part of the image you are looking at (any value between ~50% and ~100%).

If what you would consider "native" is achieving 1 rendered pixel to 1 displayed pixel at the worst point of distortion, then you need ~1.5x panel resolution on the G2, but ultimately that still isn't really "native", as you are supersampling everything else for the sake of that worst point being 1:1. It is, however, "ideal" in terms of optimal image quality. Going beyond can still help with things such as aliasing, so even the "ideal" resolution can be improved upon by going over 100%, but with fairly rapidly diminishing returns.
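As a toy illustration of those three regimes (hedged: this assumes the slider is pixel-count based and uses a single ~1.4x multiplier, while real distortion varies per axis and across the lens):

```python
import math

def sampling_regime(ss_pct: float, distortion: float = 1.4) -> str:
    """Classify a SteamVR supersampling % against the panel, assuming the
    slider scales pixel count and 100% renders at panel * distortion per axis."""
    # Linear sample density relative to the panel at this slider setting.
    linear_vs_panel = math.sqrt(ss_pct / 100) * distortion
    if linear_vs_panel < 1.0:
        return "undersampled everywhere"
    if linear_vs_panel < distortion:
        return "mixed: undersampled at the worst point, oversampled elsewhere"
    return "1:1 or better at the worst point, oversampled everywhere else"

for pct in (30, 50, 75, 100, 150):
    print(f"{pct:>3}% -> {sampling_regime(pct)}")
```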