Disagree on the first point, but agree with the rest. RT, to be visually realistic, requires the fusion of two separate technologies, neither of which is mature. Simulating the real-world physics of light requires a lot of computational power (NVIDIA are getting there with dedicated RT cores and machine learning), but that's only one part of the equation. The other element that no-one really mentions is how our eyes adapt to scenes with a high (or low) dynamic range. It's all very well creating an interior scene with realistic lighting, but in the real world our pupils would dilate or constrict to compress or heighten the dynamic range, letting us see more than a current RT rendering can show. VR technology is beginning to go down that path but is a long way off being mainstream. (A rough sketch of the adaptation idea below.)
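For what it's worth, game engines already approximate this adaptation with auto-exposure ("eye adaptation") tone mapping. Here's a minimal sketch of the idea in Python; the function names, constants, and adaptation speed are my own illustration, not any particular engine's code. Exposure drifts over time toward a target set by the scene's average luminance, like a pupil dilating or constricting, and a tone curve then compresses the exposed HDR values into a displayable 0..1 range.

```python
import math

def adapt_exposure(current_exposure, scene_avg_luminance, dt, speed=1.5):
    """Drift exposure toward the level implied by the scene's average
    luminance, like a pupil adapting over time (dt = frame time in s)."""
    # Classic auto-exposure target: map the scene average to ~18% grey.
    target = 0.18 / max(scene_avg_luminance, 1e-4)
    # Exponential smoothing so sudden brightness changes adapt gradually.
    t = 1.0 - math.exp(-speed * dt)
    return current_exposure + (target - current_exposure) * t

def tonemap_reinhard(hdr_luminance, exposure):
    """Compress an exposed HDR luminance into 0..1 (Reinhard curve)."""
    v = hdr_luminance * exposure
    return v / (1.0 + v)

# Example: stepping from a bright exterior into a dim interior; over a
# few frames the exposure "opens up" and the interior becomes visible.
exposure = 0.18 / 5000.0            # adapted to a ~5000-nit exterior
for frame in range(60):
    exposure = adapt_exposure(exposure, 5.0, dt=1.0 / 60.0)
print(tonemap_reinhard(5.0, exposure))
```

The point is that "realism" here is a perceptual trick layered on top of the light simulation, not something the ray tracing itself gives you.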
I think RT does add to the experience when done properly on a machine that has the computational power. However, it is more of a hindrance than a help with console games on hardware with vastly less processing power than is needed to properly implement RT.
I firmly believe that proper dedication to engine work, coding and optimisation will always deliver more than clumsy attempts to bolt on RT.
I don't know if you do photography, but RT reminds me of the HDR fad 10-12 years ago, when software allowed photographers to bring more "pop" to their images by boosting/extracting more tonality from their RAW files. Of course, there was a minority who dialled the settings to the max and could never understand why others didn't think their photographs looked realistic. RT, for me, is in that phase: it's all dialled to the max, often to the point of being unrealistic, and it fails to recognise that "realism" is a combination of physics and how our senses perceive the world around us.