Indeed, performance data should be based on native-resolution testing, with none of these add-on technologies such as RT, DLSS & FG.
RT, whilst a thing going forward, just impacts performance too much, and DLSS & FG are there to alleviate poor implementation/optimisation. Nice to have - but I think most of the time it's an add-on that's poorly implemented, and if RT wants to be taken seriously then they need to limit the impact on performance.
I get those who feel DLSS/FSR and now Frame Gen are cheating - FG I'm on the fence with. Whilst you do get more frames overall, half are delivered via another method. But if you can't discernibly tell the difference and it feels like high FPS with no loss in quality, then I'm getting more frames - half rendered traditionally, the rest inserted via some quick interpolation (or whatever it uses) from the traditionally rendered frames.
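To make the "half rendered, half inserted" idea concrete, here's a toy sketch of the simplest possible interpolation - a straight blend of two rendered frames to produce the in-between one. Purely illustrative and hypothetical: actual Frame Gen uses motion vectors, optical flow and AI models on the Tensor/optical-flow hardware, not a naive average.

```python
# Toy illustration only (NOT Nvidia's actual algorithm): frame generation at its
# simplest inserts a synthesised frame between two traditionally rendered frames,
# doubling the presented frame rate. Real FG uses motion vectors / optical flow;
# this sketch just does a linear blend of pixel values.
import numpy as np

def blend_frame(prev_frame: np.ndarray, next_frame: np.ndarray, t: float = 0.5) -> np.ndarray:
    """Return an in-between frame as a weighted blend of two rendered frames."""
    return ((1.0 - t) * prev_frame + t * next_frame).astype(prev_frame.dtype)

# Two hypothetical 1080p RGB frames rendered traditionally
frame_a = np.zeros((1080, 1920, 3), dtype=np.uint8)        # dark frame
frame_b = np.full((1080, 1920, 3), 255, dtype=np.uint8)    # bright frame

# The "generated" frame presented between them
generated = blend_frame(frame_a, frame_b)
print(generated.shape, generated[0, 0])  # (1080, 1920, 3) [127 127 127]
```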
Gfx cards back in the day had dedicated processors for certain jobs - then we went to unified streaming multiprocessors (8800 GT era?) that could do all tasks and switch between processing jobs on demand - and current gfx cards seem to be reintroducing specific cores/processors for certain roles again, such as Tensor cores etc. So whilst folk aren't getting traditional increases in raw processing power, AMD & Nvidia are bringing in new technologies to help get to 4K 120fps with minimal loss in IQ. These have a cost obviously, but folk are slow to accept those costs as an alternative to the raw traditional rasterisation gains many still expect.
Increases in raw raster may be hard to keep linear as in the past, so new techs are implemented - I'd rather Nvidia dropped the expensive GDDR6X from cards in favour of cheaper GDDR6 and passed on the savings, as I see no advantage to using 6X over 6.