I partially agree that testing could be better. As we have seen time and time again, many people blame certain "game" issues on a lack of VRAM or something else, and it turns out that once the game gets patched and/or a driver update lands, the issue caused by "VRAM limitations" suddenly disappears....
DF are very good and probably the best (they even picked up on Deathloop's texture loading/rendering issues during fast movement/camera angle changes on lower-VRAM cards, something no one else noticed), but sadly a lot of people have written them off as "Nvidia shills" due to their love of ray tracing and DLSS, the same way those same people have written off HU, GN etc. as "Nvidia shills" too; I think TechPowerUp are also on that list now.
As per my comment above, it's pointless for sites like PCGH and TechPowerUp to just show how much VRAM is used; on its own it means little, as we have witnessed.... Frame latency is the main thing I would like to see more of when it comes to VRAM bottlenecks, as frame times are always the first thing to suffer and the first indicator of a VRAM issue, and FPS average/min/max bar charts don't show this well.
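To illustrate what I mean, here's a rough sketch of the kind of frame-time breakdown I'd rather see, assuming a PresentMon-style CSV capture with an "MsBetweenPresents" column (the file name and spike threshold are just placeholders I made up). A handful of long frames that show up clearly in the 99th percentile and spike count can be almost invisible in an average-FPS bar chart.

```python
# Sketch: summarise frame times from a PresentMon-style capture instead of
# relying on average FPS. Assumes a CSV with an "MsBetweenPresents" column;
# other tools name the column differently, so adjust as needed.
import csv
import statistics

def frame_time_report(path: str, spike_threshold_ms: float = 50.0) -> None:
    with open(path, newline="") as f:
        frame_times = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

    frame_times.sort()
    n = len(frame_times)
    avg = statistics.fmean(frame_times)
    p99 = frame_times[int(n * 0.99) - 1]                          # 99th percentile frame time
    one_pct_low = statistics.fmean(frame_times[int(n * 0.99):])   # mean of the worst 1% of frames
    spikes = sum(t > spike_threshold_ms for t in frame_times)     # count of big hitches

    print(f"avg frame time:  {avg:6.2f} ms ({1000 / avg:6.1f} fps)")
    print(f"99th percentile: {p99:6.2f} ms")
    print(f"1% low (mean of worst 1%): {one_pct_low:6.2f} ms")
    print(f"frames over {spike_threshold_ms:.0f} ms: {spikes} of {n}")

frame_time_report("capture.csv")  # placeholder path
```

Two runs can have identical average FPS while one of them has dozens of frames over 50 ms, which is exactly the stutter you feel when VRAM starts spilling over.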
I agree some issues may not be VRAM related, but the issues disappearing after a patch doesn't necessarily mean VRAM was never the problem. In one game where the issues disappeared, system RAM usage went up post-patch, which suggests to me that they moved some assets from VRAM to system RAM to resolve it (no problem with this, if it works it works). I also agree regarding TPU etc.
I just feel that we're in an era where the game itself may dynamically adjust to its hardware environment, and that's going to fool testing methods that assume a static game configuration. I am curious how DF are measuring resolution and frame times, as they're clearly doing it differently to everyone else, especially since they can do it on consoles. Perhaps the industry then needs to move onto whatever they're doing.