It's very difficult to run scenarios when folks aren't presenting any kind of underlying theory of how vRAM works in most games.
The theory I've put forward is quite simple. The purpose of putting data into vRAM is so the GPU can use that data to construct the next frame, and the more complex the scene, the longer the GPU takes to render it, so the frame rate goes down. In other words, the more data you put into vRAM, the lower your performance, because the GPU has more work to do. Then it becomes a simple matter of bottlenecks: which gives out first? Do you run out of vRAM with performance headroom left, or do you run out of performance with vRAM headroom left? The answer to date is the latter. As we load up modern games at 4K Ultra presets we see the GPUs struggling to hold frame rate while vRAM usage sits well below 10 GB.
The major advance in our understanding here is that we now have tools that more accurately report the vRAM actually in use, rather than what is merely allocated; the two values can differ substantially, and typically do.
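For anyone who wants to poke at those numbers themselves, here is a minimal sketch, assuming an NVIDIA card and the nvidia-ml-py (pynvml) Python bindings, both of which are my own choice of example rather than anything mentioned above. It prints the device-wide figure alongside per-process figures so you can see they don't match. Note that both are still allocation counts as reported by the driver; the "actually in use" number the newer overlay tools show comes from hooking the graphics API inside the game, which a quick script like this can't do.

import pynvml

pynvml.nvmlInit()
try:
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

    # Device-wide view: everything any process (plus the driver itself)
    # has reserved on the card.
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"Device total: {mem.total / 2**30:.1f} GiB, "
          f"reserved: {mem.used / 2**30:.1f} GiB")

    # Per-process view: what each running graphics process is holding,
    # which is usually much lower than the device-wide number suggests.
    for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
        if proc.usedGpuMemory is not None:
            print(f"PID {proc.pid}: {proc.usedGpuMemory / 2**30:.2f} GiB")
finally:
    pynvml.nvmlShutdown()

Run that with a game open and you'll typically see the per-process figure for the game sitting well under the device-wide total, which is exactly the allocated-versus-used gap being talked about.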