
GPU VRAM usage - how does it work?

This was said in the Titan X thread in response to the card hitting high VRAM usage in COD: AW:

GPUs will cache almost all of the data read by the card and only remove from cache when needed, this is why you can hit 10GB vram usage in a game and yet someone with a 4GB card could run the game without issues caused by a lack of vram. Most on this forum don't grasp this and think if they see a game exceed a certain amount of vram then cards with less vram will experience performance issues.

I have little knowledge of how VRAM usage works, but this explanation seems quite intuitive, especially given how 2GB/3GB cards have no trouble playing games that supposedly show higher VRAM usage on 4GB cards. It would explain why higher resolutions demand more VRAM: at any given time there is more data to cache. On the other hand, at standard resolutions we might be overestimating how much VRAM is really needed.
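As a rough illustration of the resolution point (my own back-of-the-envelope numbers, not taken from any game), the render targets alone grow with pixel count, before you even count the higher-resolution texture data a game will try to keep resident:

Code:
// Back-of-the-envelope sketch of render-target memory vs resolution.
// The buffer layout below is an assumption for illustration only.
#include <cstdio>

int main() {
    struct Res { const char* name; int w, h; };
    const Res resolutions[] = { {"1080p", 1920, 1080},
                                {"1440p", 2560, 1440},
                                {"4K",    3840, 2160} };

    for (const Res& r : resolutions) {
        const double pixels = static_cast<double>(r.w) * r.h;
        // Assumed layout: double-buffered RGBA8 colour (2 x 4 bytes/px),
        // a 32-bit depth/stencil buffer (4 bytes/px) and four RGBA8
        // intermediate targets (4 x 4 bytes/px). Real engines differ.
        const double bytes = pixels * (2 * 4 + 4 + 4 * 4);
        std::printf("%-6s ~%4.0f MB just for colour/depth/intermediate targets\n",
                    r.name, bytes / (1024.0 * 1024.0));
    }
    return 0;
}

The targets themselves are only a small slice of the total; most of the growth comes from the extra texture data the game chooses to cache, which is the behaviour discussed below.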
 
Some games save (cache) and offload textures and models to and from the frame buffer. The higher the resolution, the faster this has to happen, or the more VRAM the card needs so that the swapping occurs less often or not at all (see the COD: AW mention below). Think of what happens with your PC's RAM while loading and exiting programs repeatedly.

With COD, 10GB is probably the whole map, characters and textures all in one go, with no need to actively swap anything because everything is already available. This actually helps the game run more smoothly than on cards that have to offload and reload data. (I have a good 40 hours in COD: AW multiplayer across 4 different GPU setups.)

Depending on the resolution and preset, different GPUs will show different VRAM usage.

Just for reference: when a game hits your card's VRAM cap, your FPS is usually playable but randomly drops to single digits when swivelling the camera or loading up a new scene.
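To make the swapping idea concrete, here is a toy sketch (my own illustration, not any engine's actual code) of a least-recently-used texture cache: a card with a big budget never has to evict, while a small one churns constantly, which is where the camera-swivel stutter comes from.

Code:
// Toy model of "keep everything cached, only evict when needed".
#include <cstdint>
#include <cstdio>
#include <list>
#include <string>
#include <unordered_map>

class TextureCache {
public:
    explicit TextureCache(std::uint64_t vramBudgetBytes) : budget_(vramBudgetBytes) {}

    // Request a texture for the current frame. A hit just marks it most recently
    // used; a miss "uploads" it, evicting least-recently-used textures only if
    // the budget would otherwise be exceeded.
    void request(const std::string& name, std::uint64_t sizeBytes) {
        auto it = lookup_.find(name);
        if (it != lookup_.end()) {                    // cache hit: no upload needed
            lru_.splice(lru_.begin(), lru_, it->second);
            return;
        }
        while (used_ + sizeBytes > budget_ && !lru_.empty()) {
            const Entry& victim = lru_.back();        // evict only when we must
            std::printf("evicting %s (%llu MB)\n", victim.name.c_str(),
                        (unsigned long long)(victim.size >> 20));
            used_ -= victim.size;
            lookup_.erase(victim.name);
            lru_.pop_back();
        }
        lru_.push_front(Entry{name, sizeBytes});      // "upload" over PCI-E
        lookup_[name] = lru_.begin();
        used_ += sizeBytes;
    }

    std::uint64_t usedBytes() const { return used_; }

private:
    struct Entry { std::string name; std::uint64_t size; };
    std::uint64_t budget_;
    std::uint64_t used_ = 0;
    std::list<Entry> lru_;                            // front = most recently used
    std::unordered_map<std::string, std::list<Entry>::iterator> lookup_;
};

int main() {
    // Ten 256MB textures against a pretend 2GB card: it never all fits, so the
    // cache swaps every frame. A 12GB card would simply keep the lot resident.
    TextureCache cache(2ull << 30);
    for (int frame = 0; frame < 3; ++frame)
        for (int i = 0; i < 10; ++i)
            cache.request("texture_" + std::to_string(i), 256ull << 20);
    std::printf("resident: %llu MB\n", (unsigned long long)(cache.usedBytes() >> 20));
}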

I'm going to take a different example here for a game (Shadow of Mordor) with exact figures from three different cards at two different resolutions.


Note the usage at HD Ultra and 4K Ultra, where the cards seem to be hitting their upper limits. The GTX 680 hits the VRAM bottleneck a lot sooner as it has less VRAM (2043MB) than the rest of the cards.

R9-290 4GB
GTX970 4GB
GTX680 2GB

[Screenshots of VRAM usage for each card at HD Low, HD Med, HD High, HD Very High, HD Ultra, 4K Lowest, 4K Low, 4K Medium, 4K High, 4K Very High and 4K Ultra]
 
What Straxusii is saying is true to an extent, however he's defeating his own point: more VRAM and more 'caching' is good... less is still bad!

DirectX only offers an interface to the device driver. The developer doesn't have much choice in how this memory is allocated; it is mostly controlled by DX. A lot of this middleware goes unchecked, and most of the time GPUs have enough video memory to cope. Static resources such as textures go into VRAM and, for the most part, are only loaded once if the game is well optimised or there is enough memory capacity.

The developer has different routes he can choose for different types of resources: there are 'constant' buffers and 'temporary' buffers, for example. Constant buffers are stored in VRAM (geometry, textures) whereas temporary buffers are mainly stored in system memory (canned effects, particles etc.).
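As a minimal sketch of that distinction (using the D3D11 API for illustration; the 'constant'/'temporary' split above maps only loosely onto its usage flags, and device creation and error handling are omitted):

Code:
#include <d3d11.h>

// Static geometry: created once with IMMUTABLE usage; the driver is free to
// keep it resident in VRAM for the lifetime of the resource.
ID3D11Buffer* CreateStaticVertexBuffer(ID3D11Device* device,
                                       const void* vertices, UINT byteSize)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = byteSize;
    desc.Usage = D3D11_USAGE_IMMUTABLE;        // GPU read-only, uploaded once
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = vertices;

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, &init, &buffer);
    return buffer;
}

// Per-frame data (e.g. effect/particle parameters): DYNAMIC usage, rewritten by
// the CPU every frame; the driver typically services this from CPU-visible memory.
ID3D11Buffer* CreatePerFrameConstantBuffer(ID3D11Device* device, UINT byteSize)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = byteSize;                 // must be a multiple of 16 for cbuffers
    desc.Usage = D3D11_USAGE_DYNAMIC;          // CPU write, GPU read
    desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* buffer = nullptr;
    device->CreateBuffer(&desc, nullptr, &buffer);
    return buffer;
}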

What people who play the "it's just caching, gawd you're such a noob, man" tune tend not to understand is that this is a very good thing. DX will capitalise on all the VRAM it can to ensure that the next frames can be displayed without a problem.

That is what VRAM is: a buffer, hence the name 'frame buffer'.


A recent example of this is Dying Light, if you run the game at 1440p with the High texture setting. For the most part the game carries on just fine, hovering around 3.8-3.9GB of usage. However, when engaging with enemies the game will stutter, because it isn't able to call upon any more memory without a performance penalty while the necessary space is discarded. This isn't solely down to what the less informed would call "bad porting", but more so because you are working at the upper end of your frame buffer.

So yes, games will cache additional data, and some will work better than others with less VRAM depending on the demand. However, more VRAM is GOOD and less is BAD if you want a seamless experience more often than you would otherwise :). We are increasingly getting games with higher-fidelity textures and graphics, which are becoming more and more difficult to fit into memory. I find the "3GB is enough for such and such" argument tedious!
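For what it's worth, here is a minimal sketch (assuming a DXGI 1.4 adapter on Windows 10; the TrimTextureCache hook is hypothetical, not a real API) of how an engine can watch how close it is sitting to that cap and shrink its own cache before the driver has to start demoting resources mid-fight:

Code:
#include <dxgi1_4.h>
#include <cstdint>
#include <cstdio>

// Hypothetical engine hook (illustration only): a real engine would free
// least-recently-used resources until resident size drops to 'targetBytes'.
static void TrimTextureCache(std::uint64_t targetBytes)
{
    std::printf("trimming texture cache down to %llu MB\n",
                (unsigned long long)(targetBytes >> 20));
}

void CheckVramPressure(IDXGIAdapter3* adapter)
{
    DXGI_QUERY_VIDEO_MEMORY_INFO info = {};
    if (FAILED(adapter->QueryVideoMemoryInfo(
            0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &info)))
        return;

    // 'Budget' is how much this process is currently allowed to keep resident in
    // local (on-card) memory; 'CurrentUsage' is how much it actually has resident.
    const double fill =
        info.Budget ? double(info.CurrentUsage) / double(info.Budget) : 0.0;
    std::printf("VRAM: %llu / %llu MB (%.0f%% of budget)\n",
                (unsigned long long)(info.CurrentUsage >> 20),
                (unsigned long long)(info.Budget >> 20), fill * 100.0);

    // Working right at the top of the frame buffer is where the stutter lives,
    // so trim back to ~80% of budget before the driver has to evict for us.
    if (fill > 0.9)
        TrimTextureCache(info.Budget * 8 / 10);
}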
 
Great post sconey! As someone who has just jumped on the Titan X ship with its mighty, some say ridiculous, 12GB of VRAM, posts like this help as I'm having slight buyer's remorse :-)
 
Good explanations here, and an interesting read.

Does SLI make a difference when the vram limit is reached? I know the memory doesn't stack, but I'm thinking the extra grunt helps clear the frame buffer quicker, and so reduces the frame dip? Or is all that process purely down to memory bandwidth?
 
Good explanations here, and an interesting read.

Does SLI make a difference when the vram limit is reached? I know the memory doesn't stack, but I'm thinking the extra grunt helps clear the frame buffer quicker, and so reduces the frame dip? Or is all that process purely down to memory bandwidth?

No difference with SLI, the same thing happens. It has more to do with memory and PCI-E bandwidth.

Must see videos about VRAM and PCI-E speeds

 
Good explanations here, and an interesting read.

Does SLI make a difference when the vram limit is reached? I know the memory doesn't stack, but I'm thinking the extra grunt helps clear the frame buffer quicker, and so reduces the frame dip? Or is all that process purely down to memory bandwidth?

SLI increases the memory requirement slightly to accommodate AFR; if anything, when you're in the upper limits of the frame buffer it makes the situation worse. More often than not one wouldn't notice the difference, provided there is enough memory.
 
So yes, games will cache additional data, and some will work better than others with less VRAM depending on the demand. However, more VRAM is GOOD and less is BAD if you want a seamless experience more often than you would otherwise :).

Yes, more VRAM is good. Even if you do not need it now, at some point you will, and if the GPU cannot push out frames at a good pace you can always add a second card, but you cannot add more VRAM.
 
As a slight counter-point to 'more memory is good because caching is good' (which is totally valid, by the way): the gains from caching decrease as data is accessed less often, so you may be getting little or nothing from the upper quantities, because you may be caching textures that will never be used again.
Having extra memory will also reduce your maximum memory clock speeds, hence the original Titan had 3GB disabled by serious benchmarkers.

Having said that, basically more is better, assuming the extra money spent on it didn't cause a more significant reduction in components elsewhere.
 
Yes, more VRAM is good. Even if you do not need it now, at some point you will, and if the GPU cannot push out frames at a good pace you can always add a second card, but you cannot add more VRAM.

Until DX12 is released, which will supposedly allow the VRAM of SLI/Crossfire cards to be pooled. So 2 x 980 4GB would have 8GB of VRAM available, if I read it right.
 
http://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_Titan_X/33.html

"Modern games use various memory allocation strategies, and these usually involve loading as much into VRAM as fits even if the texture might never or only rarely be used. Call of Duty: AW seems even worse as it keeps stuffing textures into the memory while you play and not as a level loads, without ever removing anything in the hope that it might need whatever it puts there at some point in the future, which it does not as the FPS on 3 GB cards would otherwise be seriously compromised."

I feel COD: AW might be the exception rather than the rule in how it caches absolutely everything to the extreme, even data that may never be required. No other game I know of uses 10GB of VRAM at 1440p, so I'm not suddenly thinking COD: AW is the best or normal way for games to use VRAM.
 