Why - "Is 'x' amount of VRAM enough" seems like a small problem in the long term for most gamers

Did we really need another thread on VRAM? There was only one way it would go.
The other one is at 208 pages now. Plus, before anyone says that's only about the 3080, it's long, long since lost that subject.

This is not an "is this enough VRAM" thread. This is a thread about why people keep asking this question because the OP thinks it has never been an issue in the past.

I reminded him and everyone else that there have been very unbalanced GPUs in the past with low VRAM. These GPUs hit VRAM limits in many cases before they ran out of GPU grunt. For example, when Dragon Age: Inquisition came out, the 7950 was getting a reasonably playable 40 FPS average and lows of 30ish with medium/high settings, while the GTX 680 was getting 25 FPS (and lows of 5).

So will this impact the 3070, which has plenty of GPU grunt but only 8GB of VRAM, an amount many consider on the cusp of not enough for 4K and even 1440p? Time will tell.
 
This is not an "is this enough VRAM" thread. This is a thread about why people keep asking this question because the OP thinks it has never been an issue in the past.

I reminded him and everyone else that there have been very unbalanced GPUs in the past with low VRAM. These GPUs hit VRAM limits in many cases before they ran out of GPU grunt. For example, when Dragon Age: Inquisition came out, the 7950 was getting a reasonably playable 40 FPS average and lows of 30ish with medium/high settings, while the GTX 680 was getting 25 FPS (and lows of 5).

So will this impact the 3070, which has plenty of GPU grunt but only 8GB of VRAM, an amount many consider on the cusp of not enough for 4K and even 1440p? Time will tell.

So it is another 'is x VRAM enough' thread.
 

In the testing done in this video, it seems the 2GB and 4GB 680 perform pretty much the same.


The average difference between the 4GB and 8GB 5500 XT is about 5 fps.


Also, from what I remember, there were loads of people on here saying back in the day that the 4GB cards/Intel CPUs with HT weren't worth the extra cost over the 2GB cards/Intel CPUs without HT.
 
The 680 (which, again, was meant to be upper/mid range) could not use its full 4GB as it didn't have enough GPU grunt, but 2GB was not enough in newer games at 1440p (or even 1080p) with settings, especially AA, cranked up.
 
I've always been firmly in the grunt-over-VRAM camp; if your card is too slow, no amount of VRAM can compensate for that. VRAM shortfalls can generally be worked around by lowering certain settings. I don't think I've ever had a situation where I've felt screwed by lack of VRAM, but I've definitely been screwed by lack of raw horsepower. That doesn't mean I've never hit a VRAM limitation; it just means whenever I've hit one it's always been easy to resolve.
Putting lots of (slow) memory on a low-end GPU or a renamed outdated model is certainly one selling trick.
But when did this thread become about those cards?
Or do you suddenly consider 2080 Ti/3070 as garbage?

And when GPU power starts to fall behind what's required, it causes a gradual/linear loss of framerate.
That can often be mitigated easily by lowering settings that have little visible effect but a heavy processing cost.
(unless we're talking about cards that were already low end when new)

But when the GPU runs out of VRAM, that causes a hard drop in framerate/judder, or texture flicker/artefacts.
Having to reach out to system RAM simply incurs a heavy bandwidth and latency penalty compared to the card's VRAM.
(asset streaming from an SSD is even worse in bandwidth and latency)
Unless the shortfall is small and only happens occasionally, that's a lot harder to solve without a more noticeable hit to graphics quality.
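
As a rough back-of-the-envelope illustration of that penalty (all the figures below are assumptions: roughly a 3070-class GDDR6 card, a PCIe 4.0 x16 link, a fast NVMe drive and a made-up 500 MB spill):

```python
# Rough, illustrative comparison of where data has to come from once it no
# longer fits in VRAM. All bandwidth figures are approximate assumptions.
POOLS_GB_PER_S = {
    "GDDR6 VRAM (3070-class card)": 448.0,   # 256-bit bus @ 14 Gbps
    "System RAM over PCIe 4.0 x16": 32.0,    # theoretical link maximum
    "PCIe 4.0 NVMe SSD": 7.0,                # best-case sequential reads
}

CHUNK_MB = 500.0  # hypothetical pile of textures that spilled out of VRAM

for pool, gb_per_s in POOLS_GB_PER_S.items():
    ms = CHUNK_MB / (gb_per_s * 1000.0) * 1000.0  # MB / (MB per s) -> s -> ms
    print(f"{pool:30s} ~{ms:5.1f} ms to move {CHUNK_MB:.0f} MB")
```

At 60 FPS the whole frame budget is only about 16.7 ms, so even a modest spill into system RAM or the SSD can eat the entire frame time on its own.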
 
I dispute that running low on VRAM is inherently harder to resolve than running out of horsepower, though. If you lack the fillrate you are screwed; it doesn't matter what settings you choose, the only option is to reduce resolution, which is catastrophic in the modern era of LCD screens (native resolution).

That's not to deny you can have a genuinely severe VRAM shortage, but more fool you if you are, e.g., buying a 4GB card in the modern era.
 
I dispute that running low on VRAM is inherently harder to resolve than running out of horsepower, though. If you lack the fillrate you are screwed; it doesn't matter what settings you choose, the only option is to reduce resolution, which is catastrophic in the modern era of LCD screens (native resolution).

That's not to deny you can have a genuinely severe VRAM shortage, but more fool you if you are, e.g., buying a 4GB card in the modern era.

Nobody is suggesting buying a 4GB GPU in the modern era. This thread is not about specific GPUs but about the concept that VRAM limitations are rare and should not be something most users need to worry about. So the OP is asking why we keep seeing these "is x VRAM enough" questions when it's rarely ever an issue (he alleges).

The OP is correct to a large degree, but there have been a number of occasions when GPUs were poorly balanced with low total VRAM and it became a major issue before they ran out of GPU grunt.

These threads tend to pop up when there are GPUs being sold as mid-to-top tier with what many are worried is precariously low VRAM.
 
TL;DR

What I was getting at is that I think people don't need to worry much about buying graphics cards with 8GB of VRAM this year; you can just upgrade next year or in 2023, likely at little to no extra cost (the caveat here is the price you bought your graphics card for, which this year has been almost entirely decided by whether you got an AIB or reference model).

Maybe this is slightly optimistic, and I know not everyone likes to upgrade their GPU every year or so. Before getting an RTX 3070 FE, I was using an R9 390 bought in 2015. So, I waited 5-6 years and got a card with the same VRAM capacity lol. Would I have bought an RTX 3070 with 12/16GB of VRAM for an extra £50? Probably, but no more than this.

But the resale price of graphics cards is very high, so why not just upgrade every year? The more frequently you do this, generally, the higher the resale price. The rules for buying reference/FE models seem to be 1 per generation at the moment (per household).

Upgrading a graphics card is a piece of p*ss (if you have a half-decent power supply). I think production capacity will improve a bit next year too; it will be interesting to see how switching to 6/5nm GPU dies affects this.
 
My general thought on this is that people in the past have been stung by vRAM limitations and don't want a repeat, which is fair enough. It's a perfectly good question to ask; the answer is complex because of how vRAM usage has changed over the years, how long you intend to keep your video card, what settings you use and a whole host of other factors.

Memory bandwidth is important just as vRAM is, but they both share the same characteristic, which is that they're subservient to the GPU. The GPU is what does the heavy lifting; it's actually doing the calculations which generate the next frame. It will do that as fast as it can, and either you have enough memory bandwidth to feed the GPU so it's not bottlenecked, or you don't. And you either have enough vRAM to service the GPU or you don't. So they suffer from the same problem, which is that you want enough so as not to cause any kind of bottleneck for the GPU to do its job, but you don't want to over-provision because memory costs money. If you over-provision either the speed or size of the memory, all you do is increase the cost to manufacture the card, a cost that is passed on to the consumer as an increased price.
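
To put very rough numbers on "enough bandwidth to feed the GPU" (the per-frame traffic figure below is purely an assumed illustration, not a measurement from any game):

```python
# Back-of-envelope "can the memory feed the GPU?" check.
# The per-frame traffic figure is an assumed illustration, not a measurement.
TARGET_FPS = 144
TRAFFIC_PER_FRAME_GB = 2.0      # assumed texture/buffer reads + render-target writes
CARD_BANDWIDTH_GB_S = 448.0     # e.g. a 256-bit GDDR6 card at 14 Gbps

required = TARGET_FPS * TRAFFIC_PER_FRAME_GB
print(f"Required : {required:.0f} GB/s, Available: {CARD_BANDWIDTH_GB_S:.0f} GB/s")
print("memory bandwidth is the bottleneck" if required > CARD_BANDWIDTH_GB_S
      else "bandwidth is sufficient; the GPU itself is the limit")
```

The point is just that the check is pass/fail: once the required figure fits under what the card provides, extra bandwidth buys nothing, exactly as with vRAM capacity.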

With future proofing and vRAM, these days it's less of a problem than it used to be. Namely because vRAM used to be predominantly used for texture budgets in older games, whereas today much more of the vRAM is used on other effects. Those effects also have more severe performance implications for the GPU. And so as you load up on vRAM usage in modern games you also load up the GPU, and the GPU only has so much raw grunt before it's not going to give you playable frame rates. We have numerous examples where you can get near 10GB of usage on a 3080, but the performance is unplayable at the settings required to do that, and so the point is moot. Again, the goal of the GPU manufacturers is to assign as much vRAM as the GPU can reasonably use and no more; any more is a waste.

Going forward this newer trend is only going to get more acute as we move towards much faster real-time streaming of assets from super fast M.2 drives with DirectStorage; that's going to really keep a lid on how much asset data we need to store in vRAM at any one moment. The faster the I/O to vRAM is, the more we can push vRAM towards holding the effects that are being drawn on screen in the current frame, rather than acting as a kind of dumb cache of textures, a lot of which aren't even in use. That's kinda old school now; in fact the PC for once is actually lagging in this regard, way behind the consoles. Their adoption of this will drive us in that direction very rapidly once developers catch on and the engines start making full use of it.
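
A toy sketch of that "resident working set instead of a dumb cache" idea; this is not any real engine's or DirectStorage's API, just an assumed least-recently-used budget in Python:

```python
from collections import OrderedDict

# Toy sketch of a streamed, budget-limited texture pool: keep only what was
# recently requested resident, and evict the least-recently-used asset when
# the budget is exceeded. Purely illustrative, not a real engine API.

class TexturePool:
    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.resident = OrderedDict()          # name -> size in MB, oldest first

    def request(self, name: str, size_mb: int) -> None:
        """Mark a texture as needed this frame, streaming it in if absent."""
        if name in self.resident:
            self.resident.move_to_end(name)    # refresh: it's in use again
            return
        while self.resident and sum(self.resident.values()) + size_mb > self.budget_mb:
            evicted, _ = self.resident.popitem(last=False)   # drop the LRU asset
            print(f"evict   {evicted}")
        self.resident[name] = size_mb
        print(f"stream  {name} ({size_mb} MB)")

pool = TexturePool(budget_mb=512)
for tex in ["rock_4k", "grass_4k", "sky_4k", "rock_4k", "building_4k"]:
    pool.request(tex, 200)
```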
 
TL;DR

What I was getting at is that I think people don't need to worry much about buying graphics cards with 8GB of VRAM this year; you can just upgrade next year or in 2023, likely at little to no extra cost (the caveat here is the price you bought your graphics card for, which this year has been almost entirely decided by whether you got an AIB or reference model).

Maybe this is slightly optimistic, and I know not everyone likes to upgrade their GPU every year or so. Before getting an RTX 3070 FE, I was using an R9 390 bought in 2015. So, I waited 5-6 years and got a card with the same VRAM capacity lol. Would I have bought an RTX 3070 with 12/16GB of VRAM for an extra £50? Probably, but no more than this.

But the resale price of graphics cards is very high, so why not just upgrade every year? The more frequently you do this, generally, the higher the resale price. The rules for buying reference/FE models seem to be 1 per generation at the moment (per household).

Upgrading a graphics card is a piece of p*ss (if you have a half-decent power supply). I think production capacity will improve a bit next year too; it will be interesting to see how switching to 6/5nm GPU dies affects this.

Pretty much all of that ignores the current supply and demand issue, which does not look to be going away anytime soon. I hope I am wrong, but I bet that both Nvidia and AMD will find a way to keep demand and prices high.

We already see this from Nvidia with the joke 3080 Ti. Instead of building more 3080s to meet demand, Nvidia decided to release a massively overpriced GPU nobody can get.
 
As you load up on vRAM usage in modern games you also load up the GPU, and the GPU only has so much raw grunt before it's not going to give you playable frame rates. We have numerous examples where you can get near 10GB of usage on a 3080, but the performance is unplayable at the settings required to do that, and so the point is moot.
Yes, this is the point I was making about the RE Village benchmark someone cited earlier: you can construct a scenario where a 3060/6700 XT with 12/16GB of VRAM will outperform a 3070 with 8GB. But in doing so you gotta enable RTX and max settings, and you end up not being able to push the high framerate you want anyway. So it would basically suit people with 60Hz monitors.

As for the 3080 Ti, it makes perfect sense to me: they've realised they dropped the ball with the 3080 by making it too cheap, so now they can divert a lot of the silicon that would've gone into 3080s into 3080 Tis instead and make more profit. So I would expect to see better availability on the 3080 Ti than the 3080 in future - why sell for £649 when you can sell for £1049 (acknowledging it uses a bit more memory)?
 
The thing with VRAM limits is that they make absolutely zero difference to any game until you hit the cap. So you get no benefit, unlike increased GPU grunt or memory bandwidth. Once you hit the limit, it is usually just a case of reducing the texture quality from some extreme level down to ultra quality - something you would never notice unless you are pixel peeping a static image at 4K sat 3cm from the screen.
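
As a rough illustration of why dropping one texture-quality notch frees so much memory (sizes assumed, uncompressed RGBA8, ignoring the block compression real engines use):

```python
# Why dropping one texture quality notch frees so much VRAM: halving each
# dimension (i.e. dropping the top mip) quarters the memory for that texture.
# Assumes uncompressed RGBA8 (4 bytes/texel); real engines block-compress.

def texture_mb(side: int, bytes_per_texel: int = 4) -> float:
    # the factor 4/3 approximates the full mip chain below the top level
    return side * side * bytes_per_texel * 4 / 3 / (1024 ** 2)

for side in (4096, 2048, 1024):
    print(f"{side}x{side}: ~{texture_mb(side):.0f} MB per texture")
```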
 
For example, when Dragon Age: Inquisition came out, the 7950 was getting a reasonably playable 40 FPS average and lows of 30ish with medium/high settings, while the GTX 680 was getting 25 FPS (and lows of 5).
Yeah, I wouldn't call 30 FPS reasonable on a PC; in my view that's definitely running out of grunt.
 
You're considering 30 FPS not reasonable, so you're a person that will never hold on to a 7950 for long, and it's really pointless to bring your own perspective into this topic. The topic is about people who like to run these GPUs for a long time. Surely they won't mind "double digit" frames or getting a locked 60 frames or whatever bar some people have set for themselves. For a gamer who will run an old GPU for a long time, you can bet 30 FPS will still be enjoyable and acceptable, as long as it is evenly frame paced and has tight 1% lows. So the choice between an Nvidia and an AMD GPU can make a ton of difference in terms of longevity. The 7950 never had that much grunt to begin with, so the difference between an HD 7950 and a GTX 760 is huge. If that user was tolerant of what the 7950 was bringing to the table, you can bet they will tolerate 30-40 FPS as well. But if they had gone with the 760, they wouldn't even be able to get 30 in some games, and surely that is what we're discussing here.
 
Yeah, I wouldn't call 30 FPS reasonable on a PC; in my view that's definitely running out of grunt.

Try looking at the actual point being made instead of thinking with your narrow, blinkered viewpoint. 30 FPS minimums from a mid-tier GPU are certainly far more reasonable than 5 FPS minimums from a GPU that was sold as a higher-tier item. It is certainly within what many (in fact, I would argue, the majority) consider playable if we look at consoles, Steam surveys etc.
 
I remember there was this same discussion/argument between the GTX 970 3.5GB/GTX 980 4GB and the 390/390X 8GB on whether or not having 8GB matters; fast forward a couple of years and there are games with high-res texture packs which the 390/390X has no problem using, while the 970/980 simply don't even meet the requirements for enabling them.

I think the discussion is not about whether or not X amount is enough, but more about how, once people are paying above a certain amount of money, they expect it to be clear-cut that VRAM will not be the factor that bottlenecks the graphics card's performance, especially with SLI and Crossfire already discontinued. With even older sub-£200 cards from a few years back already having 8GB, it does make people question why something as expensive as the 3070 and the 3080 only have 8GB and 10GB respectively. It's like building a new system now: one could argue that 8GB of system memory would still be "sufficient" for gaming, but nobody is comfortable with less than 16GB in this day and age.
 
I remember there was this same discussion/argument between the GTX 970 3.5GB/GTX 980 4GB and the 390/390X 8GB on whether or not having 8GB matters; fast forward a couple of years and there are games with high-res texture packs which the 390/390X has no problem using, while the 970/980 simply don't even meet the requirements for enabling them.

I think the discussion is not about whether or not X amount is enough, but more about how, once people are paying above a certain amount of money, they expect it to be clear-cut that VRAM will not be the factor that bottlenecks the graphics card's performance, especially with SLI and Crossfire already discontinued. With even older sub-£200 cards from a few years back already having 8GB, it does make people question why something as expensive as the 3070 and the 3080 only have 8GB and 10GB respectively. It's like building a new system now: one could argue that 8GB of system memory would still be "sufficient" for gaming, but nobody is comfortable with less than 16GB in this day and age.

This is a point I have been trying to highlight. These types of threads crop up only when new GPUs have what are seen as precariously low VRAM amounts. We inevitably end up with split opinions because many enthusiasts wrongly assume everyone upgrades their GPU every year.

I would not consider a 3070 a viable option if I were going to keep it for 3-5 years. It is why I bought a 6700 XT for my son's PC, as I intend to keep it for at least 3 years. Contrary to what some think, future-proofing a GPU is a very common thing. I ruled the 3070 out not because I think its VRAM will be a problem, but because I feel it might be and I prefer not to take the risk.
 

In the testing done in this video, it seems the 2GB and 4GB 680 perform pretty much the same.

Also, from what I remember, there were loads of people on here saying back in the day that the 4GB cards/Intel CPUs with HT weren't worth the extra cost over the 2GB cards/Intel CPUs without HT.

GTX 670s and 680s did perform the same between the 2GB and 4GB versions until you started using them at high resolutions, where the 2GB cards' FPS fell off a cliff.

I still have some of these 2GB cards (690s) and these days all they are good for is reading emails and browsing the net.
 