Is 16GB of VRAM the Standard for High-End Graphics Cards?

The 80 is not that much faster than the 70; it's literally 1 or 2 higher in-game settings, or the same reduced settings in FC6 but higher fps.

Considering it took 433 days to get a £649 3080 (there were no drops for about 3 months before I got it, dribs and drabs of drops before that, and others still can't get a sniff of them), I did get a launch-price AIB 3070 to tide me over, which was the right call...

Meanwhile, despite Nv adding another 0 to the transaction to get into the 12GB comfort zone (which tells its own story), mid-range 80 owners are 100% adamant their budget GPUs are bulletproof. Hint: that's why there are 3, and nearly a 4th, better GPUs than the 10GB 80.

The 70/80 are damn fine cards at MSRP; that still doesn't detract from the fact they can run out of VRAM.

I think the 3080 is 20-25% faster than the 3070. That's a decent jump.

As I've said many times already, I can put together a clone of the original Pong that claims you don't have enough VRAM even with a 3090. This is essentially what AMD's sponsorship with Ubisoft did. What puzzles me is why so many people aren't directing frustration at AMD, when in the past I don't remember people being so happy with Nvidia for their overuse of tessellation. Perhaps poor-quality clickbait YouTubers are to blame, or perhaps it's just down to people not using their brains; probably a mix of both.
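
To make that concrete: reserving VRAM is trivial and says nothing about what a game actually needs. A minimal sketch (assuming a CUDA-capable GPU and the PyTorch library, neither of which the post mentions; a real "Pong" clone could do the same through any graphics API):

```python
# Minimal sketch: hoard VRAM in 256 MB chunks until allocation fails.
# Assumes a CUDA-capable GPU and PyTorch purely for illustration; a
# "Pong" clone could do the same through Vulkan, D3D12 or OpenGL.
import torch

hoard = []
chunk = 256 * 1024 * 1024 // 4            # 256 MB worth of float32 elements
try:
    while True:
        hoard.append(torch.empty(chunk, device="cuda"))
except RuntimeError:                      # CUDA raises this when VRAM runs out
    pass

print(f"'Pong' now holds {torch.cuda.memory_allocated() / 2**30:.1f} GB of VRAM")
```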

A 12GB 3080? Nvidia are just pushing a new product to keep sales up, as many would have held off for Lovelace. I imagine yields have improved, meaning they don't need to fuse off so much of the GPU. From my own experience with a 10GB 3080 and games with maxed-out settings, I have to say 10GB is plenty. If I had a 3070 I would limit my resolution and/or settings, while enjoying the money I saved!
 
For some reason the most vocal members are adamant that all purchases of current-gen cards were at Founders Edition prices. I think you would widen the audience if you took in the fact that many users bought AIB versions and paid well over the odds for the same hardware; however, out of embarrassment or voyeuristic tendencies, most won't participate in these kinds of debates.

People will have been spending over a grand for a regular 3080, and now, some months later, we have the 3080 Ti and 12GB iterations of the SKU, which are also well over a grand. So, going back to the previous sentence, this £649 benchmark is a best-case scenario. I would love to see a poll that accurately presented what people actually paid.
 
I think the 3080 is 20-25% faster than the 3070. That's a decent jump.

As I've said many times already, I can put together a clone of the original Pong that claims you don't have enough VRAM even with a 3090. This is essentially what AMD's sponsorship with Ubisoft did. What puzzles me is why so many people aren't directing frustration at AMD, when in the past I don't remember people being so happy with Nvidia for their overuse of tessellation. Perhaps poor-quality clickbait YouTubers are to blame, or perhaps it's just down to people not using their brains; probably a mix of both.

A 12GB 3080? Nvidia are just pushing a new product to keep sales up, as many would have held off for Lovelace. I imagine yields have improved, meaning they don't need to fuse off so much of the GPU. From my own experience with a 10GB 3080 and games with maxed-out settings, I have to say 10GB is plenty. If I had a 3070 I would limit my resolution and/or settings, while enjoying the money I saved!

Yeah, not sure how one can say the 3080 is "not that much faster"; it easily beats a 3070 :confused:

[benchmark charts comparing 3070 and 3080 fps]

Gap is even bigger if you factor in RT....

Very odd, this place, and how certain people have a serious hatred towards the 3080 and/or Nvidia... Is it because some people ended up having to pay over double the price to secure a GPU which only offers a 15-20% perf increase, and/or having to wait so long to get an FE? Like I said before, it's always the same old on here, swings and roundabouts:

- Nvidia tessellation fiasco
- Fury X 4GB HBM fiasco
- Fermi 480 power and temp fiasco

etc.

Essentially: Nvidia bad, AMD good.

I have also tried educating some on the VRAM stuff and even pointed out cases where the 3080 does run out of VRAM; however, in said scenarios the system and/or game manages VRAM/memory allocation properly, i.e. you don't get a case where fps drops to 1 and stays there regardless... :o

People also don't understand why Nvidia like to saturate the market with various cards: it gives "everyone" something they want, the same way Samsung have so many different models of their mobile phones.

PS: this thread I created shows the stats quite well for FE and AIB owners:

https://forums.overclockers.co.uk/t...eference-or-aib-model.18944597/#post-35335054

Like I and others said in that thread, at least with Nvidia we have the choice of being able to get a card for MSRP; with AMD cards there's "zero" chance of that. But nope, remember: Nvidia bad, AMD good :cry:
 
Your thread would actually be biased, since it asks people on a computer forum, and I don't think it is a good indicator of the market as a whole.

No doubt that is the case, but it was mainly to prove to all the people on this sub-forum that it is very "possible" to get an FE; you had a few people making claims like:

- "nvidia only sell to miners", even though all the mining farms you see have AIB cards, not FE cards...
- "no one can buy a FE card"
- "most people have to wait a year etc. to get a FE card"
- "FE/MSRP is fake"

etc.

That thread says otherwise.
 
Your thread would actually be biased, since it asks people on a computer forum, and I don't think it is a good indicator of the market as a whole.

Correct. As I also said, most would abstain or not declare what they paid because, let's face it, if you didn't get a unicorn pre-order honoured or a Founders Edition, you likely paid too much. The same can be said for the 6000-series AMD cards.
 
I think the 3080 is 20-25% faster than the 3070. That's a decent jump.

As I've said many times already, I can put together a clone of the original Pong that claims you don't have enough VRAM even with a 3090. This is essentially what AMD's sponsorship with Ubisoft did. What puzzles me is why so many people aren't directing frustration at AMD, when in the past I don't remember people being so happy with Nvidia for their overuse of tessellation. Perhaps poor-quality clickbait YouTubers are to blame, or perhaps it's just down to people not using their brains; probably a mix of both.

A 12GB 3080? Nvidia are just pushing a new product to keep sales up, as many would have held off for Lovelace. I imagine yields have improved, meaning they don't need to fuse off so much of the GPU. From my own experience with a 10GB 3080 and games with maxed-out settings, I have to say 10GB is plenty. If I had a 3070 I would limit my resolution and/or settings, while enjoying the money I saved!

So, the 20-25% gets you higher fps, or the same fps with a couple of higher settings; that's what I said. For example, to match the 80 at 60fps, moving one RTX setting and one non-RTX setting from high to medium takes you to the same fps.

It's easier to notice when you have both for comparison; being in the position of having both cards, that's not much of a jump in my book. Slightly better sums it up my end.

You and everyone else are going off charts etc., so it's maybe harder to understand that just two reduced settings enabling fps parity isn't a massive difference, until that reality is sitting right there to see first hand.

A 70 to an 80 Ti, that'd be what I'd call a decent, but more importantly a noticeable, jump imo.
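
For concreteness, here is the back-of-envelope arithmetic behind that parity claim, using the 20-25% figure quoted above (illustrative numbers, not benchmarks):

```python
# If a 3080 is 20-25% faster than a 3070, a scene the 3080 renders at
# 60 fps lands roughly here on the 3070 at identical settings.
target_fps = 60.0
for pct in (20, 25):
    print(f"3080 +{pct}% -> 3070 ~{target_fps / (1 + pct / 100):.0f} fps")
# ~50 and ~48 fps: a gap small enough that dropping a couple of settings
# from high to medium can plausibly claw back 60 fps parity.
```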



Moaning at AMD for adding high textures that bust 10GB, never mind 8GB, while complaining AMD don't want heavier RT effects because they play to their strengths, like Nv do with less VRAM use and heavier RT'ing: I get the sentiment behind it, but it's double standards, mate.



You're waiting/waited for Lovelace but decide, no, I'm going to throw more than a grand at a 12GB Nv product? Don't think so.



The 3070 was running 1440p most of the time on settings within its comfort zone; nothing but good things to say about it other than the stingy VRAM. Just like the 3080: if you want future-proofing, it's over a grand.

@gpuerrilla

The budget-minded user that managed to actually get their hands on a mid-range 70/80 at MSRP got great vfm; non-MSRP of any brand is an absolute bump. Idk if you saw Gibbo's perspective on FEs:


NVIDIA is a global company that has bigger markets it sells into than the UK. FE cards are only sold in some markets, whereas AIB cards are sold in all markets, particularly by the bigger brands. It is probably safe to say that a company like Asus alone has sold more cards than FE cards sold; then factor in all the other partners: MSI, Gigabyte, Zotac, Palit, Inno3D, Galax, EVGA, Manli, PNY, Gainward, plus the Far East partners.

https://forums.overclockers.co.uk/posts/35393317/




Tl;dr: I think we are witnessing mid-range Nv users thinking they are running high-end GPUs on a budget, forgetting/ignoring that their GPUs can't handle settings that the real heavy-hitting cards can.
 
@gpuerrilla

The budget-minded user that managed to actually get their hands on a mid-range 70/80 at MSRP got great vfm; non-MSRP of any brand is an absolute bump. Idk if you saw Gibbo's perspective on FEs:

https://forums.overclockers.co.uk/posts/35393317/

Tl;dr: I think we are witnessing mid-range Nv users thinking they are running high-end GPUs on a budget, forgetting/ignoring that their GPUs can't handle settings that the real heavy-hitting cards can.

I have no beef with anyone buying a GPU per se. What I do notice, though, is the soap boxes. We only see confident posts from users that managed to snag a 3080 for RRP, or very close to it, justifying their purchases (or maybe just gloating?) by pointing out that everyone else paid over the odds for theirs. Ironically, some of these folk then sell their flagship product to pocket the profit. Some then buy a card with more VRAM (see willhub above), or they get a different one which won't teeter on the issue (see TNA). I guess it doesn't really matter, as most enthusiasts on here will be shuffling soon. But to stick to the OP's topic: to some it doesn't matter, and others have either avoided it by not buying, or have sold on the regular 3080, so it won't be a problem to worry about!
 
The 80 is not that much faster than the 70; it's literally 1 or 2 higher in-game settings, or the same reduced settings in FC6 but higher fps.

Considering it took 433 days to get a £649 3080 (there were no drops for about 3 months before I got it, dribs and drabs of drops before that, and others still can't get a sniff of them), I did get a launch-price AIB 3070 to tide me over, which was the right call...

Meanwhile, despite Nv adding another 0 to the transaction to get into the 12GB comfort zone (which tells its own story), mid-range 80 owners are 100% adamant their budget GPUs are bulletproof. Hint: that's why there are 3, and nearly a 4th, better GPUs than the 10GB 80.

The 70/80 are damn fine cards at MSRP; that still doesn't detract from the fact they can run out of VRAM.
Those 3 or 4 GPUs above the 3080 are at best only 15% faster, though, and only 10% with an OC, while the 3080 is 25-30% faster than a 3070.

No one is saying the 3080 is bulletproof, but when the 3080 struggles in the majority of games, so will the Ti/90. There will always be a couple of outlier games in the meantime, like FC6, but is it really worth spending £600 extra on a GPU with more VRAM just to play that handful of games at max settings when the rest run absolutely fine?

Tl;dr: I think we are witnessing mid-range Nv users thinking they are running high-end GPUs on a budget, forgetting/ignoring that their GPUs can't handle settings that the real heavy-hitting cards can.

High-end performance at a reasonable price is nothing to be sniffed at nowadays; I certainly wouldn't pay £400 more for 10% performance and 2GB of VRAM.
 
I have no beef with anyone buying a GPU per se. What I do notice, though, is the soap boxes. We only see confident posts from users that managed to snag a 3080 for RRP, or very close to it, justifying their purchases (or maybe just gloating?) by pointing out that everyone else paid over the odds for theirs. Ironically, some of these folk then sell their flagship product to pocket the profit. Some then buy a card with more VRAM (see willhub above), or they get a different one which won't teeter on the issue (see TNA). I guess it doesn't really matter, as most enthusiasts on here will be shuffling soon. But to stick to the OP's topic: to some it doesn't matter, and others have either avoided it by not buying, or have sold on the regular 3080, so it won't be a problem to worry about!
Pretty much. And the gloating, or the "easy to get" mantra: from experience, it was well frustrating trying to get my 80. To think that folk in here are actually under the impression that I went 'nah, I'll wait a year and a half to snag a 3080' is bonkers.

Those 3 or 4 GPUs above the 3080 are at best only 15% faster, though, and only 10% with an OC, while the 3080 is 25-30% faster than a 3070.

No one is saying the 3080 is bulletproof, but when the 3080 struggles in the majority of games, so will the Ti/90. There will always be a couple of outlier games in the meantime, like FC6, but is it really worth spending £600 extra on a GPU with more VRAM just to play that handful of games at max settings when the rest run absolutely fine?



High-end performance at a reasonable price is nothing to be sniffed at nowadays; I certainly wouldn't pay £400 more for 10% performance and 2GB of VRAM.
I agree with most of that, apart from the 30% between the 70 and 80; 25% is the best-case scenario, but not much to disagree over.

It's diminishing returns getting out of the mid range; this is the way it's been with Nv for generations.

The £400+ premium, imo, other than the 15%, is Nv getting you to part-pay for longevity.

Higher VRAM might or might not get those cards out of jail going forwards; how anyone can be bullish that it won't matter, when AMD will release more games requiring >10GB, I literally can't get my head round.

Moaning at AMD doesn't stop these cards running out of VRAM while having plenty of grunt to run the better textures, and if Nv releases a 4070 with >10GB, the 70 and 80 are toast.

Going purely Nv head to head, Nv will 'break' them themselves, nothing to do with AMD, so that you want (not need, want) to upgrade.

Then it's back to availability: is the £400+ worth it for longevity? Only those purchasing them can answer that, as they don't all upgrade each gen.

It's bought to last for some; then take that thought down the tiers, where 10GB and 8GB are meant to last. Not everyone upgrades.
 
So, the 20-25% gets you higher fps, or the same fps with a couple of higher settings; that's what I said. For example, to match the 80 at 60fps, moving one RTX setting and one non-RTX setting from high to medium takes you to the same fps.

It's easier to notice when you have both for comparison; being in the position of having both cards, that's not much of a jump in my book. Slightly better sums it up my end.

You and everyone else are going off charts etc., so it's maybe harder to understand that just two reduced settings enabling fps parity isn't a massive difference, until that reality is sitting right there to see first hand.

A 70 to an 80 Ti, that'd be what I'd call a decent, but more importantly a noticeable, jump imo.

It's the difference between 100 and 120-125 fps, for example. Almost a generational leap. I'm sure you would notice if we snuck that not-so-noticeable difference out of your pay packet :p

Moaning at AMD for adding high textures that bust 10GB, never mind 8GB, while complaining AMD don't want heavier RT effects because they play to their strengths, like Nv do with less VRAM use and heavier RT'ing: I get the sentiment behind it, but it's double standards, mate.

Again, you misunderstand. I'm saying AMD made use of more VRAM rather than streaming textures in properly. AMD was aware that Nvidia's top consumer card had 10GB and therefore targeted a VRAM usage greater than that in both Godfall and FC6, in the hope they could win some market share. Do you think it was a simple coincidence that 12GB was required, while at the same time AMD were pushing 12/16GB cards?

You're waiting/waited for Lovelace but decide, no, I'm going to throw more than a grand at a 12GB Nv product? Don't think so.

People don't see a 12GB Nvidia product; they see a new graphics card at the top of the performance charts with a greater feature set than the other guys'. @Johnny Silverhand has his new 12GB Nv product.

The 3070 was running 1440p most of the time on settings within its comfort zone; nothing but good things to say about it other than the stingy VRAM. Just like the 3080: if you want future-proofing, it's over a grand.

I know my 3080 could do with a lot more horsepower at 1440p when I turn on RT.
 
I agree with most of that, apart from the 30% between the 70 and 80; 25% is the best-case scenario, but not much to disagree over.

The issue I have with these back-of-fag-packet percentages is that they vary from person to person. When the cards launched, it was 'the 3090 is only 10% better than the 3080'. Then, as games got benched or released, the gap could be as large as 30%, depending on your resolution. Then the 3080 Ti came to the fold and suddenly that is only 10% better than the 3080, which is impossible, as that used to be the 3090, which still beats a 3080 Ti. So again, it is off-the-cuff statistics, supremely generalised, that make it sound like there is nothing in it. The same performance shenanigans can equally be applied to the 6800 XT vs the 3080 when these percentages of "only" get used.

When it is within an actual game and represented using fps, for example, that can be the difference between a smooth >60 and the low 50s.
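
That point can be made concrete: the same percentage gap reads very differently depending on the fps baseline. A quick sketch using the round figures thrown around in this thread (illustrative, not measured):

```python
# The same "only X% faster" claim at different fps baselines.
# Percentages are the rough figures quoted in this thread, not benchmarks.
for base in (50, 60, 100):           # fps the slower card might produce
    for pct in (10, 15, 25):         # "only 10%" (Ti/90) up to ~25% (70 -> 80)
        print(f"{base:3d} fps +{pct:2d}% = {base * (1 + pct / 100):5.1f} fps")
# 50 fps +25% = 62.5 fps is the difference between the low 50s and a
# smooth 60+, while 100 fps +10% = 110 fps is much harder to feel.
```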
 
It's a good thing there are many people with various cards on the internet too... 20% is a very worthwhile increase when talking about the 40-70 fps range...





Also, a good video showing all the 3080 models here:


Look at that whole 2GB extra of VRAM going to town.....

[benchmark screenshots]

;) :cry:

Definitely not a worthwhile/noticeable difference between the 3070 and 3080... :cry:

The difference going from a 3070 to a 3080 is far more worthwhile than going from a 3080 to a 3090 in today's games... especially when you look at the fps range: going from something like 40/50 fps (3070 perf) to 60/70+ fps (3080 perf), as opposed to going from, say, 70 fps (3080 perf) to 85/90 fps (3090 perf). Of course, as new, more demanding games come out, a 3090 will last longer than a 3080, but by the time you get to unplayable fps, <60/70 fps (subjective), new, better and cheaper cards will be out....
 
Again, you misunderstand. I'm saying AMD made use of more VRAM rather than streaming textures in properly. AMD was aware that Nvidia's top consumer card had 10GB and therefore targeted a VRAM usage greater than that in both Godfall and FC6, in the hope they could win some market share. Do you think it was a simple coincidence that 12GB was required, while at the same time AMD were pushing 12/16GB cards?

I don't claim to know about writing games, but isn't it possible that AMD and the game developers making use of more VRAM is a good thing? With the use of SAM, the PC can quickly access the information in VRAM rather than load it from elsewhere, e.g. the page file or a solid state drive. That AMD spent time thinking about increasing VRAM and using SAM is hopefully going to give more performance in an energy-efficient way. If Nvidia had come up with SAM as well as DLSS, I am sure we would be hearing more about it.

From this video it seems that by loading assets into VRAM and properly using SAM, gamers can get a nice benefit in at least some games: https://youtu.be/5_tPxv8DXd4?t=188

Certainly, though, I wish that game developers would design games with both high-res texture packs and heavy ray tracing, with the option for gamers to properly scale each to fit their particular card.


https://youtu.be/Kmp_rW4cl38
 
I don't claim to know about writing games, but isn't it possible that AMD and the game developers making use of more VRAM is a good thing? With the use of SAM, the PC can quickly access the information in VRAM rather than load it from elsewhere, e.g. the page file or a solid state drive. That AMD spent time thinking about increasing VRAM and using SAM is hopefully going to give more performance in an energy-efficient way. If Nvidia had come up with SAM as well as DLSS, I am sure we would be hearing more about it.

From this video it seems that by loading assets into VRAM and properly using SAM, gamers can get a nice benefit in at least some games: https://youtu.be/5_tPxv8DXd4?t=188

Certainly, though, I wish that game developers would design games with both high-res texture packs and heavy ray tracing, with the option for gamers to properly scale each to fit their particular card.

I think you might be thinking more of DirectStorage.

SAM, aka Resizable BAR, just removes the temporary swap system/256MB buffer:

All gaming PCs produce an on-screen image by way of the CPU processing data – textures, shaders and the like – from the graphics card’s frame buffer. Usually the CPU can only access this buffer in 256MB read blocks, which obviously isn’t very much when modern GPUs regularly have 8GB of video memory or much, much more.

Resizable BAR essentially makes the entirety of the graphics frame buffer accessible to the CPU at once; where it could once sip, it now guzzles. The idea is that once textures, shaders and geometry are loading in faster, games should run faster with higher frame rates.

If any of that sounds familiar, it’s probably because AMD beat Nvidia to it with Smart Access Memory (SAM) in 2020. But branding aside, SAM and Resizable BAR are one and the same: it’s not an AMD or Nvidia technology, but one built into the PCIe interface, and that’s been lurking unused in the interface’s specs since PCIe 3.0.

SAM sees more of a benefit overall since AMD have pretty much full control over the BIOS, chipset drivers and interface, and obviously the GPU drivers work in conjunction with that.
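
To make the 256MB window visible: on Linux, the GPU's PCI BAR sizes can be read from sysfs, and the VRAM aperture shows as either the legacy 256MB or something close to the full VRAM size. A rough sketch, assuming an NVIDIA-style layout where BAR1 is the VRAM aperture (an assumption on my part, not something from this thread, and not a supported tool):

```python
# Rough Resizable BAR check on Linux by reading PCI BAR sizes from sysfs.
# Assumption: an NVIDIA-style layout where BAR1 is the VRAM aperture; a
# 256 MB aperture means the legacy window, while anything close to the
# full VRAM size means the BAR has been resized.
from pathlib import Path

def region_sizes(dev: Path) -> list[int]:
    # Each line of the sysfs 'resource' file is "start end flags" in hex.
    sizes = []
    for line in (dev / "resource").read_text().splitlines():
        start, end, _flags = (int(field, 16) for field in line.split())
        sizes.append(end - start + 1 if end else 0)
    return sizes

for dev in Path("/sys/bus/pci/devices").iterdir():
    if (dev / "class").read_text().startswith("0x0300"):   # VGA controller
        aperture = region_sizes(dev)[1]                    # BAR1 (assumed)
        state = "resized" if aperture > 256 * 2**20 else "legacy 256 MB window"
        print(f"{dev.name}: aperture {aperture / 2**30:.2f} GiB ({state})")
```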
 
I don't claim to know about writing games, but isn't it possible that AMD and the game developers making use of more VRAM is a good thing? With the use of SAM, the PC can quickly access the information in VRAM rather than load it from elsewhere, e.g. the page file or a solid state drive. That AMD spent time thinking about increasing VRAM and using SAM is hopefully going to give more performance in an energy-efficient way. If Nvidia had come up with SAM as well as DLSS, I am sure we would be hearing more about it.

From this video it seems that by loading assets into VRAM and properly using SAM, gamers can get a nice benefit in at least some games: https://youtu.be/5_tPxv8DXd4?t=188

Certainly, though, I wish that game developers would design games with both high-res texture packs and heavy ray tracing, with the option for gamers to properly scale each to fit their particular card.


https://youtu.be/Kmp_rW4cl38

Just to get you thinking: FC6's texture pack is 50GB in size. The VRAM requirement is 12GB.
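
That gap is exactly what texture streaming is for: the pack only has to fit on disk, while the engine keeps a much smaller working set resident in VRAM. A toy sketch of the idea (hypothetical sizes and names; real engines stream individual mip levels rather than whole textures):

```python
# Toy texture streamer: a 50 GB pack played through a 12 GB VRAM budget
# by keeping only the least-recently-used working set resident.
# Sizes and names are illustrative; real engines stream per mip level.
from collections import OrderedDict

GB = 2**30

class TextureStreamer:
    def __init__(self, budget: int):
        self.budget = budget
        self.resident = OrderedDict()            # tex id -> bytes, LRU order
        self.used = 0

    def request(self, tex_id: str, size: int):
        if tex_id in self.resident:              # already in VRAM: touch it
            self.resident.move_to_end(tex_id)
            return
        while self.used + size > self.budget:    # evict LRU until it fits
            _, freed = self.resident.popitem(last=False)
            self.used -= freed
        self.resident[tex_id] = size             # "upload" from the 50 GB pack
        self.used += size

streamer = TextureStreamer(12 * GB)
for first in (0, 4, 8):                          # camera moving through a level
    for i in range(first, first + 8):
        streamer.request(f"tex_{i:02d}", 2 * GB)
print(f"resident: {streamer.used / GB:.0f} GB of the 50 GB pack")
```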
 