
10GB VRAM enough for the 3080? Discuss..

My bet is that it'll have more VRAM, in line with what the GPU can realistically make use of; that's how VRAM has always been allocated on cards since the start. You put an appropriate amount on the card to serve the GPU's needs, which depends on how fast and capable the GPU is.
The same things were said about the 1060. Everyone argued that realistically it would never make use of more than 4 GB. Same for the 8 GB RX 580.

They still perform well enough for a lot of people, but most importantly their VRAM is at a sweet spot (unlike what people believed back then). A 1060's VRAM will be maxed out by pretty much every AAA game as of 2019-2021, even if you put everything on medium. With both GPUs you can push Ultra textures regardless of the other settings and have good-looking games. Instead, you could have bought the 4 GB RX 580 variant or the 3 GB 1060 variant on the logic that "these chips will never make use of 6-8 GB anyway" and made huge sacrifices on texture quality just to keep games barely playable, with inconsistent frametimes, because games require 5.5+ GB of VRAM even at 1080p as of 2020.

The funny thing is, these RTX GPUs have DLSS in their arsenal. You can have an RTX 3090 store 8K-16K textures, render at 1440p or 4K, upscale with DLSS and get gorgeous graphics. So yes, any RTX GPU can make use of 16 GB. Any GPU can always make use of more VRAM. This was the case for the GTX 770: look how well the 4 GB 770 still performs and how much worse the 2 GB 770 looks. Do you think Nvidia really believed 2 GB was all the 770 could make use of?
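
To put rough numbers on the texture side (a back-of-the-envelope sketch of my own, not from the post above: it assumes BC7 compression at 1 byte per texel and a full mip chain adding roughly a third):

Code:
# Rough VRAM cost of a single colour texture, assuming BC7 compression
# (1 byte per texel) and a full mip chain (~1.33x). Real materials stack
# several such maps (albedo, normal, roughness...), so multiply accordingly.
def texture_vram_mb(side_px, bytes_per_texel=1.0, mips=True):
    base = side_px * side_px * bytes_per_texel
    total = base * 4 / 3 if mips else base
    return total / (1024 ** 2)

for side in (2048, 4096, 8192, 16384):
    print(f"{side} x {side}: ~{texture_vram_mb(side):.0f} MB")
# 2048 x 2048: ~5 MB, 4096 x 4096: ~21 MB,
# 8192 x 8192: ~85 MB, 16384 x 16384: ~341 MB

Even a 16K map is only a few hundred MB on its own; the real pressure comes from how many unique high-resolution materials a scene streams at once.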

As I've said countless times, there's no game right now that ships truly high-quality, generation-defining texture packs. But there will be, because some developers will want to make use of higher-VRAM GPUs. And if those textures really do change the landscape of a game's graphical fidelity, you will be locked out of them not because of your chip's power, but because of its VRAM.

An RTX 3070 has the power to push 4K 70 fps with RT enabled in RE: Village. Yet it can't; instead it drops into the 40s. This alone proves that 8 GB is not the right amount for the power level of a 3070/2080 Ti. 10 GB would be better, 12 GB would be ideal. The same goes for the 3080: it should simply have been 16 GB.
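
For a rough sense of why overflowing VRAM costs that much (my own illustration with approximate bandwidth figures, not anything from the post): assets that spill into system RAM have to come back over PCIe, which is an order of magnitude slower than on-card GDDR6.

Code:
# Frame-time budgets and the penalty for spilling assets out of VRAM.
# Bandwidth figures are approximate: ~448 GB/s of GDDR6 on an RTX 3070
# versus ~32 GB/s over PCIe 4.0 x16.
GDDR6_GBPS = 448
PCIE4_X16_GBPS = 32

for fps in (70, 40):
    print(f"{fps} fps leaves {1000 / fps:.1f} ms per frame")

print(f"VRAM vs PCIe bandwidth: ~{GDDR6_GBPS / PCIE4_X16_GBPS:.0f}x faster")
# Even a small working set served at roughly 1/14th the bandwidth is enough
# to push a 14 ms frame towards the 25 ms (40 fps) range.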
 
Planned obsolescence to get everyone upgrading in a couple of years. AMD users might win this one in the long run; for anyone that upgrades every year or two it shouldn't make any difference, I guess.
 


Precisely so.

Fast forward 1-2 years from now and the 3080 will be getting rampaged by current higher-VRAM cards, and AMD's cards are perfectly poised to do just that.

But I think most people know that, so most RTX 3080 owners are fully aware that they will have to upgrade again in 12 months. Even if they try to deny it on these forums, they know the truth.
 
This looks a bit like the case with Ryzen, though. Every time people bought a high-end Zen, they probably thought "this CPU is going to take me through the generation!"

We know Zen+ was a disaster. Zen 2 was marketed as a success, but not in my eyes (a 10-15% uplift over Zen+, minor latency improvements, but still the dreaded CCX design), and sure enough it still got beaten in games by the 3-4 year old 8700K.

https://youtu.be/Em_w50YOyms?t=292

Just look at this. That is literally a Coffee Lake refresh chip running at freaking 4 GHz against the OC'd 3600 (the 3600XT).

Supposedly Zen 2 had "higher IPC" than Coffee Lake, yet there was still the weird "Infinity Fabric" gimmick behind it.

That shows how good "Zen 2" really was. Those chips are simply not good enough for Ampere/RDNA2, which is a bit shameful; I'm pretty sure a 4.8 GHz 8700K would feed an Ampere or RDNA2 GPU just as well as a 3700X. This is evidenced by the number of people upgrading from Zen 2 to Zen 3: the required performance simply isn't there... and never was to begin with. The 8700K, however, already provided Zen 2 levels of performance back in 2017.

In the end, Zen 3 has truly changed the game. I believe Zen 3 is really worth it for what it brings, but anything before it feels like funding for AMD to help them make actual, proper gaming CPUs. XD
 

It will largely depend on what the card is used for. I'll definitely get a couple of years out of mine for 1440p gaming. People who use it for 4K might struggle, but it's a budget 4K card, so that's to be expected.

Once games start using newer engines like UE5 we might start seeing VRAM limitations, but by that point the current cards will be pretty useless anyway.
 

UE5 seems to be doing quite well with its VRAM usage, and that's before optimisation. What do you see using more VRAM in the next 1-2 years?

What do you consider the average amount of VRAM to be today, and why would it explode in the next 1-2 years? There are a lot of people still using 1060-level hardware according to Steam's hardware survey.

I do expect to upgrade to Lovelace/RDNA3 due to a lack of GPU grunt, mostly centred on ray tracing. That also means 3090 owners will have to upgrade as well, which is why I settled for a £720 3080 ;)

Sadly, RDNA2 cards are already lacking today due to the budgets set by the consoles. Lumen in software mode looks good, but it can't touch hardware ray tracing. And of course there is the Tensor-backed DLSS that RDNA2 still has to compete with as we approach eight months since launch.
 
The last three comments (oguzsoso, Cereal and Grim 5) are spot on. Only fanboys and the eternally hopeful could argue, tbh.

This is not a generation of upward jumps like Pascal was. RT/DLSS are great if they suit your needs, but that flagship 3080... it loses more from 10GB than it gains from those, certainly over time. Lower down the stack isn't as doubtful for the long term if those cards are bought for the resolutions they were hyped/meant for (though still, the 3080 should've been 12GB+ and the 3070 at least 10GB, imo).
The next generation up should (if sense prevails) do better on both VRAM relative to expectations and RT/DLSS, and most importantly see more widespread support for them too. That's when I'd be interested in RT: when there's enough ubiquity for it to be indispensable across the average games library (as opposed to only some of the big-hit, best-selling AAA games, as it is right now). Ideally AMD will be trading blows even more closely by then anyway. We know they can do it on raw horsepower now; the next step is closing the gap in RT/DLSS... a harder trick to pull off, but I think they can if recent news is anything to go by. Again, my 1070 lasted the course (60+ fps ultra at 1080 ultrawide, then 60+ fps ultra at 1080p) for 56 months very well, tbh; much better than I believe a 3080 can or will for even half as long.
 
Or 7 months. :p
Oi. I sold it because, one, I'd completed all the graphically demanding games that I wanted to play, and two, I got £1,615 for it ;)

The next game coming out that I want to play is Dying Light 2, and that's around six months from now. If I fancy it I can just monitor the bot like I did last time and get another one for £649. But I likely won't bother.
 

IF is not a gimmick - it's basically the high-bandwidth connection between the various parts of the CPU or GPU. In Zen 3 you have IF links which connect the chiplets to the I/O die, and it's used in Epyc to connect all the separate chiplets to each other. It's also used internally in AMD GPUs:
https://www.anandtech.com/show/1559...hitecture-connecting-everything-to-everything

Every major CPU/GPU maker has investments in various connectivity technologies - IF is just the one AMD is using. Intel had Omni-Path and CXL, Nvidia has NVLink, Fujitsu has Tofu, etc.

Also, a slight issue with your analysis: the Xbox Series X, Xbox Series S and PS5 all use the equivalent of the CPU section of a Renoir APU. That means 8MB of L3 cache and a dual-CCX design. So as time progresses, literally all multi-platform games are going to be developed on dev kits using Zen 2 CPUs, and any developer who wants to get the most out of a console will have to get over the CCX-to-CCX latency issues.
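
One common way to get over that latency is to keep tightly coupled threads on a single CCX so they share an L3 slice. A minimal Linux-only sketch of the idea follows; the core IDs are an assumption for a Zen 2 part, so check your own layout with lscpu -e before relying on them.

Code:
# Minimal sketch: pin the process (and any threads it spawns afterwards) to the
# cores of one CCX so cache-sharing threads never pay the cross-CCX hop.
# The CPU IDs below are an assumption - verify your topology with `lscpu -e`.
import os
import threading

CCX0_CPUS = {0, 1, 2, 6, 7, 8}  # assumed: CCX0 cores plus their SMT siblings

def pin_to_ccx0():
    os.sched_setaffinity(0, CCX0_CPUS)  # 0 = the calling process

def worker():
    print("worker restricted to CPUs:", sorted(os.sched_getaffinity(0)))

if __name__ == "__main__":
    pin_to_ccx0()
    t = threading.Thread(target=worker)
    t.start()
    t.join()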

The big issue is that the Core i7 8700K/Core i5 10600K cost as much as a Ryzen 7 3700X, so that is what they were competing with. At launch the Ryzen 5 3600 was competing with the Core i5 9600K/Core i5 9400. Even if the Intel CPUs have better gaming-related latency, they are hampered by fewer cores/threads, and that will become more of an issue. The new UE5 demo apparently needs a 12-core CPU for optimal performance! Remember what happened to the Core i5 7600K? It thrashed a Ryzen 5 1600, and within three years it's not doing so hot - and that was a console generation built on Jaguar cores. Intel tiering everything, especially SMT, gave AMD room to manoeuvre.
 
This looks a bit like the case with Ryzen, though. Every time people bought a high-end Zen, they probably thought "this CPU is going to take me through the generation!"
That comparison doesn't really make any sense at all. The first three generations of Ryzen were always slower for gaming than Intel. There was never a time when people thought they were faster and were suddenly taken by surprise later. RDNA2 is as good as anything on the market right now. If it becomes obsolete, all graphics cards in existence will be obsolete with it. But then that seems incredibly unlikely for a long, long time to come, unless you're one of those people who absolutely has to have every setting cranked to its max. Ampere and RDNA2 will be fine throughout this entire generation of consoles with some compromise on settings. People are still playing the latest games quite happily on cards from 6-7 years ago. A 980 Ti will still get you 1080p/60 in anything out there at pretty high settings.
 

Agreed. Until Zen 3, Intel was ahead in gaming, but Intel couldn't get over its tiering, which left AMD room to compete elsewhere.

The Ryzen 5 3600 was cheaper than a Core i5 9600K and had SMT. The Ryzen 7 3700X was the same price as the Core i7 9700K, had SMT and came with a fantastic stock cooler. By the time the Core i5 10600K launched, the Ryzen 7 3700X was at the same price. People bought them because they gained SMT/more cores over similarly priced Intel CPUs. Plus AMD enabled overclocking and RAM tweaking on their B350/B450 motherboards, while with Intel you had to go for pricier Z-series motherboards. AMD even had better stock coolers too - the Wraith Prism RGB on the Ryzen 7 3700X was nearly at the level of a Hyper 212. With the Core i7 9700K/Core i5 10600K you got no cooler at all.

Even the Core i5 10400 was initially a damp squib, because the B460 chipset meant RAM was locked to 2666MHz, severely hampering performance. It's only come alive in the last few months thanks to the B560 chipset and price cuts.
 


Thank you for your insights, I stand corrected!
 