
10GB vram enough for the 3080? Discuss..

Associate
Joined
9 May 2007
Posts
1,284
This is all wrapped up in a dynamic LOD system, which they talk about in the Unreal Engine 5 tech preview here https://youtu.be/d1ZnM7CH-v4?t=410 at 6:50, where assets are loaded and unloaded as necessary.

Nanite uses a dynamic LOD system that aggressively adjusts asset quality based on your distance from them. I was learning about this recently; there's a good breakdown of how cluster culling works, with an excellent visual illustration of what is going on, here https://youtu.be/P65cADzsP8Q?t=98 at 1:38.
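To make the idea concrete, here's a toy sketch of distance-driven LOD selection in Python. The function name and thresholds are invented for illustration; this is not how Nanite's cluster hierarchy actually works internally.

```python
# Toy illustration of distance-driven LOD selection (not Nanite's actual
# cluster system; the thresholds here are invented for the example).

def pick_lod(distance_m, switch_distances=(10, 25, 60, 150)):
    """Return an LOD index: 0 = full detail, higher = coarser mesh/textures."""
    for lod, threshold in enumerate(switch_distances):
        if distance_m < threshold:
            return lod
    return len(switch_distances)  # beyond the last threshold: coarsest LOD

# Assets drift between LOD levels as the camera moves, so only the detail
# needed for the current view has to be resident in VRAM.
for d in (5, 30, 200):
    print(f"{d:>3} m away -> LOD {pick_lod(d)}")
```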

We also know that for the PS5's super-fast custom drive they actually overhauled the engine's I/O, because it was old and limited, and that's something they'll look to push on the PC through DirectStorage to make streaming that content faster and more seamless. The faster you can do it, the tighter you can be with the LODs.
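As a rough back-of-the-envelope illustration of why drive speed matters here (the 2GB budget and the throughput figures below are just illustrative, not anything from DirectStorage itself):

```python
# Rough estimate of how long different drives take to refill a streaming
# budget. The 2 GB budget and the throughput figures are illustrative only.

budget_gb = 2.0  # hypothetical amount of asset data to swap in for a scene change
drive_throughput_gb_s = {
    "SATA SSD (~0.5 GB/s)": 0.5,
    "PCIe 3.0 NVMe (~3 GB/s)": 3.0,
    "PCIe 4.0 NVMe (~7 GB/s)": 7.0,
}

for drive, speed in drive_throughput_gb_s.items():
    print(f"{drive}: {budget_gb / speed:.2f} s to stream {budget_gb} GB")
```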

People believe this will reduce the amount of texture data buffered before use, and stop large amounts of textures simply sitting in memory. Even if that is true, developers can simply use whatever new resources they have: if some cards have 16GB or 24GB, they can add higher-resolution textures to the game. How much that improves image quality is another question. The truth is the game has to work on the most common GPUs at the end of the day, which means 10GB is enough in the same way 8GB is enough.

Windows 11 is quite nice, but TPM is a requirement. Many will still be on Windows 10, where there is no DirectStorage support. PCs will also have lots of different NVMe drives with different performance characteristics. Developers will most likely have to make new games run on the most common hardware, which probably means no DirectStorage support except at the high end. Very few people have the latest and greatest hardware.

Only a possible problem if you have a 4K monitor though, and that will still be a niche resolution in a few years' time. 1080p is still king, so it's only an issue for a minority? :)

For RT, most people will have an NVIDIA GeForce RTX 2060 if the Steam hardware survey is to be believed. That's only 6GB of VRAM, and we are worrying about whether 10GB is enough. The RTX 2060 is great at 1080p. https://store.steampowered.com/hwsurvey/videocard/
 
Associate
Joined
1 Oct 2009
Posts
1,033
Location
Norwich, UK
@PrincessFrosty
Your results would depend on what section of the game you were in.
Resolution can also change other internal settings, e.g. low at 1080p may not be the same as low at 4K.
A 4K image at 24-bit depth is about 23 MB. Even with multiple frame buffers you would have a hard time reaching the value you have there. :|
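For what it's worth, the arithmetic behind that 23 MB figure (a quick sketch, nothing game-specific):

```python
# Size of an uncompressed 3840x2160 frame at 24 bits per pixel, plus a few
# copies for double/triple buffering -- still nowhere near multiple GB.

def frame_mib(width, height, bits_per_pixel):
    return width * height * bits_per_pixel / 8 / 2**20  # MiB

single = frame_mib(3840, 2160, 24)
print(f"one 4K 24-bit frame: {single:.1f} MiB")   # ~23.7 MiB
print(f"triple buffered:     {3 * single:.1f} MiB")
```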

I think 1/4 usage is probably on the low end and not the standard for the majority of games.

Not as much as you'd think. Textures, for example, differ from place to place, but today setting the quality often means setting the reserved memory for the texture pool and letting the engine work out at runtime what texture quality fits. This is precisely what Doom Eternal does, and why I mentioned it: you can read the texture pool size out of the console to see what each setting actually gives you, and they're just static values. During testing there's no noticeable benefit above "high", which uses 2048 MB for the texture pool, with the settings maxed at 4K using a bit below 8GB of VRAM in total, so again it's about 1/4.
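Just to make the ratio explicit, using only the figures quoted above (the 2048 MB "high" pool and a bit under 8GB of total VRAM in use):

```python
# Texture pool as a share of total VRAM in use, from the figures above.
texture_pool_mb = 2048            # Doom Eternal "high" texture pool setting
total_vram_in_use_mb = 8 * 1024   # "a bit below 8GB", rounded for simplicity

print(f"texture pool share: {texture_pool_mb / total_vram_in_use_mb:.2f}")  # ~0.25
```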

I can't remember the exact figures for GTAV and CoD when I last checked, but they're pretty similar; these aren't odd ones out.

You can get any VRAM value you like really, because games are designed to live within hardware limits. Most GPUs are going to be 6-8GB, so for now all games will live within those limits, and the new consoles give games around 10GB. Given that, a 10GB card will have no issues running new or future games for some time.

And the odd thing with the consoles is that, at least with WD:L, when they had the option to run with the high-res texture pack the PC has, they decided against it. I'd be very interested to know why, and whether that's indicative of what's to come. I did predict way back at the start of all this that consoles, with their middle-of-the-road APU performance, may simply not be able to make good use of all 10GB for graphics purposes. You have a GPU that's probably about a 2060 or slower, which shipped with 6GB, being compared to 10GB... so to me that seems to fit what I've been saying. But maybe WD:L is an exception and other games will ship with textures at the same resolution as the PC.
 
Permabanned
Joined
21 Feb 2021
Posts
474
For all we know, Ampere may end up like Kepler, which would be the most brutal scenario.

A GTX 780 became obsolete in 3 years: it was released in 2013, and in 2016 the 1050 Ti surpassed it in every way; VRAM, pure power, efficiency... in just 3 years the mighty high-end card became the lowest-end card lmao... compare that to the longevity of beast cards such as the GTX 980 and 1080 and it becomes pretty funny to compare and contrast. Kepler was literally a huge scam that Nvidia pulled on people.
 
Soldato
Joined
30 Mar 2010
Posts
13,058
Location
Under The Stairs!
Nvidia championed 3GB as plenty when the 290s arrived, then released the 6GB 780 because their sponsored Metro title needed more than 3GB; you can depend on Nvidia stabbing you in the ribs when VRAM needs to increase.

The clock's ticking for when the (slower at 4K) 6800 overtakes the 3080.

Historically, the (sometimes slower) larger-VRAM direct competitor (none of this 3060 12GB grunt pish comparison to make you feel better) always overtakes in the end; it's a matter of when, not if:


3GB 7950 > 2GB 680
4GB 290 / 6GB 780 > 780 Ti

It's still going to run all the way up to 10GB, but the slightly slower, higher-specced GPU in this example will overtake in the end, while both cards are still very relevant.
 
Soldato
Joined
19 Sep 2009
Posts
2,747
Location
Riedquat system
For all we know, Ampere may end up like Kepler, which would be the most brutal scenario.

A GTX 780 became obsolete in 3 years: it was released in 2013, and in 2016 the 1050 Ti surpassed it in every way; VRAM, pure power, efficiency... in just 3 years the mighty high-end card became the lowest-end card lmao... compare that to the longevity of beast cards such as the GTX 980 and 1080 and it becomes pretty funny to compare and contrast. Kepler was literally a huge scam that Nvidia pulled on people.

Seems like a good thing to me if there's a cheap xx50 Ti that comes out in a couple of years that beats a 3080 tbh.

Edit: And I would never regard the 980 and 1080 as beast cards - both seemed overpriced to me!
 
Soldato
Joined
26 Oct 2013
Posts
4,023
Location
Scotland
Seems like a good thing to me if there's a cheap xx50 Ti that comes out in a couple of years that beats a 3080 tbh.

Edit: And I would never regard the 980 and 1080 as beast cards - both seemed overpriced to me!

That would depend on why it beats the 3080. If it beats the 3080 in terms of raw performance, that's great, but if it's because the 3080 starts tanking due to a lack of VRAM, then it's not good.
 
Permabanned
Joined
21 Feb 2021
Posts
474
Seems like a good thing to me if there's a cheap xx50 Ti that comes out in a couple of years that beats a 3080 tbh.

Edit: And I would never regard the 980 and 1080 as beast cards - both seemed overpriced to me!

As Sargatanas said, the reason is important.

Kepler aged badly because of its weird, gimmicky architecture... not because of actual performance improvements elsewhere. In terms of pure performance, the 1050 Ti shouldn't even have matched a GTX 770, but it could beat a 780 in some circumstances, which makes the whole situation weird.

Kepler needed special software support to keep its relative performance, and once Nvidia stopped doing that, its performance tanked in newer games relative to all other GPUs.

No one could've known that Kepler relied on per-game software tweaks to provide the best possible performance. Did Nvidia tell their customers about this when they were selling the GPUs? I guess not...

Maxwell and Pascal are not as reliant on software as Kepler was.

A 980 Ti still roughly equals a 1080... and a 1080 still performs near a 2060, even in new games (there are certain oddities and exceptions). But as a general rule, even if Nvidia does not provide any "special" software support to Maxwell and Pascal, these two GPU families can still give you fair, good performance.

https://www.guru3d.com/articles_pag..._graphics_performance_benchmark_review,4.html

Take this game as an example. The 1080 Ti/1080 still perform remarkably well; the 1080 Ti is actually near a 3060, and that's two generations apart. Pascal does not need special "code paths" like Kepler did. This is apparent.

This whole situation makes Maxwell/Pascal a miles better buy than Kepler.

Imagine if Pascal were like Kepler: do you think the 1080 Ti/1080 would be so high on that chart? You could easily knock a good 20-30% off those GPUs' results if not for that.
 
Soldato
Joined
30 Dec 2011
Posts
5,449
Location
Belfast
Seems like a good thing to me if there's a cheap xx50 Ti that comes out in a couple of years that beats a 3080 tbh.

Edit: And I would never regard the 980 and 1080 as beast cards - both seemed overpriced to me!

I think his point was that a near-top-end GPU should not be built with a 3-4 year life cycle at best. The 780 became obsolete due to VRAM issues more than sheer GPU grunt problems.

Here is a review of the 290X vs 780 Ti, but the premise of 3GB vs 4GB would apply to the 290 vs 780 as well. It can be seen that the 4GB 290X is far more viable for new games (as of 2018, when both GPUs were 5 years old) compared to the 3GB 780 Ti.

https://babeltechreviews.com/the-retro-series-the-r9-290x-vs-the-gtx-780-ti/view-all/
 
Soldato
Joined
15 Oct 2019
Posts
11,698
Location
Uk
Pipedream
All of these new GPUs struggle with 4K 60 right now, so whichever card you buy you will need to be turning down settings in a couple of years' time. This is why I will not be entertaining a 4K screen until cards can reliably hit 120fps in new AAA games, which hopefully the shift to MCM will be able to deliver.
 
Permabanned
Joined
21 Feb 2021
Posts
474
I think his point was that a near-top-end GPU should not be built with a 3-4 year life cycle at best. The 780 became obsolete due to VRAM issues more than sheer GPU grunt problems.

Here is a review of the 290X vs 780 Ti, but the premise of 3GB vs 4GB would apply to the 290 vs 780 as well. It can be seen that the 4GB 290X is far more viable for new games (as of 2018, when both GPUs were 5 years old) compared to the 3GB 780 Ti.

https://babeltechreviews.com/the-retro-series-the-r9-290x-vs-the-gtx-780-ti/view-all/

No, not necessarily VRAM.

https://youtu.be/sHIPq02yuEs?t=410

This is clearly not a VRAM issue: both cards top out at 3GB of consumption, yet the 1050 Ti destroys the 780.

In GTA 5, the 780 is 20-25% faster than the 1050 Ti.

Kepler aged badly because:

- no proper DX12 support
- a weird architecture that relied on Nvidia's software prowess to shine (and it did shine; just see old games and their benchmarks. A GTX 780 can easily squeeze in between a 970 and a 960 in older games where Kepler optimisation is present)

Diverging from the console architectures seems to have this effect. This is pure speculation on my end, but be it 16GB, 10GB or 12GB, Ampere may suffer the same fate.

All of a sudden they invented this new arrangement with 2x CUDA cores, and numbers like 20 TFLOPS and 30 TFLOPS started to fly around. I wonder what kind of under-the-hood optimisations Nvidia is doing to keep Ampere performant? We may never know until it happens. All of a sudden you may see a "console-like" architecture hail from Nvidia again (like they did with Maxwell after Kepler).
 
Soldato
Joined
19 Sep 2009
Posts
2,747
Location
Riedquat system
I did not have a Kepler card (went from Fermi to Maxwell), but my Maxwell card was pretty poor at DX12 and was better off using the DX11 option if available. And even Pascal did not see the same gains as AMD's architectures. The 780 looks like it's doing pretty well there given its age :p
 
Permabanned
Joined
31 Aug 2013
Posts
3,364
Location
Scotland
Yep, streaming can help in certain situations, but we still need large amounts of VRAM.

Here is an answer to how VRAM is allocated in a typical game.

Check this out:
http://katmai3.co.uk/1/gameprofile.mp4

Download the video or watch it if your connection is fast enough.
This is me profiling a typical game world level live. About halfway through the video I use the profiler, and after that I take a memory snapshot, all while the game is running live.
The memory snapshot shows what's in VRAM and what takes up the most memory.
As you can see, the 2D texture maps (with mipmaps etc.) take a huge chunk of the VRAM, then meshes; the 179 shaders don't take much VRAM.

For people claiming "I will turn down a few graphical effects to cut VRAM usage": it won't work! You have to turn off hi-res textures, so you go from 4K textures (or a mix of 4K/2K) to no 4K textures = more blurry when viewed on a 4K monitor!

The caches in there using VRAM are not significant, and they are dynamically resized or made very small if a programmer wishes to almost totally fill up VRAM.

NOTES
--------
I had to drop down to a 1080p screen to record this due to file size, so it looks blurry!
The level was created by an industry pro artist, so all the proper LODs, mipmaps and texture compression are in there. This was about 5 years ago, so today's games will have many more large textures.
It's mostly 2K (2048*2048) textures with a few 4K ones; I put in a few 4K textures, used for the female soldier, and left them uncompressed so you can see their size in VRAM.
Some larger meshes were added by me, so triangle counts and VRAM use would be a bit less for a game from 5 years ago, but not for today's games.
The 179 shaders cover a huge amount, such as ambient occlusion, anti-aliasing, particle effects, terrain shaders etc., but ONLY use 127 MB of VRAM, so people claiming they'll turn down a few graphical effects to reduce VRAM usage won't be able to.
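For a sense of why the texture maps dominate that snapshot while the shaders barely register, here's a rough estimate of per-texture VRAM cost. The 4 bytes per texel, the ~+33% for a full mip chain and the 4:1 compression ratio are standard ballpark figures, not taken from this particular level.

```python
# Ballpark per-texture VRAM cost: base level + full mip chain (~+33%),
# optionally block-compressed. Ratios are illustrative; real formats vary.

def texture_mib(size, bytes_per_texel=4, mipmaps=True, compression_ratio=1.0):
    base = size * size * bytes_per_texel / compression_ratio
    total = base * 4 / 3 if mipmaps else base  # full mip chain adds ~1/3
    return total / 2**20

uncompressed_4k = texture_mib(4096)                     # ~85 MiB each
compressed_2k = texture_mib(2048, compression_ratio=4)  # ~5.3 MiB each

print(f"4K uncompressed texture:     {uncompressed_4k:.0f} MiB")
print(f"2K block-compressed texture: {compressed_2k:.1f} MiB")
print(f"300 such 2K textures:        {300 * compressed_2k / 1024:.2f} GiB")
```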

Get a card with the most VRAM you can afford so it lasts a bit longer.

We know what VRAM is used for; well, some of us do.

You seem to be ignoring the fact that most GPUs have 10GB or less and that consoles will target 10GB, yet you claim people should get the card with the most VRAM. I'm guessing by that you mean RDNA2, which is already a lost cause due to poor ray tracing performance. This makes no sense at all.

What size of market would a title requiring 16GB of VRAM reach compared to one that targets 8 and 10GB?
 
Soldato
Joined
24 Sep 2013
Posts
2,890
Location
Exmouth, Devon
All of these new GPUs struggle with 4K 60 right now, so whichever card you buy you will need to be turning down settings in a couple of years' time. This is why I will not be entertaining a 4K screen until cards can reliably hit 120fps in new AAA games, which hopefully the shift to MCM will be able to deliver.


Not completely true. In the reviews, to be able to make an apples-to-apples comparison, they turn EVERYTHING up to ultra. Many graphical settings are there to make 1080p look better, as 90% of people still game at that resolution, so 4K gamers will lower these, freeing up GPU horsepower. If you go 4K and turn every setting up to max or ultra, you are doing it wrong. Plus, most AAA games that come out these days are beta versions at best; then you have games that beat all that, such as PUBG, which started as a community mod. So AAA games from big developers are usually poor on release. Then after all that you have DLSS 2, which works very well in some games, as will FSR. If there was ever a time to move to 4K with a 6800/3080 and above, the time is now.
 
Soldato
Joined
15 Oct 2019
Posts
11,698
Location
Uk
Not completely true. In the reviews, to be able to make an apples-to-apples comparison, they turn EVERYTHING up to ultra. Many graphical settings are there to make 1080p look better, as 90% of people still game at that resolution, so 4K gamers will lower these, freeing up GPU horsepower. If you go 4K and turn every setting up to max or ultra, you are doing it wrong. Plus, most AAA games that come out these days are beta versions at best; then you have games that beat all that, such as PUBG, which started as a community mod. So AAA games from big developers are usually poor on release. Then after all that you have DLSS 2, which works very well in some games, as will FSR. If there was ever a time to move to 4K with a 6800/3080 and above, the time is now.
I shall wait till they hit 120fps at 4K ultra; that way the GPU should last 4 years, rather than buying a GPU and having to start turning down settings right away and many more in 2 years' time. These cards are not cheap, especially if you need to upgrade every 2 years, which will likely be the case for all the high-end cards this generation regardless of VRAM quantities.
 