10GB VRAM enough for the 3080? Discuss...

I'll only be playing at 1440p, so I hope that 10GB will be enough for the next 3-4 years... as much as I'd like to get a 20GB 3080, I think it'll cost significantly more than the 10GB version and I wouldn't be willing to pay more than £700 for a GPU.
 
Also, good call above on the Fury X, I was trying to remember which one it was. Wasn't it the 4GB of HBM memory that was supposed to have magic properties? All the other cards were releasing with 6GB+ and it all turned out to be *********, did it not?

Yeah. I had learned the lesson years earlier too! Not on the Fury X, on the GTX 470. OCUK had a load of cheap cards (Point Of View, IIRC) and I snagged a brand new 470 for £160, which was less than half of what it launched at about two weeks earlier. God bless OCUK.

Anyway, I slapped a Zalman V3000F on it (the green one, Zotac used it on their Amp! cards way back when) and I was super happy. It clocked like a beast and was a great card. And then one day Battlefield 3 came out, maybe about a year into owning the card? At that time I had no idea about the importance of VRAM, or how much I was even using. I simply didn't care. So I am getting right into BF3. Loving it, like. I got to the stage where you are in the shopping mall with the sniper rifle. Terrorists rush in and you have to take them out quickly before they get up the stairs and kill the bloke you are trying to keep alive. However, it used to pause and stutter, and basically I could not even complete the level. I tried turning settings down but it still did the same thing.

So to try and find out what was going on I used that search engine people use and typed in "Battlefield 3 stuttering badly". I was then directed to an Nvidia page where they explained "texture streaming", what it was and what it did. Basically, when your card does not have enough VRAM it loads stuff into your paging file (textures, maps etc.) and then "streams" the textures into the GPU as needed. Kinda like you would stream a movie over the network or whatnot. Only in practice it is absolutely appalling: your FPS drops by about 90%.

The fix? I went and bought one of the cheapo 6970 Lightning rejects OCUK had for £160 and I flew through that level like poop through an old lady with IBS.

Now this has changed slightly over the years. Instead of doing that with your paging file (even an SSD didn't help, BTW, because I was running a Corsair X32 at the time!) it does it with your physical RAM. However, as we all know, physical RAM is waaaaaay slower than VRAM.
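Just to put some rough numbers on why that spill hurts so much, here's a quick back-of-the-envelope sketch (the bandwidth figures are ballpark assumptions for illustration, not measurements):

```python
# Back-of-the-envelope: why spilling textures to system RAM hurts so much.
# Bandwidth figures are rough, illustrative assumptions, not measurements.
VRAM_BW_GBPS = 760.0    # ballpark for GDDR6X on a 3080
PCIE3_X16_GBPS = 16.0   # practical ceiling when fetching from system RAM

spill_gb = 1.0  # suppose 1 GB of texture traffic per second has to cross the bus

vram_time_ms = spill_gb / VRAM_BW_GBPS * 1000
pcie_time_ms = spill_gb / PCIE3_X16_GBPS * 1000

print(f"Serving that traffic from VRAM:     ~{vram_time_ms:.1f} ms")
print(f"Serving the same traffic over PCIe: ~{pcie_time_ms:.1f} ms")
# At 60 fps you only have ~16.7 ms per frame, so even a modest spill over
# the bus eats the whole frame budget -- hence the pausing and stuttering.
```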

Now these are old examples, and you would think it has been fixed by now, right? It hasn't.

Two years ago when the 20 series launched, the 2080 and 2080 Ti were labelled as 4K cards. And they were, in fairness. However, this test was done what, two weeks ago? And already the 2080 does not have enough VRAM.

[Image: LJoSGv3.jpg - benchmark graph comparing the 2080 and 1080 Ti]

Note how up to that point the 2080 was seriously kicking the 1080 Ti's ass. However, once the VRAM cap was reached and it started reaching out to system memory for textures? The performance falls off a cliff.

So why does this happen? It's Nvidia's way of keeping you coming back for more. It's a great way to artificially retire a card long before it should be.

Ampere had to be many things, and "a decent price" was one of them. Mostly because Nvidia don't have the arena to themselves this time around, and are up against basically every other combatant in it. Both of the new consoles are coming *and* Big Navi. So they had to find a way to get a card out there that looks cheap. That's your 10GB 3080.

If you buy a new GPU every round? You might just get away with it. However, given that a dev cycle into production takes around two years? Well, let's go back to that 2080 that has already run short of VRAM.

Don't underestimate it, mate, is what I am saying. If you see your GPU as a long-term investment and don't replace it every time something new launches? Then either wait for the 20GB (which, yeah, is overkill, but too much is always enough!) or see what AMD bring to the table.

Right now Nvidia can't even use the fastest of the new GDDR6X because it would push the TGP too high. However, once they can get Ampere to behave just a little bit better (which should come in time), they will no doubt use that faster VRAM to make even faster cards.
 
Thanks for the explanation. All of this is sound reasoning. Unfortunately, it won't sink in for some, IMO, until Nvidia releases those 16GB/20GB cards.

It'd have to be a paper launch though, as they can't even match demand for the 10GB versions currently, never mind 20GB. They probably think it'll be enough to get people to hold off buying the AMD cards though, and much as I hate to say it, they'd probably be right.
 
But the GDDR6X memory is much faster, so presumably it can swap the data in and out more quickly. Also, once this RTX IO/DirectStorage thing gets implemented, perhaps that will help as well.

I do see how 10GB is probably the minimum amount of VRAM they could have put on the card, but the next-gen consoles will not have a full 16GB available as VRAM either. The OS and the game itself will use some of that just to run the box and the game.

So if we give the console the benefit of the doubt, perhaps it will have 12GB of VRAM available to play with for games.

And do we really think the console's OS + game RAM only uses up 4GB...?

Games these days on the PC use around 8GB of RAM, Windows is around 4GB, and then VRAM is around 5-8GB generally.
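To put that split into a quick sketch (the figures below are just the rough assumptions being thrown around in this thread, not official specs for either console):

```python
# Rough next-gen console memory budget using the ballpark figures from this
# thread (assumptions for illustration, not official specs).
total_unified_gb = 16.0   # shared GDDR6 pool
os_reserved_gb   = 2.5    # assumed OS/system reservation
game_cpu_side_gb = 5.5    # assumed "normal RAM" usage by the game itself

vram_equivalent_gb = total_unified_gb - os_reserved_gb - game_cpu_side_gb
print(f"Left over for VRAM-like duties: ~{vram_equivalent_gb:.1f} GB")
# ~8 GB, which is roughly where the "absolute tops 10GB, more likely closer
# to 8GB" estimate later in the thread lands as well.
```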
 
Excellent. Now all those whinging about 10GB not being enough can buy that :D

I can see it now: in 12-18 months there will be one or two games worth buying that will not work on maximum texture settings with 10GB, and you will get them saying "ahaaa! See, told you 10GB is not enough", to which I will respond with "that's OK mate, I will be upgrading to a 4070/80 soon; in the meantime I will use one texture setting lower in those games" :p:D
 
If anyone is interested, my testing of allocated vs actual VRAM used with my RTX 3080 in a new game - Deliver Us the Moon:
4K, no RTX or DLSS 2.0: 3.6GB allocated vs 2.7GB used
4K, RTX but no DLSS 2.0: 5.0GB vs 4.1GB
4K, RTX + DLSS 2.0 Performance: 3.4GB vs 2.5GB
4K, RTX + DLSS 2.0 Balanced: 3.6GB vs 2.7GB
4K, RTX + DLSS 2.0 Quality: 3.8GB vs 2.9GB
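Quick aside on those numbers - the gap between allocated and used is pretty consistent (this is just the data above re-crunched):

```python
# Allocated vs actually-used VRAM from the Deliver Us the Moon figures above.
measurements = {
    "4K, no RTX/DLSS":         (3.6, 2.7),
    "4K, RTX, no DLSS":        (5.0, 4.1),
    "4K, RTX + DLSS Perf":     (3.4, 2.5),
    "4K, RTX + DLSS Balanced": (3.6, 2.7),
    "4K, RTX + DLSS Quality":  (3.8, 2.9),
}
for label, (allocated, used) in measurements.items():
    overhead = (allocated - used) / used * 100
    print(f"{label}: {allocated:.1f} GB allocated vs {used:.1f} GB used "
          f"(~{overhead:.0f}% over-allocation)")
# Roughly 0.9 GB more is reserved than is actually in use in every case,
# i.e. about 20-35% headroom on top of real usage.
```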
 
Now test Doom Eternal with Ultra Nightmare textures.
 
I haven't yet played all the maps in FS2020, but I did have an extremely interesting map sent to me during testing. Up until the point I was sent this map, 9.5GB was roughly the most VRAM used on my 1080 Ti. I was sent a map that was DX12 enabled - I know it was because all 32 threads of my 3950X were used, as was just over 26GB of system RAM. The interesting thing though was the amount of VRAM used... all 11GB. As yet FS2020 isn't DX12 enabled, but it will be at some point, that's for sure. I'm speculating that a DX12-enabled FS2020 will use as much VRAM as a card has. If my 1080 Ti's VRAM was maxed out, I'm guessing that a card with 20GB of VRAM could certainly be using most of it.
Out of curiosity, what was the performance like?
 
Anyone at this point who still thinks that 10GB of VRAM is going to be an issue on the 3080 cards needs to go Google a bit more and do their homework. It's not a problem, and you're an idiot if you believe it is, as you lack a basic understanding of how VRAM, especially GDDR6X, works.
 
The extra memory allocation comes from developers writing the engine so that it reserves an estimated block of memory in vRAM which is larger than what it knows it needs, and then the game engine itself internally manages what is put into that vRAM, so it's abstracted away from the hardware. The engine just sees a big list of memory addresses it can use that don't conflict with other processes using the GPU, and from the GPU's standpoint, once it's assigned to an app it's "in use" and unavailable to any other process unless later released.
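A toy sketch of that reserve-then-sub-allocate idea (purely illustrative - not how any particular engine actually does it):

```python
# Toy illustration of reserve-then-sub-allocate: the engine grabs one big
# block up front (what monitoring tools report as "allocated") and hands out
# pieces of it internally (closer to what is actually "used").
class VramPool:
    def __init__(self, reserve_mb):
        self.reserve_mb = reserve_mb   # driver/tools see all of this as in use
        self.used_mb = 0               # what the engine has actually filled

    def upload(self, asset, size_mb):
        if self.used_mb + size_mb > self.reserve_mb:
            raise MemoryError("pool exhausted - engine must evict or stream")
        self.used_mb += size_mb
        print(f"{asset}: {self.used_mb}/{self.reserve_mb} MB of the pool filled")

pool = VramPool(reserve_mb=9500)       # engine reserves ~9.5 GB up front
pool.upload("terrain textures", 4200)  # hypothetical asset sizes
pool.upload("aircraft model", 1800)
# Tools would report ~9.5 GB "allocated" the whole time, even though only
# ~6 GB of it is actually holding assets so far.
```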
As I understand it ...

With a high-level API (OpenGL & DirectX 11) the developer has limited control over how much GPU memory will actually be used. You might ask for the memory for a texture of a particular size, but the drivers might just allocate extra for their behind-the-scenes work, and that amount could change with the user's control panel settings too. Overall, more VRAM would be allocated than actually used anyway. The developer might still be able to tally the memory they explicitly requested, but as users we can basically only see the total allocated by application + OS + drivers.

There might also be caching involved at the driver level, as it very much seems like a GPU with more memory will use more memory even at the same settings; it still doesn't mean that all of that VRAM is needed.

With a low-level API (e.g. DirectX 12 & Vulkan) the developer has to be explicit about everything, as you do your own memory management. You wouldn't just know how much memory your textures need, but also all your buffers etc., and you can present these figures more directly to the user.
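As a rough sketch of what that explicit tallying can look like from the developer's side (a generic illustration, not any real engine or graphics API):

```python
# With explicit memory management, every resource is created with a known
# size, so the app can report its own VRAM budget directly.
# Generic illustration; the resource names and sizes are made up.
from dataclasses import dataclass

@dataclass
class GpuResource:
    name: str
    size_mb: float

resources = [
    GpuResource("albedo textures", 3100),
    GpuResource("normal maps",     1400),
    GpuResource("vertex buffers",   650),
    GpuResource("render targets",   900),
]

total_mb = sum(r.size_mb for r in resources)
print(f"App-tracked VRAM in use: {total_mb / 1024:.1f} GB")
# External tools will still show a bigger number, because driver and OS
# overhead sits on top of what the app itself tracks.
```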

@ Both of you
I understand that they allocate memory in chunks; however, in the FS2020 example it is an extra 32%. That seems way too excessive to not be intentional.
Are they not able to dynamically change the memory allocation or do they have to allocate it all up front? I am assuming that it is dynamic (up to a certain point).

With this extra memory that is allocated, is there a performance hit if it cannot be allocated? For example with FS2020: let's say you have two identical GPUs, but one with 10GB of VRAM (enough for the 9.5GB that is actually used) and the other with 13GB of VRAM (to hold the extra 3GB of bloat). Would these two GPUs have identical performance?


But the fact is no one other than the engine engineers really knows how this works deep down - lots of trade secrets, I'm sure. Even the game devs don't really know; they have abstracted tools that allow them to zone areas and put in loading/streaming points, but they almost certainly have no idea what the engine is actually doing in vRAM. The point is that vRAM usage and game assets kinda just became decoupled, and more and more of that vRAM is now dedicated purely to what the GPU needs to render the next frame. The better that prediction gets, the more vRAM is spent on that rather than on a large dumb cache. id Tech 5+ made use of this last gen, and the next-gen consoles (and Microsoft's DirectStorage) will continue to capitalise on this into the next generation (in Nvidia's case they've integrated this as RTX IO).

Do the engine engineers and the game devs not talk to each other? It seems to me that it would be in everyone's best interest to ensure optimum performance of games (ignoring the time and budget constraints). This sounds like an inefficient way to collaborate. I really hope that this isn't standard practice in the industry.

On the FS2020 point, the benchmarks were pulled from a generic list of benchmarks, so you'd assume they're representative, but I don't know that for sure. What I do know is that if it's not, and you fly over some area that hypothetically needs 14GB of vRAM, then the vRAM won't crap out first - the GPU will; it'll choke trying to provide you with a fast enough playable frame rate.

How do you know this? Based on what?
Define playable - 60fps or 30fps?
While it will take more processing power to handle the extra data, how do you know that the reduction in frame rate from the increased demand on the GPU is greater than the reduction in frame rate from running out of VRAM?
 
From a quick Google search, the difference is in texture pop-in and changes to how the engine handles LOD.
Do you mean those 4 year old posts from reddit? :p

Talking about LOD, I came across this video. Skip to 9 mins.
Yeah, that is interesting. I do not play that kind of game anyway, and I do wonder if in the future they will start making use of 20GB+ of VRAM to load in higher LODs and not bother to transition.

The truth is, however, in most cases they do not bother putting in any work to cater for the 1% of people who would have such cards.

I still maintain that 10GB will be fine for the most part, even at 4K, for the next couple of years. From what I can see it is enough for every single game out today, and between now and the release of next gen it may not be enough in maybe a handful of games to run the highest texture setting. To me that is not a big deal personally, as generally speaking there is negligible difference between the highest texture setting and the one below it. I ain't paying hundreds of extra quid for that on a card I do not plan on keeping for a long time :p:D

Put it this way: if there was a huge difference between the two highest texture settings at 4K, you can bet your bottom dollar someone would be posting about it, trying to rub it in my face :D
 
Do you mean those 4 year old posts from reddit? :p

:D Yep.
I doubt that the engine has changed significantly between the two games.


Yeah, that is interesting. I do not play that kind of game anyway, and I do wonder if in the future they will start making use of 20GB+ of VRAM to load in higher LODs and not bother to transition.

The truth is, however, in most cases they do not bother putting in any work to cater for the 1% of people who would have such cards.

I suspect they will either continue as-is and simply increase the resolution of textures and geometry detail, or they will change the LOD distance to reduce pop-in. Or a mix of both. We will find out soon enough.
It would be nice to hide behind cover in game and have it not look like a low-res mess.
 
But the GDDR6X memory is much faster, so presumably it can swap the data in and out more quickly. Also, once this RTX IO/DirectStorage thing gets implemented, perhaps that will help as well.

I do see how 10GB is probably the minimum amount of VRAM they could have put on the card, but the next-gen consoles will not have a full 16GB available as VRAM either. The OS and the game itself will use some of that just to run the box and the game.

So if we give the console the benefit of the doubt, perhaps it will have 12GB of VRAM available to play with for games.

And do we really think the console's OS + game RAM only uses up 4GB...?

Games these days on the PC use around 8GB of RAM, Windows is around 4GB, and then VRAM is around 5-8GB generally.

Yeah, it's going to be 10GB available at the absolute top for vRAM-equivalent purposes, and honestly more likely closer to 8GB. And that actually makes sense, because the GPUs in these consoles are kinda 2070 territory in terms of TFLOPS performance, and that's an 8GB card. These new consoles aren't going to set new standards for memory usage; they're already behind what the PC can do by a generation and they're not even out yet. The consoles suffer the same bottlenecks the PC does - they have GPUs and the vRAM equivalent - and you can't just load up like 12GB worth of assets into vRAM, throw that at the kind of slow GPU they have, and expect good frame rates. They'll GPU-bottleneck super fast.
 
Thing is though, even if the consoles only have 16GB of combined memory, everyone is forgetting how bad PC ports are. A brute-force approach is normally required to meet PCMR expectations.
 