10GB vram enough for the 3080? Discuss..

BTW, as I said before, there are no tools to monitor ACTUAL VRAM usage, and that is exactly why these arguments persist. If there were, and you could show it, the conspiracy theorists would calm down. It's *exactly* the same old bollocks people used to spout about Crossfire: "I don't have any problems" and gems such as "never stutters for me!". Then Ryan Shrout started playing with FCAT and showed the frame times and the 1% lows (which at the time no one even knew existed, let alone could measure), plus the dropped frames, the incomplete frames and the runt frames (i.e. fake frames that were never completed, beefing up the FPS count but doing nothing except making it a stuttery mess). And, as sure as eggs is eggs, I came here to see if anyone else was noticing the awful stutter and input lag and got told I was stupid and crazy. Yeah, right.

Once AMD were outed, yeah, they did fix it, but all of a sudden you got far less FPS than SLI. And the thing is, AMD knew all of this. To say they didn't is just insane. Yet people were literally doing their bidding for them, going around saying Crossfire was awesome and had no issues :rolleyes:

I really don't know how likely it is that we will ever be able to monitor VRAM use in real time. I won't say "never", but I would bet Nvidia will do their darndest to stop it. All I know is what happens when you don't have enough. Fool me once, etc. Well, it has happened to me three times. It wouldn't surprise me at all if AMD are only putting so much on their GPUs now so they don't end up with egg on their face again, whilst Nvidia simply don't care.
 
They do that now. Unfortunately DDR is nowhere near as fast as GDDR and it utterly tanks your performance.

"Fast" is not quite the full picture... Using system RAM would tank your GPU performance, correct, because DDR is optimised for low-latency returning of smaller chunks of data. But using GDDR for your system RAM could also tank your CPU performance, because GDDR is optimised to return large pieces of data very fast but at the cost of higher latency. DDR supports CPUs which want to do lots of different things all over the place, vs GDDR for GPUs that (generally) want to do operations in parallel against large data sets.

There's a reasonable write-up of some of it here - https://chipsandcheese.com/2021/04/16/measuring-gpu-memory-latency/

tl;dr - "The i7-4770 with DDR3-1600 CL9 can do a round trip to memory in 63ns, while a 6900 XT with GDDR6 takes 226 ns to do the same"

Newer DDR4 and DDR5 systems should reduce the CAS latency further (I'm seeing 'high-end' DDR5 with 10ns advertised, though that's just CAS, not the whole round trip). Where GDDR shines is that once the data does start flowing, you get a lot more of it, a lot faster. The article also mentions: "As an interesting thought experiment, hypothetical Haswell with GDDR6 memory controllers placed as close to L3 as possible may get somewhere around 133 ns." So on a more modern system than Haswell, using GDDR6X, maybe you can cut that further? It's hard to find numbers. That would make it close to older system RAM for latency, but much wider for data transfer. Which is how you get machines like the PS5 and Xbox Series S/X with unified GDDR memory: their CPUs probably suffer a little on RAM latency compared to a gaming PC, but it lets them share the RAM and still feed the GPU the way it wants (which is probably your priority in a console).
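To put rough numbers on the trade-off, here's a back-of-the-envelope sketch. The latencies are the ones quoted in the article; the bandwidth figures are illustrative assumptions for a dual-channel DDR3-1600 system and a 6900 XT-class GDDR6 card, not measured values:

```python
# Rough latency-vs-bandwidth sketch. Latencies are from the chipsandcheese
# article; bandwidths are illustrative, order-of-magnitude assumptions.

def fetch_time_ns(size_bytes: int, latency_ns: float, bandwidth_gb_s: float) -> float:
    """Approximate fetch time: fixed round-trip latency plus transfer time."""
    transfer_ns = size_bytes / (bandwidth_gb_s * 1e9) * 1e9  # bytes / (bytes per second), in ns
    return latency_ns + transfer_ns

ddr  = dict(latency_ns=63.0,  bandwidth_gb_s=25.0)   # assumed dual-channel DDR3-1600-ish
gddr = dict(latency_ns=226.0, bandwidth_gb_s=512.0)  # assumed 6900 XT-class GDDR6

for size in (64, 1 << 20):  # a 64 B cache line vs a 1 MiB block of texture data
    print(f"{size:>8} B  DDR: {fetch_time_ns(size, **ddr):8.0f} ns   "
          f"GDDR: {fetch_time_ns(size, **gddr):8.0f} ns")
```

Small, scattered reads come back faster from DDR; big streaming reads come back far faster from GDDR, which is the whole point of the split.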
 

I wrote a crude tool to visualise it way back before the mainstream tech people got in on it:

(EDIT: That is actually a very old version of it - forgot I did a load more work on it since then with fancy graphs)

eIZHbx7.png

The two cubes would rotate around a central point driven by the frametime data, one matching the reported FPS and the other the frames actually delivered, which made the difference in smoothness and the effect of runt frames much easier to see, even with small variances.
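For anyone curious, this is roughly the kind of number-crunching involved - a minimal sketch, assuming nothing more than a plain list of frame times in milliseconds; the 1% low here is the average of the slowest 1% of frames (one common definition), and the runt-frame threshold is purely illustrative:

```python
# Minimal frametime analysis sketch. Input is assumed to be a plain list of
# frame times in milliseconds; the runt threshold is illustrative only.

def analyse(frame_times_ms: list[float], runt_fraction: float = 0.25) -> dict:
    n = len(frame_times_ms)
    avg_ms = sum(frame_times_ms) / n
    avg_fps = 1000.0 / avg_ms

    # 1% low: the average FPS of the slowest 1% of frames.
    worst = sorted(frame_times_ms, reverse=True)[:max(1, n // 100)]
    one_pct_low_fps = 1000.0 / (sum(worst) / len(worst))

    # "Runt" frames: frames on screen for far less time than average,
    # inflating the FPS counter without adding any perceived smoothness.
    runts = sum(1 for t in frame_times_ms if t < avg_ms * runt_fraction)

    return {"avg_fps": round(avg_fps, 1),
            "one_pct_low_fps": round(one_pct_low_fps, 1),
            "runt_frames": runts}

# Example: mostly ~16.7 ms frames with a couple of runts and a few spikes.
sample = [16.7] * 95 + [2.0, 2.0, 50.0, 60.0, 70.0]
print(analyse(sample))  # healthy-looking average FPS, ugly 1% low
```

The average FPS looks fine, but the 1% low and the runt count tell you how it actually feels.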

And I was posting about the microstutter issues with both SLI and Crossfire, usually just to get rubbished, especially by the pro-ATI/AMD crowd...

Nvidia's performance tools do actually give you a pretty good idea of what is going on in VRAM and what state a lot of the data is in - but they are non-trivial to set up, let alone to interpret, and ideally need a debug environment to be useful (which a lot of commercial games will shut you right out of).
 

Just a small point, but I believe Techreport (RIP) were the first tech site to start digging into frame times and 1% lows, back in 2011.

Scott Wasson article: https://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking/
 
Pretty sure every reviewer was raving about the 3080 at launch, as it crushed the 2080 Ti for almost half the money while being around 70-80% faster than the 2080. It's probably the best high-end card Nvidia have released in the last five years in terms of price to performance.

The 12GB model was released so Nvidia could charge AIBs a lot more cash and take a bigger share of the mining windfall.

As I said earlier, how many would have preferred paying for a 3080 built on a GA104 die with a 256-bit bus and 16GB of VRAM over a GA102 with a 320-bit bus and 10GB? Well, that's the route Nvidia have now gone down with the 4080, and look how **** that is for the money.
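As a rough illustration of what that bus trade-off means for bandwidth (the 19 Gbps data rate below is an assumed GDDR6X-class figure, not an exact SKU spec):

```python
# Peak memory bandwidth = (bus width in bytes) x per-pin data rate.
# The 19 Gbps data rate is an assumed GDDR6X-class figure.

def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(320, 19.0))  # ~760 GB/s for a 3080-style 320-bit bus
print(bandwidth_gb_s(256, 19.0))  # ~608 GB/s for a hypothetical 256-bit variant
```

More VRAM on the narrower bus would have meant giving up a fair chunk of bandwidth (or paying for faster memory to claw it back).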

Nvidia did people zero favours with the 30 series apart from the mere fact they went with Samsung. That was the only reason they were cheaper than the 20 series; it had absolutely nothing to do with them being nice, the profit margin was identical.

Just one problem: being the elitists they are, they were not happy that AMD not only got close but in pure raster were often level and, oh no, at points faster. With more VRAM.

So they went back to TSMC for the clear win they so desperately wanted, yet look at the prices. They make the 20 series look cheap lol.

As I say, their profit margin has been the same for years. I could have sworn it was 60% on everything sold, but I've heard mutterings recently of it now being 70%.

All the time blaming it on everything else like exchange rates, inflation, global hunger etc etc.

OK so that last one was me being sarcastic but I'm sure you catch my drift.
 

Humans are strange creatures dude.

It's like some sort of herd mentality: no matter how knowledgeable and clever someone may be, they just refuse to see the truth, or accept it, until it becomes the herd view, if that makes sense.

Like, imagine if back then, when you were putting out that sort of information, you'd had a crystal ball showing that ten years later the whole thing would be flipped on its head and people would be demanding exactly that info and using it to actually judge performance?

Crazy dude. Just goes to show that absolutely nothing ever changes either haha.
 
"Fast" is not quite the full picture... Using system RAM would tank your GPU performance, correct, because DDR is optimised for low-latency returning of smaller chunks of data. But using GDDR for your system RAM could also tank your CPU performance, because GDDR is optimised to return large pieces of data very fast but at the cost of higher latency. DDR supports CPUs which want to do lots of different things all over the place, vs GDDR for GPUs that (generally) want to do operations in parallel against large data sets.

There's a reasonable write-up of some of it here - https://chipsandcheese.com/2021/04/16/measuring-gpu-memory-latency/

tl;dr - "The i7-4770 with DDR3-1600 CL9 can do a round trip to memory in 63ns, while a 6900 XT with GDDR6 takes 226 ns to do the same"

Newer DDR4 and 5 systems should reduce the CAS latency further (I'm seeing 'high-end' DDR5 with 10ns advertised, though that's just CAS, not the whole round trip). Where GDDR shines is that when the data does start flowing, you get a lot more of it a lot faster. The article also mentions "As an interesting thought experiment, hypothetical Haswell with GDDR6 memory controllers placed as close to L3 as possible may get somewhere around 133 ns." So on a more modern system than Haswell, using GDDR6X, maybe you can cut that more? It's hard to find numbers. This would make it close to older system RAM for latency, but much wider for data transfer. So then you can get a machine like the PS5 and Xbox Series S/X which have unified GDDR memory. Their CPUs are probably suffering a little on RAM latency compared to a gaming PC, but it lets them share the RAM and still feed the GPU in the way it wants (which is probably your priority in a console).

Thanks for that dude. Bit pinched for time at the moment. I thought I recalled it being a lot more in-depth than that (my brief explanation, that is), and that GDDR was, like, "cruder" than normal system RAM, yet as you point out each is pretty much useless at the other's job.
 
They effectively priced 2GB of VRAM at £350, crazy stuff.

I missed this one. Sorry dude, lots going on.

Yeah, that's insane, but it gives you a good solid idea of what they want in £ for something that will last the distance, doesn't it? Compared to something they know full well won't.

As it happens, like I said, my pal paid £100 more, and I specifically told him to because he wants to upgrade to a 4K monitor at some point.
 
Surely the right response would have been to tell him to save the money now, and get what he needs when he actually upgrades to 4K.

This was several months ago. Another place had the Strix 12GB on special for £720, so he got that.

He didn't have much of a choice. His Vega 56 crapped the bed, and he needed a GPU. TBH even that was too much, but after the drought it didn't seem that bad. Not like he can get much now for that price that would beat it either, so I would say he made out OK.

He games dude. Every single night. So sitting around waiting is not an option.
 
The 12GB 3080 release had nothing to do with going the distance or thinking VRAM was an issue and everything to do with getting more of the profits that were otherwise going to scalpers/AIBs :p
 
PCGH have their benchmarks up now, and with the latest patch, so more relevant than all the other reviews so far :)

DX12, max. details, MAX raytracing, FoV +20, quality upsampling – rBAR/SAM on

zrDgZAa.png

Ooooooooooof, "is 24GB enough?"

Guess it's still fingers in ears, and hub are the only ones to listen to though, eh.... :cry:



I'm still playing with RT off due to all the graphical artifacts that have been noted; hopefully the devs can fix/improve it, but it seems like they just stuck it on at the last minute.
 
The 7900 XT/XTX is in the same boat as the consoles -> plenty of VRAM, not enough horsepower.
 
So the 10GB 3080 is 43% faster than a 16GB 6800 XT, and even 6% faster than AMD's brand new £900 GPU, but let's all moan about the 3080.
 
What game is that?
 
But but but hub...... :p

TBF I think they did mention that texture stream-in is more obvious with lesser GPUs or something (when using max settings with RT and no upscaling), but that's a bit pointless anyway given that the manual engine.ini tweaks fix/improve it.

Point is though, it's only hub so far who have different performance figures, even though they test in the same place as the other sites, and the patch also fixed a lot of the issues, so there's that too.

What game is that?
Hogwarts
 
That patch and the ini tweaks completely fixed Hogwarts for me on a 10GB 3080. 1440p, RT on, ultra textures, DLSS Quality. No stuttering or slowdowns anywhere any more; I lock it at 60 fps, the frametime graph is a flat line, and the GPU barely gets over 1000 MHz, sipping about 150 W max.
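(For reference, a locked 60 fps is a frame budget of 1000 ms / 60 ≈ 16.7 ms, so a flat frametime line just means every frame is landing on that ~16.7 ms mark.)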

I'm not in denial about VRAM issues going forward, but I do think clickbait headlines like the hub Hogwarts review need to be scrutinised and revisited once games are patched/fixed (or maybe nerfed, in the case of the W3 4.0.1 patch!)
 
I appear to have killed the inner knitting circle's fun with that last benchmark.....

zXlGQBQ.gif

I guess 24GB isn't enough........

WlCUzlc.gif

See you all back for the next game!

:p ;) :D


But but but hub..... :p


Agree though, the game has been great since the last patch. I removed the .ini tweaks as I didn't notice any improvement with them after the last patch. However, I have done these tweaks, which helped (these help with some games I find, good old Windows harming performance more often than not!):


In terms of my settings and experience so far since the patch and above:

Textures set to ultra and a mix of high/ultra for the rest (since, as per comparisons/reviews, there's either no difference or high looks better than ultra), with RT off and DLSS Balanced, I'm getting a locked 4K/60 fps, and at 3440x1440 with DLSS Quality, 100+ fps. There is some traversal stutter, especially in Hogsmeade or when entering new rooms in Hogwarts, but as noted by review sites this happens even on the best of the best hardware, so it's not a case of "VRAM". I'm not surprised there are issues here though, given how detailed and how big the game world is; this is where having DirectStorage would solve/improve things.

What I will say though is that I have found the experience better/smoother on my 4K display, where fps is capped to 60. Bit weird, since fps is over 100 on the AW monitor but it doesn't feel/look as smooth, and there are no issues showing in frame latency either, so maybe it's down to animations not playing nicely at anything above 60 fps?

I'm still keeping RT off, as the artifacts with it on break the immersion and can look worse than raster in some cases, especially where distant views are concerned; the devs are working on improving it though, so fingers crossed.

EDIT:

Also, just on this point:

hub Hogwarts review need to be scrutinised and revisited once games are patched/fixed

Very true. Quite funny how they stated that TPU's results were "wrong", yet jansns, Computerbase, PCGH etc. all show similar figures that differ from hub's. Yup, everyone is wrong except them though :cry:
 