NVIDIA 4000 Series

Agreed, but at the end of the day it's down to how well the developer optimises the engine for the various hardware out there.

I had no issue with GOW using 20GB+; it was just strange to see :D And performance never faltered.

Although I don't agree with the "3080 10GB is enough" argument/thread. I did, on one occasion in Cyberpunk, have the fps tank when the engine hit the 10GB ceiling in one location I found (back when I owned a 3080).

My view is that the engine should never have got to that limit and caused such a massive performance loss. I feel there was no reason to have all 10GB populated for what was happening on screen. IMO ;)
And that's exactly what it comes down to. Thankfully most games are very good at managing VRAM when it comes to textures swapping in and out.

There is also the possibility that Nvidia's drivers handle VRAM usage more efficiently than AMD's; IIRC, TPU commented on this, as they noticed Nvidia used less VRAM across several games. I haven't looked into it much myself though.
 
There is a point where the heat dumped into your room isn't trivial.

Some people complain about power prices and GPU power draw, but it's actually not much. For example, say a 4090 draws 500W compared to the 3090's 350W, your electricity rate is $0.30 per kWh, and you have a ton of free time, so you average 5 hours a day playing games with your GPU always running at 100% load during those 5 hours.

Over a year, that's an extra $82 on the electricity bill. Now if someone wants to say that's too much, they must be having a laugh: how can someone have the money to afford a $1.5k GPU but not be able to pay an extra $7 per month on their electricity bill? It doesn't make any sense.
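
For anyone who wants to sanity-check those figures, here is a quick back-of-the-envelope version of the same sum, using the assumptions above (150W of extra draw, 5 hours a day at full load, $0.30 per kWh):

```python
# Back-of-the-envelope check of the figures above.
extra_watts = 500 - 350      # assumed 4090 vs 3090 delta at full load
hours_per_day = 5            # assumed daily gaming time at 100% load
rate_per_kwh = 0.30          # $ per kWh

extra_kwh_per_year = extra_watts / 1000 * hours_per_day * 365
cost_per_year = extra_kwh_per_year * rate_per_kwh

print(f"Extra energy: {extra_kwh_per_year:.0f} kWh/year")
print(f"Extra cost:   ${cost_per_year:.0f}/year (~${cost_per_year / 12:.2f}/month)")
# -> roughly 274 kWh/year, about $82/year, or roughly $7/month
```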
 
My 3080Ti requires a window-mount AC unit to offset the heat it (and the rest of my PC) dumps into my room. I tried turning down the temp on the house's central AC but, when I turned it down enough to be comfortable while in my sim rig, my wife was getting frozen out of any other space in the house.

My 3080Ti is a 400W space heater that coincidentally also pushes pixels to my Reverb pretty well.

Another 200W and I'll probably turn the room's closet into an enclosed, climate-controlled space, like a miniature server room, and exhaust all the heat directly into my back yard. (Will need to go through concrete for that, though.)

- Put my PC in the Nvidia closet and run the cables into the room through a smallish passthrough. This is getting ridiculous.
 
Indeed. I still think AMD are being underestimated this round. If they are lower power and offer better rasterization I'll go for it. Have to see what uplift I can get over a 3060Ti and what it will cost me.
AMD have the better rasterization vs VRAM balance; Nvidia with Ampere seemed to be focusing on the high fps/refresh crowd, who tend to tone down visual quality and just want performance. Curious whether Nvidia will address this with the 4000 series.

I'm also now firmly of the opinion that I don't care about dedicated RT hardware. I would consider a swap to AMD if they had an MSRP FE equivalent sold in the UK, good power consumption, and drivers up to par (which they might be now).
 
And that's exactly what it comes down to. Thankfully most games are very good at managing VRAM when it comes to textures swapping in and out.

There is also the possibility that Nvidia's drivers handle VRAM usage more efficiently than AMD's; IIRC, TPU commented on this, as they noticed Nvidia used less VRAM across several games. I haven't looked into it much myself though.
Sadly I think TPU's analysis of VRAM usage is pretty bad; the only reviewer who handles this stuff properly, in my opinion, is Digital Foundry. E.g. DF pointed out that FF7 Remake has a widespread stuttering problem, which has been proven to be caused by textures and shaders swapping in and out of VRAM. DF noted that every time the game stutters, commonly used performance metric tools don't pick it up, so someone analysing GPU-Z or Afterburner logs would see no performance issues, whilst DF's own external tools (which they had to develop because they cover consoles a lot, and those don't have their own internal tools) did pick up the stutters. They had the balls to tweet that the review industry got FF7R wrong, but sadly didn't have the balls to put it in their video.

Another issue the review industry has with VRAM analysis is that they don't analyse texture streaming and texture quality. For example, an 8GB card might get the same framerate on ultra textures as a 12GB card, but if you examine the textures they might be lower quality, because UE4 games don't have static texture quality; they have "variable" quality, which dynamically adjusts the texture LOD based on VRAM and other resource availability (the texture quality setting in UE4 games sets the maximum for that variable quality, it isn't guaranteed). I remember tinkering in FF7R on the UE4 console trying to turn this behaviour off, but it seems difficult to do. Then of course you also have texture pop-in issues, which again increase when VRAM is under stress. (Note: in the 10GB 3080 thread someone kindly made screenshots for me from a game where he said he couldn't tell the difference between texture quality levels, and I could immediately see the difference, so different gamers have different sensitivity to it.) For me texture quality is one of the most important quality metrics in a game; it's very jarring to see a high-res character standing next to a low-res blurry wall.

I compared FF7R on the PS5 to the PC version set to high-detail textures (the max setting). In chapter 8, as an example, the PC was loading the textures the PS4 version had, very low-res PS3-quality textures, while the PS5 has a much higher LOD; in other areas the PC matches the PS5. Chapter 8 also has a lot of pop-in issues on the PC, which suggests it's demanding VRAM-wise. In areas like Sector 7 (chapter 3), especially in the Yuffie DLC where they expanded the explorable area, various stutters can happen due to the sheer number of objects being streamed in and out.

I found someone with a 3090 who was prepared to help me with testing, which we did live on Discord, and we discovered together that if you manually increase the VRAM budget on the UE4 console in FF7R, most of the issues disappear: chapter 8 textures are like the PS5's, stuttering doesn't vanish but it's 90% better, and pop-ins go away. However, by default the game doesn't increase the VRAM budget by much on a 3090 by itself, so if you were just a dumb tester like the TPU guys seem to be, just swapping hardware, you're not really testing 24GB of VRAM, because the game isn't being allowed to use it; it has to be manually configured in the UE engine. On my 3080 it sets the VRAM budget to 5500MB; on the 3090 it was set to 6500MB. When we bumped it manually to 20000MB it was way better. I should also note that if I left mine at 5500MB the game crashed with a VRAM OOM; I have to reduce it to 5000MB or lower (or drop shadows or textures to low quality) because the game also uses VRAM for RAM-type data, something which is going to get more common in newer console ports due to the shared memory architecture. FF7R has extremely low RAM usage because most RAM-type data is loaded into VRAM instead; e.g. it needs over 3GB of VRAM just to get to the title menu. The new Far Cry game was originally coded the same way, and the patch that mitigated its VRAM issues moved that data from VRAM to RAM to free up VRAM for textures.
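
For reference, below is a minimal sketch of how that kind of budget override can be scripted. It assumes the "VRAM budget" being described maps to UE4's stock texture streaming pool cvar (r.Streaming.PoolSize, in MB) and that the game reads it from the usual [SystemSettings] section of Engine.ini; the exact cvar used in the testing above isn't stated, and the ini path here is hypothetical.

```python
# Minimal sketch: write a texture streaming pool override into a UE4 Engine.ini.
# Assumptions: the game honours the stock [SystemSettings] section and
# r.Streaming.PoolSize (in MB) is the budget discussed above; the path is
# hypothetical and differs per game/store version.
from pathlib import Path

ENGINE_INI = Path("Engine.ini")   # hypothetical location; varies per game
POOL_SIZE_MB = 20000              # the value that behaved well on the 24GB card above

def set_streaming_pool(ini_path: Path, pool_mb: int) -> None:
    lines = ini_path.read_text().splitlines() if ini_path.exists() else []
    # Drop any existing override so repeated runs don't stack duplicates.
    lines = [l for l in lines if not l.strip().startswith("r.Streaming.PoolSize")]
    if "[SystemSettings]" not in lines:
        lines.append("[SystemSettings]")
    # Place the cvar directly under the [SystemSettings] header.
    lines.insert(lines.index("[SystemSettings]") + 1, f"r.Streaming.PoolSize={pool_mb}")
    ini_path.write_text("\n".join(lines) + "\n")

set_streaming_pool(ENGINE_INI, POOL_SIZE_MB)
```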

FF15 is another game with VRAM issues, and the reason I originally upgraded my 1070 to a 1080 Ti. The game had amazing visuals when you turned on all the hidden options and used the 4K texture pack alongside them, but the VRAM usage was crazy high. In addition, if Nvidia grass was enabled it had a nasty memory leak: the game overflowed VRAM usage into system RAM, so if you ran out of VRAM the game didn't crash, you just had a large performance loss. However, it carried on leaking, and the entire Windows OS would hit a critical memory condition if you didn't restart the game before all virtual memory was consumed. FF15 is also what triggered me to upgrade from 16GB of RAM to 32GB. That bug never got patched; likewise, the FF7R issues haven't been patched.

I hope this post explains how easy it is for reviewers not to analyse these problems properly (I have read TPU's analysis of VRAM's effect on games and it's just embarrassingly lazy). When I think about every game I have played that has stuttering issues (usually JRPGs), the problems are always linked to texture asset swapping. Texture asset swapping is basically a hack to get around under-specced memory resources, initially developed for the Xbox 360 and PS3, which had an extremely low amount of memory, just a few hundred megs. Ironically the industry has kind of done a 180, and it's now PC GPUs that may be under-specced for VRAM, with consoles having more generous amounts. The UE4 engine, with the development of things like DirectStorage, seems to finally have its issues resolved, but of course PCs don't yet have that API, so the problems remain.
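
On the point about averaged fps hiding these stutters: they do show up if you look at per-frame frame times rather than polled averages. Below is a rough sketch, assuming a PresentMon-style CSV capture with an MsBetweenPresents column (other capture tools name the column differently); it simply flags frames that take several times the median frame time, which is roughly the kind of spike DF-style tooling surfaces.

```python
# Rough sketch: flag stutter spikes in a per-frame capture. Assumes a
# PresentMon-style CSV with an "MsBetweenPresents" column; adjust the file
# name and column for your capture tool.
import csv
import statistics

LOG = "capture.csv"    # hypothetical capture file
SPIKE_FACTOR = 3.0     # a frame taking 3x the median frame time counts as a stutter

with open(LOG, newline="") as f:
    frame_ms = [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

median_ms = statistics.median(frame_ms)
spikes = [(i, ms) for i, ms in enumerate(frame_ms) if ms > SPIKE_FACTOR * median_ms]

print(f"Average fps: {1000 / statistics.mean(frame_ms):.1f}")  # can look perfectly healthy
print(f"Median frame time: {median_ms:.2f} ms")
print(f"Stutter frames (> {SPIKE_FACTOR:.0f}x median): {len(spikes)}")
for i, ms in spikes[:10]:
    print(f"  frame {i}: {ms:.1f} ms")
```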
 
Cos 4090 goes brrrrrr, there will always be a group that will pay anything to get the latest and greatest. It is what it is.
There will be, but this time around that group will be much smaller. A lot of people only bought the 3090 because A) they couldn't get a 3080, B) mining was a thing, so they justified the price by the ability to mine back the difference, and C) the VRAM crew thought 10GB was a bit lean, so they paid £750 extra so they could feel good about running HD textures in FC6.
 
There will be, but this time around that group will be much smaller. A lot of people only bought the 3090 because A) they couldn't get a 3080, B) mining was a thing, so they justified the price by the ability to mine back the difference.

That is true, but then again the rumours are that Nvidia will launch the 4090 two or three months before any other RTX 4000 cards, and that will get them more sales than they would otherwise have had for it.

But yes, I confirmed that with a PC store, which told me they sold a lot of 3090s because people would come in or go online looking for a 3080 and face a choice: wait 3 months for a 3080, or get a 3090 today. Plenty coughed up for the latter and walked out of the store with the 3090.
 
That is true, but then again the rumours are that Nvidia will launch the 4090 two or three months before any other RTX 4000 cards, and that will get them more sales than they would otherwise have had for it.

But yes, I confirmed that with a PC store, which told me they sold a lot of 3090s because people would come in or go online looking for a 3080 and face a choice: wait 3 months for a 3080, or get a 3090 today. Plenty coughed up for the latter and walked out of the store with the 3090.
Trust me, Jensen isn't going to stand there and say we've only got one card releasing for the next 3 months.
 
AMD have the better rasterization vs VRAM balance; Nvidia with Ampere seemed to be focusing on the high fps/refresh crowd, who tend to tone down visual quality and just want performance. Curious whether Nvidia will address this with the 4000 series.
Sadly I think TPU's analysis of VRAM usage is pretty bad... DF pointed out that FF7 Remake has a widespread stuttering problem, which has been proven to be caused by textures and shaders swapping in and out of VRAM. DF noted that every time the game stutters, commonly used performance metric tools don't pick it up, so someone analysing GPU-Z or Afterburner logs would see no performance issues.

Another issue the review industry has with VRAM analysis is that they don't analyse texture streaming and texture quality. I found someone with a 3090 who was prepared to help me with testing, which we did live on Discord, and we discovered together that if you manually increase the VRAM budget on the UE4 console in FF7R, most of the issues disappear.

FF15 is another game with VRAM issues. I hope this post explains how easy it is for reviewers not to analyse these problems properly... so the problems remain.

Welcome, and thanks for sharing. Sadly it's "not an issue", apparently. I can see you hit the nail on the head, so full marks on your synopsis.
 
@Chris Thanks for the big write-up!

It seems reviewers really are missing a lot by just running canned benchmarks without thinking too much. All this dynamic behaviour can make things rather unpredictable. We'd hope reviewers would eventually figure it out, but since it is hard work I am not sure they will.
 
Sadly I think TPU's analysis of VRAM usage is pretty bad; the only reviewer who handles this stuff properly, in my opinion, is Digital Foundry... I hope this post explains how easy it is for reviewers not to analyse these problems properly... so the problems remain.
Agree overall.

Not sure I would divide the AMD vs Nvidia crowd as simply as "high fps/refresh vs visual quality", as Nvidia often beat AMD at 4K. Rasterization-wise AMD would have the lead, but with ray tracing (which is all about visuals), well, we all know RDNA 2 is useless here, and it seems to be the AMD owners who care more about fps/performance than about having better visuals with RT (I suspect that will change when AMD catch up though ;))

TPU isn't in depth, but it does give an indication that Nvidia possibly do handle it more efficiently than AMD on the driver side. Or, to look at it another way, perhaps Nvidia do this intentionally to avoid VRAM issues... I believe PCGamesHardware and/or ComputerBase have also noted the differences, but not in the same detail as DF (sadly a lot of people don't take them seriously because "shills").

Although I wouldn't necessarily say consoles have a "generous" amount of VRAM; as you said, it is still shared memory, so realistically they don't really have 16GB of VRAM. Not to mention they are reducing settings across the board, with very limited/zero RT (which increases VRAM usage quite a bit) and adaptive resolution (which, as you know, has a big impact on VRAM usage), in order to hit 60 fps.

As touched upon in your post, reviewers/end users also need to learn the difference between what is a "bug" and what is a genuine VRAM limitation. E.g. HZD: people used that as a showcase for VRAM limitations even though there were reports of texture rendering issues from higher-VRAM cards too (IIRC it was also affecting AMD hardware more), and when you look at the HZD patch notes there are several patches fixing/addressing the texture/VRAM leaks and issues. Then look at the fiasco of FC6 and its texture rendering issues: even the consoles were having problems, and Ubi's very own PR footage showed the issue, but nope, "needs more VRAM". Fast forward a couple of months and the texture rendering issue was fixed.

EDIT:

Reviewers/end users also need to learn what is actually happening when a game stutters or the fps drops (symptoms of VRAM issues/shortage), as these can be down to the game engine itself and not VRAM. E.g. UE4: if the game is stuttering, you'll have some saying "oh, not enough VRAM" when it's actually down to shader compilation. Again, something DF have always brought up but which is often ignored by some folks.
 
Yeah, it's not a like-for-like comparison. I expect the OS itself on the consoles is much more lightweight than Windows, as it's designed specifically for what the consoles do, so it won't have the same memory overheads as the Windows OS does; I expect that compensates to a degree for the lack of system RAM. I expect the other part of the jigsaw that makes it work is that the consoles have an optimised API for storage performance, which I believe will fix the issues on PC if MS ever get round to launching it. So I think it's a combination of increasing VRAM and working on optimisation for asset streaming, as we're not going to be able to get rid of streaming in modern games; I think it was still happening in the 3090 tests, just much less aggressively. From the UE4 debug log, in an area that caused a stutter every single time, we saw it would load the textures for the next area as you approached it; on the 3080 it then flushed shaders to make room after loading about 30% of the textures, proceeded to reload those same textures again, and right after that reloaded the shaders again, whilst the 3090 just loaded the new textures once and that was it. There was a nasty, inefficient chain reaction.

I can justify my belief that DirectStorage will fix this: I forgot to put in my post that I discovered if I either reduced the threading for texture loading in the UE4 settings, or throttled the texture loading speed, the stutters stopped or became micro-stutters, so there is a bottleneck when you try to load data too quickly through the existing APIs. I also discovered that moving the game from an NVMe drive to a spindle had the same effect. However, it wasn't cost free: loading screens were much slower, there was more pop-in, and textures took longer to hit max LOD when moving towards them.

According to DF's FF7R PS5 review, the resolution stays locked for the vast majority of the time. They gave it massive praise and were genuinely surprised at how bad it was on PC.

I would be open to a system being introduced for upgrading VRAM on an existing card, like one can do with system RAM: either via empty slottable VRAM sockets, or by using NVLink to hook a low-end card alongside just to use it as a VRAM expansion.
 
I would be open to a system being introduced for upgrading VRAM on an existing card, like one can do with system RAM: either via empty slottable VRAM sockets, or by using NVLink to hook a low-end card alongside just to use it as a VRAM expansion.

That would be great, but from watching Buildzoid's videos I can't see how that could ever be implemented, when memory chips need to be located as close as possible to the core (hence surrounding the core front and back), along with the power stages, and on top of that there's the need for cooling, given how hot these modules get.
Then there would be the cost involved.

Edit:
Years ago, when I "played" at upgrading my gaming laptop, there were MXM GPU cards from Nvidia and ATI/AMD. They were an upgrade option, but very costly, and I think they have died a death now.
 
Inevitably there would be a cost penalty, but probably not as much as having to buy a replacement GPU just to upgrade your VRAM. The losers would be those who don't care about VRAM, as that cost would likely be spread across all cards sold.

Realistically it's never going to happen. :) Just a dream of mine.
 
Upgradeable VRAM with full-speed chips (GDDR6, GDDR6X, GDDR7, HBM) would be very hard to implement. Signal trace lengths etc. would make it next to impossible.

What might be possible would be some local cache-level DDR5 etc., but even there the trend on mobile is to solder RAM now anyway, and the various low-voltage chips are fast but can't be used in DIMM-type packages. Apple went one further by moving the RAM on-package, and no, that wasn't just to prevent user upgrades; there are actual technical reasons too.
 
Love the new GPU cycle from: New GPU announced - It's going to be twice as fast!!!! -> Oh look it's a few percent faster -> New GPU announced - This time, this time it's going to be a massive leap -> Repeat.
Yup, been through this **** too many times to be caught out with that again. Looks promising though, and hopefully will translate to gaming rather than being a big synthetic jump.
 
Yeah, it's not a like-for-like comparison. I expect the OS itself on the consoles is much more lightweight than Windows... I expect the other part of the jigsaw is the consoles' optimised API for storage performance, which I believe will fix the issues on PC if MS ever get round to launching it... I would be open to a system being introduced for upgrading VRAM on an existing card.
It will definitely be lighter, but I imagine it would still be using at least a couple of GB just for background stuff, especially if people have other things running. Do the current-gen consoles still force-close background apps when launching a new game? I remember my PS4 Pro (obviously completely different hardware compared to the PS5) would ask me to fully exit/force-close said background apps before launching a game.

Another factor is people's hardware combination and what they have running. For example, in my Resident Evil Village (another title that supposedly benefits from more VRAM) 3080 4K gameplay video there were zero hitches/stutters, whereas Bang4Buck's footage with his 3090 had a couple of severe frame-latency spikes (in the same area, with the game running/recording for the same length of time). In theory his setup should not have encountered these, but for whatever reason(s) it did... if it had been the other way round, you can guarantee people would automatically have jumped on the "not enough VRAM" bandwagon. A lot of the time the issues are self-inflicted too, i.e. Resizable BAR on Nvidia, where people force it on with third-party tools when it might cause issues (this caused my low-fps issue in FC6 even at 1440p; with it off, no issues), and/or maxing out the shader cache option to utilise VRAM that isn't there.

I think we have two games coming soon utilising DirectStorage, but we're obviously still waiting to see whether it's the Windows or/and driver end that needs to enable it. I do think it is going to be a big game changer for how games are designed, as well as for the kind of hardware specs we should be focusing on in the future.
 