
NVIDIA 4000 Series

From what I've seen so far, even in games that have real 4K textures, of which there are very few, 24GB is enough, and in current games it's enough even at 8K resolution. 24GB would not be enough if games had real 8K textures, but so far no games do.

Overall I'd say 16GB is enough for high-end GPUs in 2024. I don't see more than that being mandatory until next-gen consoles. When next-gen consoles do arrive in 2027, I'd guess that 32GB of VRAM will be what's recommended for high-end GPUs.

This is what I believe will happen after next-gen consoles launch in a few years:

Entry GPU: 16GB VRAM
Mid-range GPU: 24GB VRAM
High-end GPU: 32GB VRAM
Ultra high-end GPU: 32GB or more, if only for the lulz, but I don't see more than 32GB being required by any game even when gaming on an 8K screen with 8K texture packs
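
To put rough numbers on that, here's a back-of-the-envelope sketch (the 1 byte/pixel block-compression rate, ~33% mipmap overhead and the 500-unique-texture count are all illustrative assumptions, not measurements from any particular game):

```python
# Rough texture VRAM estimate. Assumptions (illustrative only): BC7-style
# block compression at ~1 byte per pixel, plus ~33% extra for mipmaps.
def texture_mib(side_px: int, bytes_per_px: float = 1.0, mip_overhead: float = 1.33) -> float:
    """Approximate VRAM footprint of one square texture, in MiB."""
    return side_px * side_px * bytes_per_px * mip_overhead / (1024 ** 2)

for label, side in [("4K (4096 px)", 4096), ("8K (8192 px)", 8192)]:
    one = texture_mib(side)
    # Assume a hypothetical scene streaming ~500 unique textures at once.
    print(f"{label}: ~{one:.0f} MiB each, ~{one * 500 / 1024:.1f} GiB for 500 unique textures")
```

On those made-up but plausible numbers, a 4K texture set fits comfortably inside a 24GB card, while a true 8K set alone would overflow it, which is the gap being pointed at above.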
 
From what I've seen so far, even in games that have real 4K textures, of which there are very few, 24GB is enough, and in current games it's enough even at 8K resolution. 24GB would not be enough if games had real 8K textures, but so far no games do.
I played Starfield with 4K texture mods (60GB worth) and now play Cyberpunk with 4K upscaled textures. Starfield used up to 20GB of VRAM at times after that, while Cyberpunk doesn't use more than 13GB for the game process (not total allocation). This just highlights how far efficient optimisation goes for memory use. Starfield's textures weren't all being loaded at once either: in Akila City, just staring at an empty wall gave high VRAM usage, and considering everything is behind a loading screen, why is it using so much VRAM? Yet go to Neon with the same 60GB of texture mods and the game's process sits at under 10GB of VRAM. This is all at 5160x2160, by the way, and playing with or without DLSS makes a tiny difference to VRAM use, not even a 2GB delta.

Texture compression is used all over the place anyway and the GPU is able to decompress super quick without issues.

Since 5160x2160 is above normal 4K, with a standard 16:9 4K setup out of the box basically all games will be fine with 16GB of VRAM at any setting. The only exceptions will be poorly optimised games, which... seem to be plentiful these days.
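
For anyone who wants to sanity-check the per-process vs total distinction themselves, here's a minimal sketch using Python's NVML bindings (pynvml); it assumes an NVIDIA card, and per-process figures can come back empty on some driver/OS combinations, so treat it as a rough cross-check rather than a definitive tool:

```python
# Minimal sketch: total VRAM in use vs what each process reports using.
# Requires an NVIDIA GPU and the nvidia-ml-py package (pynvml bindings).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"Total VRAM in use: {mem.used / 1024**3:.1f} / {mem.total / 1024**3:.1f} GiB")

# Per-process usage; may be None on some platforms.
for proc in pynvml.nvmlDeviceGetGraphicsRunningProcesses(handle):
    used = proc.usedGpuMemory
    print(f"  PID {proc.pid}: {used / 1024**3:.1f} GiB" if used else f"  PID {proc.pid}: n/a")

pynvml.nvmlShutdown()
```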
 
Compared to the only other game with path tracing, Night City is a world full of things to blow up and shoot. I'm talking about Alan Wake 2, of course. I'm not including Portal RTX and the like, as they aren't fully finished releases but more tech previews, whereas AW2 and CP2077 are released products with mature path tracing.

As for the claim that light sources can't be destroyed, this is not true; many can't be, but others can:


Strangely, the street lampposts down in the city itself blow out without the dimming delay seen above out in the hills; maybe a spacetime continuum thing or some such :eek:

Quote my entire post; I did say there that street lights and headlights can be destroyed :p

In the image below, for example, you can detonate a grenade, take a machine gun and go nuts, whatever; you can't destroy anything and the lights stay on. So do the ones on the billboards. You can run down street lights and damage or turn your headlights off/on. That's it. Even V doesn't have a reflection :)) From this point of view, Metro EE is a lot better.

I'd add that some minor floodlights I just saw can be damaged as well and don't come back on, or you can shut them off. I can't think of anything else right now.
Blow up or destroy things? Hardly. And this isn't to be compared with AW2; that's a different type of game. Mafia and GTA (perhaps Far Cry?) would be more likely candidates for comparison in this regard...

They're still cutting corners like AW2 does (or haven't bothered to implement it all properly), but to a different extent. Some things react quicker than others to changes; sadly there's no way to improve this via settings.

Some silly stuff: shoot a candle and it goes away, but the flame remains in the air and still lights up the place :)) Or your reflection is rendered on surfaces that are very rough, but not on clean ones, so you get funny "artifacts" on surfaces that have both.

But overall it's still a very nice showing. Probably the best.

 
I played Starfield with 4K texture mods (60GB worth) and now play Cyberpunk with 4K upscaled textures. Starfield used up to 20GB of VRAM at times after that, while Cyberpunk doesn't use more than 13GB for the game process (not total allocation). This just highlights how far efficient optimisation goes for memory use. Starfield's textures weren't all being loaded at once either: in Akila City, just staring at an empty wall gave high VRAM usage, and considering everything is behind a loading screen, why is it using so much VRAM? Yet go to Neon with the same 60GB of texture mods and the game's process sits at under 10GB of VRAM. This is all at 5160x2160, by the way, and playing with or without DLSS makes a tiny difference to VRAM use, not even a 2GB delta.

Texture compression is used all over the place anyway and the GPU is able to decompress super quick without issues.

Since 5160x2160 is above normal 4K, with a standard 16:9 4K setup out of the box basically all games will be fine with 16GB of VRAM at any setting. The only exceptions will be poorly optimised games, which... seem to be plentiful these days.

I suspect you mean 5120x2160. What monitor are you using and what DLSS setting in CB?
 
Well, they've learned that FOMO sells more GPUs than reviews now... So have some retailers... :rolleyes:

It never fails to amaze me that new GPU launches always have some kind of 'shortage' associated with them. Cue retailers drooling, as they know it gives them free rein to crank prices up a notch (as they have done for literally years at this point). To say this is getting old would be an understatement.
 
I suspect you mean 5120x2160. What monitor are you using and what DLSS setting in CB?
Nope 5160x2160:

[screenshot]


On the AW3423DW. At 4K or above the DLSS setting is Performance which is what it should be for the best balance of fps and image clarity.
 
Nope 5160x2160:

[screenshot]


On the AW3423DW. At 4K or above the DLSS setting is Performance which is what it should be for the best balance of fps and image clarity.

Ah, I see. I've never really played with Super Resolution, but isn't that just upscaling with DLSS only to downscale again to the res of the monitor? What's the benefit of that?
 
There's no downscaling back to native res afterwards. It renders at 5160x2160 if DLSS is off or DLAA is used; if DLSS is used, it of course renders internally at a lower res depending on the DLSS mode, then outputs at the selected output res (5160x2160).

It's purely a feature to use on a game-by-game basis, as not all games look any different using DLDSR, but many do, since you're cramming a much denser output resolution onto a 3440x1440 native monitor. OK, the PPI won't be as high as a native 5120x2160 ultrawide, but it's good enough and gives a crisper image depending on the game. It can also be used to mitigate a CPU limitation: if a particular game is unoptimised for the CPU and the GPU is basically walking along aimlessly, it gives the GPU some work to do, relieving the CPU while delivering a better image output.
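
If it helps to visualise what the GPU is actually rendering internally when DLDSR and DLSS are combined, here's a quick sketch; the per-mode scale factors are the commonly quoted approximate ones (roughly 67/58/50/33% per axis), so treat the outputs as ballpark figures rather than exact values:

```python
# Approximate internal render resolution for a 5160x2160 DLDSR output,
# using commonly quoted per-axis DLSS scale factors (approximate values).
OUTPUT = (5160, 2160)
DLSS_SCALE = {
    "DLAA / native": 1.0,
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

for mode, s in DLSS_SCALE.items():
    w, h = round(OUTPUT[0] * s), round(OUTPUT[1] * s)
    print(f"{mode:>17}: renders ~{w}x{h}, outputs {OUTPUT[0]}x{OUTPUT[1]}")
```

So Performance mode at a 5160x2160 output is rendering roughly 2580x1080 internally before being reconstructed to 5160x2160 and then scaled down to the 3440x1440 panel.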
 
I played Starfield with 4K texture mods (60GB worth) and now play Cyberpunk with 4K upscaled textures. Starfield used up to 20GB of VRAM at times after that, while Cyberpunk doesn't use more than 13GB for the game process (not total allocation). This just highlights how far efficient optimisation goes for memory use. Starfield's textures weren't all being loaded at once either: in Akila City, just staring at an empty wall gave high VRAM usage, and considering everything is behind a loading screen, why is it using so much VRAM? Yet go to Neon with the same 60GB of texture mods and the game's process sits at under 10GB of VRAM. This is all at 5160x2160, by the way, and playing with or without DLSS makes a tiny difference to VRAM use, not even a 2GB delta.

Texture compression is used all over the place anyway and the GPU is able to decompress super quick without issues.

Since 5160x2160 is above normal 4K, with a standard 16:9 4K setup out of the box basically all games will be fine with 16GB of VRAM at any setting. The only exceptions will be poorly optimised games, which... seem to be plentiful these days.

Bet you're talking about allocated VRAM again :cry:
 
There's no downscaling back to native res afterwards. It renders at 5160x2160 if DLSS is off or DLAA is used; if DLSS is used, it of course renders internally at a lower res depending on the DLSS mode, then outputs at the selected output res (5160x2160).

It's purely a feature to use on a game-by-game basis, as not all games look any different using DLDSR, but many do, since you're cramming a much denser output resolution onto a 3440x1440 native monitor. OK, the PPI won't be as high as a native 5120x2160 ultrawide, but it's good enough and gives a crisper image depending on the game. It can also be used to mitigate a CPU limitation: if a particular game is unoptimised for the CPU and the GPU is basically walking along aimlessly, it gives the GPU some work to do, relieving the CPU while delivering a better image output.

When I said "downscaling", I meant the monitor itself which, even if it's fed a 5160x2160 signal, has to downscale it to display it.
 
When I said "downscaling", I meant the monitor itself which, even if it's fed a 5160x2160 signal, has to downscale it to display it.
Ah OK, yeah, that's fair. The scaling factor still makes a visible difference to image quality as per my comment, packing dense pixels into a smaller screen space; whether it's visually scaled by the GPU or the display (a setting in NVCP) doesn't matter though. Scaling quality matters more when you're outputting a smaller resolution on a larger native-res display, like 2560x1080 on a 3440x1440; that's where you'd want a built-in scaler on the monitor doing all the work for the best quality. That memory goes back to the early days of 34" ultrawides, when GPUs weren't powerful enough to run the native output res, so you had to drop one res down at 21:9 and hope the scaler on the monitor was decent :p

Bet you're talking about allocated VRAM again :cry:
Probably bait as usual :p But RTSS's "GPU VRAM usage" figures, both total and per-process, are actual usage, not allocation. Unwinder confirmed this (IIRC) when the feature was added to MSIAB way back, and there's been discussion of it around Reddit. The only variation appears to be between UWP-based apps/games and native Win32 games. Others have done the comparisons, and the updated VRAM usage metric aligns with developers' own performance overlays, like the one in MSFS 2020, where the VRAM use matches RTSS with only slight variances, which is to be expected as the two poll at different rates.
 
Probably bait as usual :p But RTSS's "GPU VRAM usage" figures, both total and per-process, are actual usage, not allocation. Unwinder confirmed this (IIRC) when the feature was added to MSIAB way back, and there's been discussion of it around Reddit. The only variation appears to be between UWP-based apps/games and native Win32 games. Others have done the comparisons, and the updated VRAM usage metric aligns with developers' own performance overlays, like the one in MSFS 2020, where the VRAM use matches RTSS with only slight variances, which is to be expected as the two poll at different rates.

I haven't used RTSS in a while, so I can't confirm, but that's not how it used to be. @Nexus18, is he right? :p
 
The simplest way now, as of the Windows 10 Fall Creators Update, is Task Manager though; it shows both allocated/requested and actual usage:

"The memory information displayed comes directly from the GPU video memory manager (VidMm) and represents the amount of memory currently in use (not the amount requested). Because these are exposed from VidMm this information is accurate for any application using graphics memory, including DX9, 11, 12, OpenGL, CUDA, etc apps.

Under the performance tab you’ll find both dedicated memory usage as well as shared memory usage.

Dedicated memory represents memory that is exclusively reserved for use by the GPU and is managed by VidMm. On discrete GPUs this is your VRAM. On integrated GPUs, this is the amount of system memory that is reserved for graphics. (Note that most integrated GPUs typically use shared memory because it is more efficient).

Shared memory represents system memory that can be used by the GPU. Shared memory can be used by the CPU when needed or as “video memory” for the GPU when needed.

If you look under the details tab, there is a breakdown of GPU memory by process. This number represents the total amount of memory used by that process. The sum of the memory used by all processes may be higher than the overall GPU memory because graphics memory can be shared across processes."
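
If you'd rather pull the same VidMm-backed numbers programmatically than eyeball Task Manager, something like the sketch below works; it assumes the "GPU Adapter Memory" and "GPU Process Memory" performance counter sets that arrived alongside the Task Manager GPU view are present on your install:

```python
# Sample the VidMm-backed GPU memory counters that Task Manager surfaces.
# Windows only; assumes the "GPU Adapter Memory" / "GPU Process Memory"
# performance counter sets are available on this install.
import subprocess

COUNTERS = [
    r"\GPU Adapter Memory(*)\Dedicated Usage",
    r"\GPU Adapter Memory(*)\Shared Usage",
    r"\GPU Process Memory(*)\Dedicated Usage",
]

for counter in COUNTERS:
    # typeperf prints a CSV header followed by one sample (-sc 1 = one sample).
    result = subprocess.run(["typeperf", counter, "-sc", "1"],
                            capture_output=True, text=True)
    print(result.stdout.strip())
```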


I guess that Reddit post should be stickied in that VRAM thread, @Nexus18 :cry: It provides crystal-clear clarification now, especially:

What this Post is:

- You can see your own VRAM usage, anytime


What this statement isn't:

- '10GB GDDR6X VRAM isn't enough'

- 'MOST current games at 4k use over 8gb VRAM' (they don't)

:D

Edit: check it out:

[Task Manager screenshots]


So for me, what that shows is that over 12GB of VRAM is in use even though the game process alone is only using 11.1GB at that moment. Windows DWM, File Explorer and Firefox are taking up over 1GB of VRAM just sitting idle, lol.
It's a good way to measure your memory usage in gaming, tbh, and to work out whether you could make your gaming experience more efficient: when physical memory runs low, whether VRAM or system RAM, the OS has to do more thrashing between virtual memory and actual memory.
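
On the system-RAM side, a quick way to spot that kind of pressure is something like this psutil sketch (the 90% threshold is just an arbitrary illustration of where paging tends to become noticeable, not a hard rule):

```python
# Quick check of physical RAM and pagefile/swap pressure using psutil.
import psutil

ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"RAM:  {ram.used / 1024**3:.1f} / {ram.total / 1024**3:.1f} GiB ({ram.percent}%)")
print(f"Swap: {swap.used / 1024**3:.1f} / {swap.total / 1024**3:.1f} GiB ({swap.percent}%)")

# Arbitrary illustrative threshold: sustained very high RAM use while gaming
# is usually where paging (and the stutter that comes with it) starts to show.
if ram.percent > 90:
    print("Physical RAM is getting tight; expect more paging activity.")
```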
 

Guess the RTX 4070, RTX 4070 SUPER, RTX 4070 Ti SUPER and RTX 4080 SUPER will be in short supply worldwide, so... no price cuts?

If there were shortages, they wouldn't bother releasing new cards in the first place. They would just sell the current ones at the current price!
 
Maybe using next-gen AI tools I can have a crack at developing a demo for a few game ideas I have in the future :)
Well, if Epic really wanted, they could put out a very simple version of UE5 with assets (and an asset market) for all users to make their own games.
There are already editors out there that make this work for a lot of things. It would basically be like modding a game or making a new map, the way "old school" games allowed. Adding new assets would also be easy and not as much of a performance problem as it is now, since they'd be handled through Nanite. Take a 15-20% cut of sales on their store. Easy money. :)

That would be the biggest revolution in gaming ever.

From what I've seen so far, even in games that have real 4K textures, of which there are very few, 24GB is enough, and in current games it's enough even at 8K resolution. 24GB would not be enough if games had real 8K textures, but so far no games do.

Overall I'd say 16GB is enough for high-end GPUs in 2024. I don't see more than that being mandatory until next-gen consoles. When next-gen consoles do arrive in 2027, I'd guess that 32GB of VRAM will be what's recommended for high-end GPUs.

This is what I believe will happen after next-gen consoles launch in a few years:

Entry GPU: 16GB VRAM
Mid-range GPU: 24GB VRAM
High-end GPU: 32GB VRAM
Ultra high-end GPU: 32GB or more, if only for the lulz, but I don't see more than 32GB being required by any game even when gaming on an 8K screen with 8K texture packs

8K will be a pipe dream for a very long time when it comes to anything demanding.
 