
Blackwell GPUs

Hogwarts Legacy is a buggy game with a memory leak and constant traversal stutter that uses up most of the RAM/VRAM, and it has not been and will never be fixed. They are now working on Hogwarts Legacy 2 and, IIRC, a remaster of the first game, so expect any "fixes" to land in those instead, given they gave up on the first game like EA did with Jedi Survivor.

More recent games, though, do make efficient use of as much VRAM as they need, typically up to 14-16GB at 4K (upscaled), without having to cache to system RAM, which obviously has a performance impact. And don't forget that an extra 2GB of VRAM is potentially needed if you're using tech like Frame Gen...
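To put rough numbers on that, here's a crude budget sketch; every figure is an assumption lifted from the estimates in this thread rather than a measurement:

```python
# Back-of-the-envelope VRAM budget for a 16GB card.
# All figures are assumptions taken from the numbers discussed above.
CARD_VRAM_GB  = 16.0
GAME_AT_4K_GB = 14.0   # low end of the 14-16GB figure for recent games at 4K
FRAME_GEN_GB  = 2.0    # extra buffer when Frame Generation is enabled

headroom = CARD_VRAM_GB - (GAME_AT_4K_GB + FRAME_GEN_GB)
print(f"Headroom before anything else touches VRAM: {headroom:+.1f} GB")  # +0.0 GB
```

Zero headroom before Windows or any background app has taken its share, which is where the spill into system RAM comes from.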
 
So games are already at 14-16GB as of launch day, with a hard cap of 16GB for all but one SKU of the stack. :(
 
If a game needs more VRAM than the GPU has, 16GB in the case of the 5080, it will just spill over into system RAM, the percentile lows will be impacted as a result, and the likelihood of stutter is amplified. The 5080 may well be powerful, but power means nothing if the latest game engine needs more VRAM to run at its best (at the highest settings). Of course, a 3080 is fine at up to 1440p in most games. I had a 3080 Ti FE and played Cyberpunk with path tracing at 2560x1080 (with DLSS Performance), for example, though 12GB and above of VRAM use is typical at 4K for the game alone, not including background apps and Windows DWM.

Also, many people forget that it's not just the game using VRAM; up to a couple of GB is used by Windows and background apps whilst gaming too.
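If you want to see that on your own system, here's a minimal sketch (assuming an NVIDIA card with nvidia-smi on the PATH; the reported figure includes the game, Windows DWM and background apps together):

```python
import subprocess

# Query total VRAM in use vs. total VRAM on the card (values in MiB).
# This counts everything resident, not just the game.
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.used,memory.total",
     "--format=csv,noheader,nounits"],
    capture_output=True, text=True, check=True,
).stdout.strip()

used_mib, total_mib = (int(x) for x in out.split(","))
print(f"VRAM in use: {used_mib} MiB of {total_mib} MiB ({used_mib / total_mib:.0%})")
```

Run it before launching a game and you can see how much is already gone just from the desktop and whatever is open in the background.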
 

I'd be surprised if some new compression technology, or GDDR7's access speeds, doesn't negate the need to store so much. We'll find out soon.
 
Could well be; we'll find out what's what in the coming weeks. I suspect the only real advantage of RTX 50 will be neural rendering, with DLSS 4 launching alongside RTX 50 but not exclusive to it, though it will probably run faster on RTX 50 because of the neural stuff.
 
Also, system RAM has only just entered 32GB territory. I remember I was using 16GB up until the 30 series and halfway through the 40 series, until games started listing 16GB as minimum/recommended, which is a very recent phenomenon.
Meanwhile, the VRAM requirement has been growing at an exponential rate. It's alarming; where do we stop?
 
The flip side is that even if you do have a 5090 with 32GB of VRAM, if only one super-expensive SKU is going to have that much, why would devs spend time making high-resolution textures that less than 1% of the target audience will be able to use? It's really a bad situation for everyone (for gaming).

That being said, if the bar is realistically 16GB because of the manufacturing side, games can be made with a clear single target for the higher resolutions/settings.
 
Remember that the consoles run with DRS and upscaling as standard for the vast majority of games, and in many games even low/medium settings on PC are higher fidelity than the Quality mode on consoles.

For the same reason, Digital Foundry did a discussion on that very topic: shipping very demanding settings futureproofs a game, but because of the mentality of the PC gaming community these days, especially the younger age group of PC gamers who refuse to accept that they can't run a game at Ultra 4K just because the option is available on their old GPU and will then moan about it online endlessly, some devs began to rename or hide settings.

Remember when Crysis launched: nobody could max it out and everybody enjoyed trying to; nobody moaned that their GTX was being crippled. It became infamous for the right reasons, and that's why the meme was born. That era is sadly never going to be repeated because of the mentality of the community now.
 
HU and Daniel Owen made videos this year addressing the topic, and user interpretation can take it many different ways. Regarding Blackwell, I think Nvidia will release either Supers or special editions later on with larger VRAM buffers if the community kicks off over it.
I think everyone who needed an upgrade (usually from the GTX 10 and RTX 20 series) did so with the 30 or 40 series. I'm not sure the appetite or demand is there for Blackwell, and I expect Supers to be rolled out in 12 months with more VRAM and shaders for the lower-end SKUs.
 
I like demanding games, but only if the game runs well on good hardware of the time; if it runs like ass then forget it. I don't see the point in a game that's only going to run well on hardware released years after it comes out.
 
That's the wrong outlook though. Crysis still looked amazing at the time even when it wasn't running at the highest settings, for example. In fact, almost all games released in the last few years still look amazing on a mix of medium and high instead of slogging about at low fps on Ultra or Epic. Speaking of which, games using UE5 look almost identical on Ultra vs Epic, whilst the Cinematic preset is still an option in many UE games even though it's a setting intended for feature films created using UE.

I've actually enjoyed going back to older games that I used to play on the 2070 Super and 3080 Ti, now at a higher resolution and with settings that weren't possible before; both of those old cards were the peak of performance for their time as well.
 
Yup, I really enjoy playing ‘older games’ on maxed settings. It allows you to just focus on playing the game, rather than faffing around hunting for the best compromise.
 
Honestly, I think a lot of the whinging on the internet would be solved by a game detecting what card you have and adjusting the naming of the presets based on that.

People complain about only being able to play something like Indiana Jones on 'Low' settings when in fact the 'Low' setting is higher quality than what the Xbox Series X runs, but no, low = bad!

If the game detected that the best you could run was Low, it would rename the Low preset to 'ULTRA' and hide the higher ones; everyone would be happy and peace and harmony would return to the world :cry:
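Tongue in cheek, but the logic really is that simple. A toy sketch of the idea (the VRAM thresholds and preset names are invented for illustration, not taken from any real game):

```python
# Hypothetical preset relabelling: detect how much VRAM the card has,
# work out the best preset it can realistically run, call that "ULTRA"
# and hide everything above it. Thresholds and names are made up.
def best_preset(vram_gb: float) -> str:
    if vram_gb >= 16:
        return "Ultra"
    if vram_gb >= 12:
        return "High"
    if vram_gb >= 8:
        return "Medium"
    return "Low"

def visible_presets(vram_gb: float) -> list[str]:
    ladder = ["Low", "Medium", "High", "Ultra"]
    shown = ladder[: ladder.index(best_preset(vram_gb)) + 1]
    shown[-1] = "ULTRA"   # the best you can run is now "ULTRA"
    return shown          # anything above it is simply hidden

print(visible_presets(8.0))    # ['Low', 'ULTRA']
print(visible_presets(24.0))   # ['Low', 'Medium', 'High', 'ULTRA']
```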
 
We've heard about "compression" magic for literally 20 years, most often before new GPU launches when there's gnashing of teeth over whether the new models' VRAM is enough. The topic, i.e. the supposition that NV/ATI/AMD would somehow save the day with compression, has come up notably before the 4870 (256MB), 780/Ti, 980, 290X and Fiji/Fury launches. Further back, nV even released a late-cycle 512MB version of the 7800 to placate enthusiasts (and to capitalize on 256MB not being enough). It never, ever comes to anything, and shortly after, people are rending their clothes over having bought a card with too little memory. As for "GDDR7 access speeds", if memory speed were a solution, HBM would have solved this.

Compression does work, but it's expensive from a performance perspective, and no magic sauce will surmount the albatross that decompression takes time/cycles. Even if there were a solution here, which there isn't, nV is in fact motivated not to implement it, as they're using the just-enough VRAM spec as a carrot to get consumers to upgrade once new models are released.
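For a sense of what compression already buys today, here's a back-of-the-envelope comparison using standard BC7 block compression (the block sizes come from the BCn format specs; the texture size is just an example):

```python
# Rough texture-memory arithmetic, for illustration only.
# RGBA8 stores 4 bytes per texel; BC7 stores a 4x4 block in 16 bytes (1 byte/texel).
def texture_mib(width: int, height: int, bytes_per_texel: float, mips: bool = True) -> float:
    base = width * height * bytes_per_texel
    if mips:                      # a full mip chain adds roughly one third
        base *= 4 / 3
    return base / 2**20

print(f"4096x4096 RGBA8: {texture_mib(4096, 4096, 4):.0f} MiB")  # ~85 MiB
print(f"4096x4096 BC7:   {texture_mib(4096, 4096, 1):.0f} MiB")  # ~21 MiB
```

Games already ship with this kind of block compression baked in, which is part of why a further "free" saving on top of it is such a big ask.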
 

I think Nvidia already offers that kind of detection via GeForce Experience: it detects the games in your library and then suggests an 'optimal' configuration for each one. On an enthusiast forum like this I see people not using it, as they want to customise things themselves, and the bloat is another factor in why it's not popular.
 