• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Blackwell GPUs

I think Nvidia offers this already with GeForce Experience - it detects the games in your library and then suggests what the 'optimal' configuration is for each one. On an enthusiast forum like this I see people not using it because they want to customise settings themselves, and the bloat is another factor in why it's not popular.
It's very hit and miss. The new Nvidia app doesn't even detect what games I've got installed.

It just really grinds my gears every time a new PC game is about to come out and the system requirements chart gets posted to Reddit or Twitter or something, and everyone's like "OMG MY PC CAN ONLY PLAY MEDIUM WTF UNOPTIMISED TRASH DEVS" without even knowing what 'medium' looks like. They're so fixated on labels it's just insane to me.
 
I think Nvidia offers this already with GeForce Experience - it detects the games in your library and then suggests what the 'optimal' configuration is for each one. On an enthusiast forum like this I see people not using it because they want to customise settings themselves, and the bloat is another factor in why it's not popular.

They don’t mean auto optimisation; they mean it detects you have a 4060, renames medium to “ultra” and locks high and ultra out entirely. That way you delude yourself that you are playing at the highest settings.

Well that is until @mrk posts a dozen screenshots of his 5090 showing actual “ultra” :D
 
We've heard about "compression" magic for literally 20 years, most often before new GPU launches when there's gnashing of teeth over whether the new models' VRAM is enough. The topic, i.e. the supposition that NV/ATI/AMD would somehow save the day with compression, has come up notably before the 4870 (256MB), 780/Ti, 980, 290X and Fiji/Fury launches. Further back, NV even released a late-cycle 7800 512MB version to placate enthusiasts (and capitalize on 256MB not being enough). It never, ever comes to anything, and shortly after, people are rending their clothes over having bought a card with too little memory. As for "GDDR7 access speeds", if memory speed were a solution, HBM would have solved this.

Compression does work, but it's expensive from a performance perspective -- and no magic sauce will surmount the fact that decompressing takes time/cycles. Even if there were a solution here -- which there isn't -- NV is in fact motivated not to implement it, as they're using the just-enough VRAM spec as a carrot to get consumers to upgrade once new models are released.
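To make that trade-off concrete, here's a minimal CPU-side sketch using Python's zlib as a stand-in for whatever on-card scheme people imagine (not how a GPU memory controller actually works): compression shrinks the data, but the decompression cost has to be paid every time the data is needed.

```python
import time
import zlib

# Hypothetical stand-in for texture data: 16 MiB of semi-repetitive bytes.
data = bytes(range(256)) * (16 * 1024 * 1024 // 256)

t0 = time.perf_counter()
packed = zlib.compress(data, level=6)      # space saved up front...
t1 = time.perf_counter()
unpacked = zlib.decompress(packed)         # ...but this cost is paid on every access
t2 = time.perf_counter()

print(f"original:   {len(data) / 2**20:.1f} MiB")
print(f"compressed: {len(packed) / 2**20:.1f} MiB")
print(f"compress:   {t1 - t0:.3f} s")
print(f"decompress: {t2 - t1:.3f} s")
assert unpacked == data
```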
Interesting. So what effect does the speed of the VRAM have, if any? With GDDR7 being so expensive you'd hope it results in gains.
 
If people are actually "playing" the game.. do the "Ultra" / "High" / "Low" settings matter that much..

I don't play many games these days, but when I did, when actually playing I barely noticed the odd shadow or reflection, or its quality!

I understand there are those that like to take screenshots, and have them look their absolute best.. but for those that actually play? Do you move around slowly saying oooo and arrrrr at the fact you can see the slightly better reflections and shadows?

I guess for me the gameplay is king, if it plays well.. I don't overly care if the graphics are not ultra realistic.. and conversely if the game is utter pap, I don't care if it looks like each frame has been painted by a master, I'm just not going to continue playing..
 
Interesting. So what effect does the speed of the VRAM have, if any? With GDDR7 being so expensive you'd hope it results in gains.
I'm not sure if you're poking fun, but.... Memory speed primarily improves performance by streaming data (textures, shaders, and geometry) to the GPU, which then renders it into images and effects. The faster the memory, the faster data is available to the GPU for rendering. There are other things that affect actual memory bandwidth besides memory speed (and timings), e.g. bus width, but generally the faster the memory, the faster the GPU gets "fed" to render images.

Edit: Think of it as an old-time steam engine where you have a three-cubic-metre container for coal (your memory size), a coalman/shoveller (your memory speed), the size of the shovel and the aperture in the furnace (bus width), and the furnace itself (your GPU). You could maximise the amount of coal mass that fits in the container by reducing (compressing) the coal bit size, but you'd need to somehow reconstitute the coal bits before they reached the furnace.
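To put rough numbers on the shovel-size/shoveller-speed point, peak bandwidth is just per-pin data rate times bus width. The figures below are assumptions for illustration, not confirmed RTX 50 specs.

```python
def peak_bandwidth_gb_s(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s: per-pin rate times bus width in bytes."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# Assumed example configurations, for illustration only.
print(peak_bandwidth_gb_s(21.0, 192))  # 504 GB/s: GDDR6-class memory on a 192-bit bus
print(peak_bandwidth_gb_s(28.0, 192))  # 672 GB/s: faster (GDDR7-class) memory, same bus
print(peak_bandwidth_gb_s(28.0, 256))  # 896 GB/s: faster memory *and* a wider bus
```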

With compression, you'd need to first compress the data in-mem, then run decompression for it to be passed to the GPU. The second part is where you invariably lose performance regardless of how good/efficient the decompression algos are.
 
Well that is until @mrk posts a dozen screenshots of his 5090 showing actual “ultra” :D

I am being attacked!

Don’t you mean:

[image]
 
Over the past decade, NVIDIA's flagship GPU prices have shown a notable upward trend. Here's a summary of the launch prices for their top-tier consumer graphics cards:

  • 2014: GeForce GTX 980 – $549
  • 2016: GeForce GTX 1080 – $599
  • 2020: GeForce RTX 3080 – $699
  • 2022: GeForce RTX 4080 – $1,199
  • 2025: GeForce RTX 5080 – $1,500 (rumoured)
When adjusted for inflation, the price increase remains significant.

For instance, the GTX 980's launch price of $549 in 2014 is approximately equivalent to $715 in today's dollars, still considerably lower than the RTX 4080's $1,199 launch price.
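As a rough sketch of that adjustment (the ~30% cumulative-inflation factor since 2014 is an assumption for illustration, not official CPI data):

```python
# Inflation-adjust the GTX 980 launch price with an assumed cumulative factor.
gtx_980_launch_usd = 549
cumulative_inflation_since_2014 = 1.30  # assumed, roughly 30%

adjusted = gtx_980_launch_usd * cumulative_inflation_since_2014
print(f"$549 in 2014 ≈ ${adjusted:.0f} today")                     # ≈ $714
print(f"RTX 4080 launch vs adjusted 980: {1199 / adjusted:.2f}x")  # ≈ 1.68x
```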


 
I recall paying around £650-£700 for my 980 and remember at the time thinking that was crazy and that I'd never spend more than that. Then I paid £1,400 for my 2080 Ti and didn't bat an eyelid paying £1,900 for my 4090.
 
Well that is until @mrk posts a dozen screenshots of his 5090 showing actual “ultra” :D

His 5090 is already on its way
 
I'm not sure if you're poking fun, but.... Memory speed primarily improves performance by streaming data (textures, shaders, and geometry) to the GPU, which then renders it into images and effects. The faster the memory, the faster data is available to the GPU for rendering. There are other things that affect actual memory bandwidth besides memory speed (and timings), e.g. bus width, but generally the faster the memory, the faster the GPU gets "fed" to render images.

With compression, you'd need to first compress the data in-mem, then run decompression for it to be passed to the GPU. The second part is where you invariably lose performance regardless of how good/efficient the decompression algos are.

If the speed is faster, would it not negate the need to store the data in the first place "just in case"? So it just gets what it needs, when it needs it, and you wouldn't need 16GB+ at any one time?

I don't know, I may be massively off the mark. It's a tough pill to swallow that Nvidia deliberately holds it back. Still, it's hard to deny most of us would rather have last-gen VRAM, but in greater quantities. It's probably less for gaming reasons though and more about not cutting into their workstation cards. Gaming is collateral damage.
 
If the speed is faster, would it not negate the need to store the data in the first place "just in case"? So it just gets what it needs, when it needs it, and you wouldn't need 16GB+ at any one time?
Not really, this assumes that the card knows each game's memory-use implementation, which would be impossible. The game might want to send everything it needs for an entire seamless open world in one go, so it needs to store at least 12GB, for example, if the settings being used were Ultra at 4K with DLSS, more for native. If the game sends 12GB+ of data to VRAM and there isn't enough VRAM, it has to spill over to system RAM. The benefit of GDDR7 would be that data already in VRAM will obviously be processed faster.
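As a toy illustration of that spill-over (every figure below is made up for the sake of the example): once the working set exceeds VRAM, the overflow has to be fetched from system RAM over PCIe at a fraction of the speed, and no amount of GDDR7 bandwidth helps with that part.

```python
# Toy spill-over model; all capacities and bandwidths are illustrative assumptions.
VRAM_GB = 12
VRAM_BW_GB_S = 672   # assumed GDDR7-class board
PCIE_BW_GB_S = 32    # roughly PCIe 4.0 x16

def full_pass_time_ms(working_set_gb: float) -> float:
    """Time to read the whole working set once, with overflow served from system RAM."""
    in_vram = min(working_set_gb, VRAM_GB)
    spilled = max(working_set_gb - VRAM_GB, 0)
    return (in_vram / VRAM_BW_GB_S + spilled / PCIE_BW_GB_S) * 1000

for ws in (10, 12, 14, 16):
    print(f"{ws} GB working set -> {full_pass_time_ms(ws):.1f} ms per full read")
```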
 

Not really, this assumes that the card knows each game's memory-use implementation, which would be impossible. The game might want to send everything it needs for an entire seamless open world in one go, so it needs to store at least 12GB, for example, if the settings being used were Ultra at 4K with DLSS, more for native. If the game sends 12GB+ of data to VRAM and there isn't enough VRAM, it has to spill over to system RAM. The benefit of GDDR7 would be that data already in VRAM will obviously be processed faster.

If a game needed it, wouldn't a request be made and fulfilled really quickly on GDDR7, then flushed once it's culled from the screen/area? A game must know what assets it's currently using in any given location? Even long-distance vistas usually use really low-res textures until the player gets closer.
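For what it's worth, that request-then-flush idea is roughly what texture/asset streaming already does; here is a hedged toy sketch (an LRU cache with a fixed VRAM budget, every name and size invented for illustration). The catch is that each miss still has to be streamed in over the PCIe bus, which is far slower than VRAM however fast GDDR7 is.

```python
from collections import OrderedDict

class AssetCache:
    """Toy LRU streaming cache: keeps the most recently used assets within a VRAM budget."""

    def __init__(self, budget_mb: int):
        self.budget_mb = budget_mb
        self.resident: OrderedDict[str, int] = OrderedDict()  # asset name -> size in MB

    def request(self, name: str, size_mb: int) -> str:
        if name in self.resident:
            self.resident.move_to_end(name)       # already in VRAM: cheap hit
            return f"{name}: hit"
        # Evict least recently used assets until the new one fits ("flush when culled").
        while self.resident and sum(self.resident.values()) + size_mb > self.budget_mb:
            self.resident.popitem(last=False)
        self.resident[name] = size_mb             # streamed in over PCIe: slow miss
        return f"{name}: miss (streamed in)"

cache = AssetCache(budget_mb=8192)
for asset, size in [("city_block_A", 3000), ("city_block_B", 3000), ("vista_lowres", 500),
                    ("city_block_C", 3000), ("city_block_A", 3000)]:
    print(cache.request(asset, size))
```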
 
No one knows, as no other GPU uses GDDR7 and none are planned to other than RTX 50. We will have to wait and see, but if a game pushes out a certain chunk of data to VRAM, it's not going to slice that chunk up for the sake of GDDR7. The engine doesn't care; it will send out what the devs program it to send to VRAM, and if it spills to RAM, then so be it.
 
No one knows, as no other GPU uses GDDR7 and none are planned to other than RTX 50. We will have to wait and see, but if a game pushes out a certain chunk of data to VRAM, it's not going to slice that chunk up for the sake of GDDR7. The engine doesn't care; it will send out what the devs program it to send to VRAM, and if it spills to RAM, then so be it.

Yeah, that's a good point about it not doing anything different just for GDDR7. I'm just coping in this cruel world as best as I can lol.
 
This is why I think Nvidia will pull the marketing spin tactic once again: they will claim that the lower VRAM on the lower non-Ti/Super RTX 50 cards doesn't matter as GDDR7 is much faster, but the benchmarks from outlets will once again show that this is just PR speak. Then the Super/Ti models come out with more VRAM and whoa, look, things are much better, though you then pay even more for that.

Meanwhile the 5090 pootles along thrashing everything in its own little world lol.
 
This is why I think Nvidia will pull the marketing spin tactic once again: they will claim that the lower VRAM on the lower non-Ti/Super RTX 50 cards doesn't matter as GDDR7 is much faster, but the benchmarks from outlets will once again show that this is just PR speak. Then the Super/Ti models come out with more VRAM and whoa, look, things are much better, though you then pay even more for that.

Meanwhile the 5090 pootles along thrashing everything in its own little world lol.

I submit. I'll join you in getting a 5090!
 