Blackwell GPUs



Shocked and surprised, said nobody ever.

Yeah, I have a 12700K and was considering upgrading, but I looked at benchmarks, even compared to a 9800X3D at 4K, and in non-gaming stuff I think the 12700K slapped it.

There is no reason to upgrade a 12700K/F to something newer when you have a 4090. I'm in the same segment and don't see a viable upgrade from the 12700K that's worth the money in performance gain at 4K output. It's diminishing returns 101 at this point, so just look at other things that provide an actual benefit, like a superior OLED monitor etc.

I just bought a new phone, so I won't be looking at graphics HW for a long time anyway :p
 
There was talk a while ago that only the 2GB GDDR7 memory modules would be available for the Blackwell release and that the 3GB ones would come at a later date, maybe for a Super refresh. This could explain why the VRAM seems to be following Ada, though a 5090 would need sixteen 2GB memory modules to make 32GB.

I did the maths quickly after posting: a 5090 with 3GB GDDR7 modules could be 24/27/30GB etc...
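
For anyone wanting to sanity-check the figures, here's a minimal sketch of the arithmetic in Python, assuming the usual one GDDR module per 32-bit memory controller. The narrower bus widths below are illustrative cut-down configs, not confirmed specs:

    # VRAM capacity from bus width and module density, assuming one
    # GDDR7 module per 32-bit memory controller.
    def vram_gb(bus_width_bits, module_gb):
        modules = bus_width_bits // 32
        return modules * module_gb

    print(vram_gb(512, 2))  # 512-bit with 2GB modules: 16 modules = 32GB
    # Hypothetical narrower buses with 3GB modules:
    for bus in (256, 288, 320):
        print(bus, "bit ->", vram_gb(bus, 3), "GB")  # 24, 27, 30GB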
 
The best upgrade for a 4090 is probably a faster CPU. There must be loads of people who bought that GPU and still rock an older CPU. As FG and upscaling are still on the table (no lockouts like Ampere), the 4090 is plenty fast. Unless Jensen in January says the 40 series cannot use X or Y because you need to be on the Blackwell-only special sauce...
I've just swapped a 5700X3D for a 9800X3D and even now some games are still bottlenecked on a 4090. CPUs have only advanced about 10% in the past 2 years, so a GPU that is 50% faster than a 4090 isn't even going to be fully utilised.
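
To put rough numbers on that, here's a toy sketch of the idea in Python, treating delivered fps as capped by whichever of the CPU and GPU is slower. All figures are made up for illustration, not benchmarks:

    # Toy bottleneck model: delivered fps is limited by the slower of
    # the CPU (simulation + draw submission) and the GPU render rate.
    def delivered_fps(cpu_fps, gpu_fps):
        return min(cpu_fps, gpu_fps)

    cpu_fps = 120.0            # say, what a 9800X3D can feed in some title
    gpu_4090 = 110.0           # a 4090's render rate at 4K in that title
    gpu_next = gpu_4090 * 1.5  # a hypothetical GPU 50% faster

    print(delivered_fps(cpu_fps, gpu_4090))  # 110 -> GPU limited
    print(delivered_fps(cpu_fps, gpu_next))  # 120 -> CPU wall; 45fps unused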
 
I've just swapped a 5700X3D for a 9800X3D and even now some games are still bottlenecked on a 4090. CPUs have only advanced about 10% in the past 2 years, so a GPU that is 50% faster than a 4090 isn't even going to be fully utilised.

Seems only a few are digesting this though, and many think they need a 5090 already...
 
I've just swapped a 5700X3D for a 9800X3D and even now some games are still bottlenecked on a 4090. CPUs have only advanced about 10% in the past 2 years, so a GPU that is 50% faster than a 4090 isn't even going to be fully utilised.
Imagine if Nvidia went with a hardware scheduler instead of a software one and got rid of that overhead in their drivers.
 
Seems only a few are digesting this though, and many think they need a 5090 already...
Yeah, at 4K I had been free from CPU bottlenecks for years until the 4090. It was the first GPU where I started to see CPU bottlenecks creep in on a previous-gen top-end CPU, a 5950X. I'm really interested to see how the bottlenecks show with the 5090 and beyond.
 
Imagine if Nvidia went with a hardware scheduler instead of a software one and got rid of that overhead in their drivers.
Personally, I don't see the CPU limits in games as an actual issue for those on higher-end GPUs, even though they see the effect in many games. It just means you can up the GFX settings more and not lose much performance.

Also remember that the vast majority of the time it's not the CPU itself that is the bottleneck; it's a lack of CPU-side optimisation in games that is causing it. The issue sits firmly with the developer to remedy by patching their games and making sure they don't launch in such a state.

Right now STALKER 2 has this very issue. You can use DLAA at 4K and see only about a 10fps difference when dropping to DLSS Quality, and about 20fps with DLSS Performance. Obviously a lower internal render resolution shifts more of the load onto the CPU, which is expected, so the fact that such a small fps variance exists between DLSS and a native-resolution AA mode points to a lack of proper CPU optimisation, not the CPU being the physical bottleneck.
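
If you want to quantify that, here's a rough sketch: compare the fps gain you actually get from lowering the internal resolution with the gain you'd naively expect if the GPU were the only limiter. The per-axis render scales are DLSS's standard ones; the fps numbers are just examples in the spirit of the post, not measurements:

    # Heuristic: if fps barely rises when the internal render resolution
    # drops, the game is CPU/engine limited rather than GPU limited.
    # Standard DLSS per-axis render scales: Quality ~0.667, Performance 0.5.
    def gpu_limited_fraction(fps_native, fps_scaled, axis_scale):
        pixel_ratio = axis_scale ** 2         # fraction of native pixels drawn
        ideal_fps = fps_native / pixel_ratio  # naive expectation if GPU bound
        # 1.0 = perfectly GPU bound, near 0 = CPU/engine bound
        return (fps_scaled - fps_native) / (ideal_fps - fps_native)

    # Example: 60fps with DLAA, +10fps at Quality, +20fps at Performance
    print(gpu_limited_fraction(60, 70, 0.667))  # ~0.13 -> mostly CPU limited
    print(gpu_limited_fraction(60, 80, 0.5))    # ~0.11 -> mostly CPU limited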

We have seen it countless times before: those games get patched and suddenly 30-40% performance gains appear. The Last of Us Part 1, anyone?
 
Imagine if Nvidia went with a hardware scheduler instead of a software one and got rid of that overhead in their drivers.
Nvidia's architecture doesn't need a complex scheduler. They don't have multipurpose SIMDs like AMD and don't require complex pipeline management; their architecture is basically a simple layout of dedicated single-purpose functional units, unlike AMD, which reuses the same functional unit for different tasks depending on the clock cycle. Nvidia still supports instruction batching, so I don't think it's going to be a big benefit.

Maybe they could introduce an AI-based scheduler that does an early purge of instructions that won't end up as pixels on screen.
 
I've just swapped a 5700X3D for a 9800X3D and even now some games are still bottlenecked on a 4090. CPUs have only advanced about 10% in the past 2 years, so a GPU that is 50% faster than a 4090 isn't even going to be fully utilised.
Anyone remember those dual-CPU motherboards back in the day? Maybe it's time for a return.
 
Anyone remember those dual-CPU motherboards back in the day? Maybe it's time for a return.

Server boards routinely have two or sometimes even four CPU sockets, which is basically what the absolute ballers were running as personal PCs back in the day. But although the stats looked great, performance in games never came close to justifying the expense, so they faded away for personal use.

I think these days you can get server-class CPUs with so many cores, RAM channels and PCIe lanes that more than one socket is a bit redundant for all but insane workstation/AI workloads anyway. They still probably don't do as well in games as a 9800X3D would.

It's hard not to drool at some of these beasts though. But if you have to ask the price, you can't afford it!

A 192-core, 500W TDP CPU!

How much RAM do you want? Yes.
 