
NVIDIA 4000 Series

Along with such definite information as 1x, 2x, 3x...

At least it isn't "up to" figures...... :cry:

EDIT:

Interesting that they did 1440p here and not the usual go-to of 4K, which to me indicates they're "marketing" this as a 1440p GPU more than a 4K one. Makes sense, and it's pretty obvious what this GPU should be aimed at tbf.
 
Ha! True. Although if you don't know what 1x is, any comparison is pointless. The 4070ti might only play CP2077 at 5fps with those settings.

And it's a pre-release build. The drivers may (though probably won't) completely bork it.

These marketing charts aren't worth the space on the Internet. That's true of any pre-release product from any vendor.
 
Lovelace efficiency is being put to full use in the laptop line-up


This is something I haven't seen happen before: the laptop RTX 4090 is on par with the performance of the desktop RTX 4070 Ti while using 100W less. How did Nvidia achieve this? The laptop 4090 has significantly more cores than the 4070 Ti, but its clock speeds are significantly lower. Clock speed has a greater impact on power draw than core count does, so the reduction in clock speed didn't just make room for enough extra cores to match desktop performance, it also cut the TDP by a further 100W.

TL;DR: the 150W laptop 4090 has the same performance as the 250W desktop 4070 Ti.

https://videocardz.com/newz/nvidia-...-9728-cuda-cores-faster-than-desktop-rtx-3090
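Quick back-of-the-envelope sketch of that clocks-vs-cores trade-off in Python. The power model (dynamic power ~ cores × V² × f, with voltage scaling roughly with clock, so per-core power ~ f³) and the clock figures are my own rough assumptions, not measured Nvidia numbers; the core counts are just ballpark for a 4070 Ti-ish and laptop-4090-ish config:

```python
# Rough back-of-the-envelope sketch (illustrative numbers, not Nvidia specs):
# why a wider, lower-clocked chip can match a narrower, faster one at much
# lower power. Assumes dynamic power ~ cores * V^2 * f, with voltage scaling
# roughly linearly with frequency in this range (so per-core power ~ f^3),
# and performance scaling roughly with cores * clock.

def relative_power(cores, clock_ghz):
    """Relative power draw: cores * f^3 (voltage folded in via V ~ f)."""
    return cores * clock_ghz ** 3

def relative_perf(cores, clock_ghz):
    """Relative performance: cores * clock."""
    return cores * clock_ghz

# Desktop-style config (fewer cores, high clock) vs laptop-style config
# (more cores, lower clock), chosen so cores * clock comes out about equal.
desktop = dict(cores=7680, clock_ghz=2.6)    # roughly 4070 Ti-like core count
laptop = dict(cores=9728, clock_ghz=2.05)    # roughly laptop-4090-like count

for name, cfg in (("desktop-style", desktop), ("laptop-style", laptop)):
    print(name,
          "perf x", round(relative_perf(**cfg) / relative_perf(**desktop), 2),
          "power x", round(relative_power(**cfg) / relative_power(**desktop), 2))

# The laptop-style config lands at ~1.0x the performance for ~0.62x the power,
# because the f^3 saving from dropping the clock outweighs the extra cores.
```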
 
Comparing performance with fake frames turned on when only 1 GPU supports it isn't misleading at all /s.
Yeah, I *hate* this marketing. All that chart is telling us is that MSFS is CPU-bottlenecked at 1440p on the 3080, and that the RT hardware in the 40-series is twice as fast as the 30-series (nothing new there). The Warhammer one is actually the odd one out, since that's also going to be using Frame Generation on the 40-series, so the uplift against the 3080 there is kinda meh. CPU again, maybe? :confused:
 

Warhammer is also pretty cpu bound and the RT is quite intensive too (reflections and GI)
 
Which is weird, because typically in the heaviest RT workloads (really, only Cyberpunk at the moment) Ada is 2x Ampere's performance, with DLSS Frame Generation 'doubling' whatever the final output is. That's why the Warhammer result is so underwhelming: it's seemingly not taking much advantage of Ada's RT improvements, nor is it getting the typical doubling of fps from Frame Generation.
 

Warhammer isn't using SER for 40xx RT optimisations (CP2077's "overdrive" RT mode has it), but yeah, even without that it's a bit weird.
 
This is something I haven't seen happen before: the laptop RTX 4090 is on par with the performance of the desktop RTX 4070 Ti while using 100W less. How did Nvidia achieve this?
There's more to performance than just the frequency/power curve: you can also spend more die area and optimise for efficiency (plus bin further). You get a chip that costs a lot more than a smaller one run hotter, but it's more efficient for the performance it delivers.

But let's see if the performance can be sustained in the real world: there are often additional TDP caps, so power-hungry games end up running at even lower frequencies.
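Under the same rough f³ assumption as the sketch above, here's roughly what a tighter TDP cap does to sustained clocks (the wattages and clock are made-up illustrative values, not real laptop specs):

```python
# Sketch of the TDP-cap effect under the same rough f^3 power assumption:
# if power has to fit under a cap, the sustainable clock only falls by the
# cube root of the power ratio. Numbers are illustrative, not measured.

def sustained_clock(base_clock_ghz, rated_power_w, capped_power_w):
    """Clock the chip can roughly hold when limited to capped_power_w."""
    return base_clock_ghz * (capped_power_w / rated_power_w) ** (1 / 3)

# e.g. a 150W-rated laptop part capped to 105W in a thinner chassis
print(round(sustained_clock(2.05, 150, 105), 2), "GHz")  # ~1.82 GHz
```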
 
Say what you will about him, I swear Nexus has a degree in graphics stuff. I see these options in the menu and have no idea what they are, other than that I turn them up high when I can.
It's one of those things I've promised to teach myself and still haven't done, like when I see multiple options for AA and have no idea which one is better.
 
After having time to play around with my 4090 Strix OC a bit over the holidays, I'm wishing I'd just gone with a cheaper version this time around. I can't get past 1400 on the memory, which is a little disappointing. I haven't noticed any coil whine though, and the card stays very cool even over 3000MHz, so it's just really holding back my Port Royal score at just under 28000. If anyone is on the fence, just go with a Founders if you can get one and save some cash.
 

I don't mean this in a snarky way, but I think most people on here didn't need convincing that a Strix was never worth the extra over the Founders Edition, or probably over any of the other cheaper cards.
 