NVIDIA 4000 Series

I don't understand one thing: taking the pure increase in raster power (around 1.5x more shaders and 1.5x higher clocks vs the 3090 Ti), the 4090 should be around 100-125% faster than the 3090 Ti, while even Nvidia shows somewhere between 50-70% for raster-only games.

Anybody care to guess why the 4090 seems to be underperforming?
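
A rough back-of-envelope sketch of that expectation, treating raster throughput as simply shaders x clock; the ratios are the ones quoted above, not measured figures:

Code:
# Naive raster scaling estimate: throughput ~ shader count x clock speed.
shader_ratio = 1.5   # ~1.5x more shaders than the 3090 Ti (as quoted above)
clock_ratio  = 1.5   # ~1.5x higher clocks (as quoted above)

theoretical_uplift = shader_ratio * clock_ratio   # 2.25x, i.e. +125%
nvidia_low, nvidia_high = 1.50, 1.70              # Nvidia's quoted 50-70% raster gains

print(f"Theoretical: +{(theoretical_uplift - 1) * 100:.0f}%")
print(f"Nvidia's figures: +{(nvidia_low - 1) * 100:.0f}% to +{(nvidia_high - 1) * 100:.0f}%")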
 
It isn't the 40x0 series that has put up prices; it happened with the pricing of stuff like the 3090 Ti. If people are buying at those prices, why wouldn't they build cards to that price point?
Arguably it started happening with the 10 series (six years ago), as that's when Nvidia's profits started climbing from a little over $2bn a year to the heady highs of more than $15bn per year.
I don't understand one thing: taking the pure increase in raster power (around 1.5x more shaders and 1.5x higher clocks vs the 3090 Ti), the 4090 should be around 100-125% faster than the 3090 Ti, while even Nvidia shows somewhere between 50-70% for raster-only games.

Anybody care to guess why the 4090 seems to be underperforming?
My guess would be because they've used a lot of the silicon budget on their Deep Learning, AI, fake it until you make it, ray tracing stuff.
 
I don't understand one thing: taking the pure increase in raster power (around 1.5x more shaders and 1.5x higher clocks vs the 3090 Ti), the 4090 should be around 100-125% faster than the 3090 Ti, while even Nvidia shows somewhere between 50-70% for raster-only games.

Anybody care to guess why the 4090 seems to be underperforming?

It's got something like 60% more CUDA cores, but that doesn't translate into a simple 60% improvement, so that's why it could be less than that percentage improvement.
 
It's got something like 60% more CUDA cores, but that doesn't translate into a simple 60% improvement, so that's why it could be less than that percentage improvement.
Yes, but look at the increase in MHz - the 3090 Ti was boosting to ~1800 MHz, while the 4090 is supposed to boost to 2500 MHz. That's 1.38x, and 1.38x * 1.6x = 2.24x.
So something is not right here. Either the advertised boost is rarely achieved due to the card being severely power limited, or memory bandwidth is a major bottleneck.
 
I don't understand one thing: taking the pure increase in raster power (around 1.5x more shaders and 1.5x higher clocks vs the 3090 Ti), the 4090 should be around 100-125% faster than the 3090 Ti, while even Nvidia shows somewhere between 50-70% for raster-only games.

Anybody care to guess why the 4090 seems to be underperforming?

1% more cores doesn't mean 1% more gaming performance, especially for Nvidia. The 60% extra cores probably only account for a 30% or 40% higher frame rate in games, the higher clocks probably account for another 30%, and then, depending on how bottlenecked the game is, SER adds another 5% to 35% extra performance. This is why the RTX 4090 has such a wide performance range: one game can be just 50% faster and another can be 90% faster.


Also, Nvidia chose to benchmark AC Valhalla, a game that prefers AMD GPUs, and it's the worst performer Nvidia showed off, with the 4090 just 50% faster. This one is likely a worst-case scenario.
 
Next time, Nvidia should announce their lower end cards first. Then people would be like OMG! 3090Ti performance at 3080 prices!? This is amazeballs!

It isn't the 40x0 series that has put up prices; it happened with the pricing of stuff like the 3090 Ti. If people are buying at those prices, why wouldn't they build cards to that price point?
The only problem is that this gen you're not going to get 3090 Ti performance for £650. The 4070, which is likely to be priced at that level now, will only just match a two-year-old 3080. The cards are not only being priced higher, but the actual performance of all but the 4090 is weaksauce.
 
1% more cores doesn't mean 1% more gaming performance, especially for Nvidia. The 60% extra cores probably only account for a 20% or 30% higher frame rate in games, the higher clocks probably account for another 40%, and then, depending on how bottlenecked the game is, SER adds another 5% to 35% extra performance. This is why the RTX 4090 has such a wide performance range: one game can be just 50% faster and another can be 90% faster.
You are probably right. I went back and checked the 3090 Ti vs the 3090: theoretically around 15% faster at stock, but it only translated into 9% more performance. So the percentages you mentioned would explain it.
Thanks
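
Purely illustrative, using the 3090 Ti vs 3090 numbers above as a rough scaling-efficiency guide; the ~2.2x theoretical figure comes from the earlier 1.38 x 1.6 estimate, not from any measurement:

Code:
# How much of the theoretical uplift actually showed up for the 3090 Ti over the 3090?
theoretical_3090ti = 0.15   # ~15% faster on paper (from the post above)
measured_3090ti    = 0.09   # ~9% faster in practice (from the post above)

efficiency = measured_3090ti / theoretical_3090ti   # ~0.6

# Apply the same (very rough) efficiency to the 4090's theoretical uplift.
theoretical_4090 = 1.24                # +124% on paper (1.38 x 1.6 from the earlier post)
estimated_4090   = theoretical_4090 * efficiency

print(f"Scaling efficiency: {efficiency:.0%}")           # ~60%
print(f"Estimated 4090 uplift: +{estimated_4090:.0%}")   # ~74%, not far off Nvidia's 50-70%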
 
Except that when you look at DF's video comparing this exact scenario, the DLSS 3 footage still looks smoother - i.e. DLSS 3 at 120fps looks smoother than native 60fps.


I'm still not 100% sold yet; however, I do think it could be promising if there are no severe issues. No doubt it does look far smoother (and isn't that what we all want when gaming, hence high refresh rate displays and wanting to push high FPS?), and the quality/motion looks pretty good - it's unlikely you would notice the issues in those clips outside of analytical footage (certainly not DLSS 1/FSR 2 bad). I didn't mind soap opera settings on TVs, but there definitely are two possible weaknesses for now:

- image artefacts: there are certainly going to be some issues here; the question is how often and how noticeable they will be
- latency: imo it will no doubt be higher than without it, but again, the question is how much latency we are going to be talking about. Ideally you want to keep latency <20ms, but if it's >50ms then you have a problem, and obviously for PvP FPS games you'll want as little as possible

I suspect it will be a year until we see the kinks ironed out, as has happened with other techs - see DLSS 1 and FSR 1 and 2.

People will no doubt say "fake frames/performance" etc., but like I always say, I couldn't care less how it is done, as long as the end result works and looks good. We have to accept that going forward we will be paying more for the software aspects, not just the hardware; AI/ML is going to be a big factor.
 
Doesn't DLSS stretch an image supersampled to the preferred resolution, so that small text becomes blurred? Higher frame rates at the cost of image quality.

In MSFS 2020, for example, they didn't show any images of the instrument panels, and DLSS has been known to blur them since the new update. I'm sure we will see the impact on games vs image quality for small text and jagged edges.
 
I asked the Asus rep on FB and was told more info will be released later, so it's probably under embargo until near release. I assume the Strix will be up there, considering the 3090 Strix could go up to 480W (520W with a BIOS)... it will probably be 4x 8-pin PCIe connectors via an adapter.

Even the base AIB cards seem to have 4x 8-pin PCIe connectors anyway. Perhaps we will see a high-end Asus card with 5 connectors???
 
Rumours are that Nvidia is limiting cards in the BIOS to around 600W anyway, so don't expect 2x 16-pin connectors, as they'd be useless.
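
Quick power-budget arithmetic on why that's the case - the connector ratings below are the usual spec maximums, assumed here rather than taken from any particular card:

Code:
# Rated power per connector type (spec maximums, not what any card actually draws).
EIGHT_PIN_W   = 150   # 8-pin PCIe connector rating
SIXTEEN_PIN_W = 600   # 12VHPWR (16-pin) connector rating
PCIE_SLOT_W   = 75    # power available from the PCIe slot itself

adapter_4x8 = 4 * EIGHT_PIN_W               # 600 W - same as a single 16-pin
card_budget = SIXTEEN_PIN_W + PCIE_SLOT_W   # 675 W available with one 16-pin fitted

print(adapter_4x8, card_budget)
# If the BIOS caps the card around 600 W, one 16-pin (or a 4x 8-pin adapter)
# already covers it, so a second 16-pin connector would add nothing.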
 
Both the new Asus cards (TUF and Strix) use the new 16-pin power connector. Thus, Asus are recommending you also buy one of their newly launched PCIe 5.0 PSUs for 300 sheets :D
 
I don't understand one thing: taking the pure increase in raster power (around 1.5x more shaders and 1.5x higher clocks vs the 3090 Ti), the 4090 should be around 100-125% faster than the 3090 Ti, while even Nvidia shows somewhere between 50-70% for raster-only games.

Anybody care to guess why the 4090 seems to be underperforming?

Bandwidth is the same in both cards though, isn't it?
 
Doesn't DLSS stretch an image supersampled to the preferred resolution, so that small text becomes blurred? Higher frame rates at the cost of image quality.

In MSFS 2020, for example, they didn't show any images of the instrument panels, and DLSS has been known to blur them since the new update. I'm sure we will see the impact on games vs image quality for small text and jagged edges.
DLSS resolution scaling is quite well understood by now - it's not stretching, but generating additional pixels predicted by a machine learning model. If the model predicts text accurately then it doesn't have to lead to blurring. It doesn't work anywhere near perfectly, of course, but the potential is better than purely upscaling from the information in the current frame, or from that plus motion vectors.
 
You mean the 4080 16GB? NVIDIA has posted all the MSRPs, and it starts from £1269, so again AIB high-end OC models will cost more.

The 4080 12GB starts from £949, and again AIB high-end OC models will cost more.

The MSRP is for the entry-level SKU / FE card.

Also remember the 4080 12GB and 4080 16GB are very different cards. The 16GB version is not just 4GB of extra memory: it has a 256-bit vs 192-bit memory bus and 9728 vs 7680 CUDA cores, so the 16GB is a far more powerful card.

The crap one is not a 4080 though. It's a 4070.
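
A quick sketch of the gap between the two 4080s; the core counts and bus widths are the ones quoted above, while the memory data rates (22.4 and 21 Gbps) are assumed from the published specs, so treat the bandwidth numbers as approximate:

Code:
# 4080 16GB vs 4080 12GB: core count and peak memory bandwidth.
cores_16gb, cores_12gb = 9728, 7680
bus_16gb,   bus_12gb   = 256, 192     # memory bus width in bits
rate_16gb,  rate_12gb  = 22.4, 21.0   # assumed GDDR6X data rate in Gbps

def bandwidth_gb_s(bus_bits, data_rate_gbps):
    # Peak bandwidth in GB/s = (bus width in bytes) * data rate per pin.
    return bus_bits / 8 * data_rate_gbps

print(f"Core advantage: {cores_16gb / cores_12gb:.2f}x")                   # ~1.27x
print(f"16GB bandwidth: {bandwidth_gb_s(bus_16gb, rate_16gb):.0f} GB/s")   # ~717 GB/s
print(f"12GB bandwidth: {bandwidth_gb_s(bus_12gb, rate_12gb):.0f} GB/s")   # ~504 GB/s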
 