
Blackwell GPUs

Nope. You may, if defect rates are high enough, collect enough defective dies to throw into another SKU, but if there are imperfections in the finished product that can't be worked around by fusing off part of the core, it goes in the bin.

It's possible to give up some die area and design in some redundancy so you can work around a defect, e.g. make each cache 2-3% bigger so you can fuse off any defects, or even fabricate your dies with extra CUDA, RT, or Tensor cores so you can fuse off defective ones. However, by doing so you're giving up valuable die area; the dimensions of the dies are set at the very beginning of the fabrication process and each SKU has to use dies of the same dimensions (heatsink flat plate, board size, number of solder bumps, etc.).
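To put a rough number on that redundancy trade-off, here's a toy Poisson yield model. All the numbers (die area, defect density, the fraction of the die you can fuse off) are illustrative guesses, not Nvidia or TSMC figures.

Code:
# Toy Poisson yield model: redundancy lets some defects be fused off instead of
# killing the die. Numbers are illustrative assumptions only.
import math

def yield_no_redundancy(die_area_cm2, defects_per_cm2):
    """Probability a die has zero defects (simple Poisson yield model)."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

def yield_with_spares(die_area_cm2, defects_per_cm2, repairable_fraction):
    """Crude approximation: defects landing in the 'repairable' part of the die
    (spare cache rows, spare cores) get fused off, so only defects in the
    remaining area are fatal."""
    lethal_area = die_area_cm2 * (1.0 - repairable_fraction)
    return math.exp(-lethal_area * defects_per_cm2)

area = 6.0   # ~600 mm^2 expressed in cm^2 (assumption)
d0 = 0.1     # assumed defects per cm^2
print(yield_no_redundancy(area, d0))      # ~0.55
print(yield_with_spares(area, d0, 0.20))  # ~0.62 if 20% of the die is repairable

The catch, as above, is that the spare area is paid for on every die whether it ends up being needed or not.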
I doubt that; consumer GPUs don't have a native double-precision pipeline built in, so a GeForce card can never be used to train the same type of models.
Also, Nvidia has pretty much outlawed the use of GeForce drivers in datacenters as part of the EULA, so you can't be building out huge purpose-built infrastructure with GeForce cards.
 
I suspect you got that back-to-front. I was answering a question that asked if failed 'AI' dies could be used in a gaming card, not the other way around.
 
It goes both ways: failed AI dies will not be remoulded into consumer parts, and immaculate consumer chips will not be shipped in DGX machines; they are different product lines as far as the foundational architecture goes.

GeForce is often used for homebrew-type AI projects, or in cases where you would like to test convergence speed for a new algorithm or run out-of-sample performance tests.

I don't think there would be any devolution of SKUs between these product lines.
 
No more than 1k, ideally around £800

Yep. £800 is the sweet spot for a 5080 really. But we will be lucky to get a 5070 Ti for that.

I don't really care about names anyway. Just give me around 4090 performance with 16GB vram for £800 and I will be happy. They can call it a 5060 Ti if they want :p

Just need to see an improvement in price for performance, not "more money buys you more performance". Enough of that already!
 
Some people will read my post (mrk keeps coming to mind :p) and say hah. Around 4090 performance for £800??

Hear me out here. We already have 4080 Super performance going for £900 today. Slap GDDR7 on a similar class GPU and you won't be that far off 4090 performance anyway, no? Doesn't seem too unreasonable to expect to me. Other than greed.
 

Take with a pinch of salt, but about a 40-50% performance gain over the 4090, which is roughly what most are expecting assuming a conservative gain. I am hoping for a jump similar to the 3090 to 4090, which was about 70%.
 
They should price the 5090 at £3,000, as logic goes completely out the window for people buying this sort of card anyway.

By what measure?

Top-end GPUs are a luxury purchase and whether that's "logical" is entirely up to the individual, based on their needs and disposable income.

Compared to other "luxury" purchases and hobbies, you could argue a GPU is relatively cheap. If you're into photography then the top-end bodies and lenses run to many thousands. Same for mountain bikes, golf clubs, watches, the list goes on.
 
Some people will read my post (mrk keeps coming to mind :p) and say hah. Around 4090 performance for £800??

Hear me out here. We already have 4080 Super performance going for £900 today. Slap GDDR7 on a similar class GPU and you won't be that far off 4090 performance anyway, no? Doesn't seem too unreasonable to expect to me. Other than greed.

It's not just about memory though, it's about raw processing power too.

NVidia are going to struggle to match the processing power increase they achieved with the 40-series, which benefitted from a full node jump from Samsung 8nm to TSMC 4N, enabling much higher transistor counts and clock speeds. That won't happen this time, so their ability to increase processing oomph will be more limited.

It seems to me that everything is instead focused on GDDR7 and increasing memory bandwidth as much as possible. Not just through the raw speed increase GDDR7 provides but by using wider bus widths, at least on the top model(s).

If we do get an "imbalanced" improvement with the 50-series, where memory bandwidth increases by a much bigger margin than processing power, this will mainly benefit higher resolutions such as 4K.

I predict that NVidia will market the new cards entirely as "4K monsters", providing very high framerates at 4K with all the eye candy enabled. The gains at lower resolutions, where memory bandwidth isn't as much of a factor, will be more modest.
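For anyone who wants the back-of-envelope maths: bandwidth is just bus width times per-pin data rate. The 512-bit bus and 28Gbps GDDR7 figures below are rumoured/assumed, not confirmed specs.

Code:
# Memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits * data_rate_gbps / 8

print(bandwidth_gb_s(384, 21))  # 1008 GB/s -- 4090-class 384-bit GDDR6X
print(bandwidth_gb_s(512, 28))  # 1792 GB/s -- hypothetical 512-bit GDDR7 card, ~78% more

So a wider bus plus GDDR7 could lift bandwidth by far more than the likely increase in raw processing power, which is the imbalance described above.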
 

Take with a pinch of salt, but about a 40-50% performance gain over the 4090, which is roughly what most are expecting assuming a conservative gain. I am hoping for a jump similar to the 3090 to 4090, which was about 70%.


These spruikers, lol. Micron put out a slide claiming GDDR7 gives an average of 30% higher frame rates than GDDR6X when a GPU is bandwidth limited, and now it's become "omg RTX 5090 crushes AMD, world is ending, the plague is here! Pray for jebus!"
 

Take with a pinch of salt, but about a 40-50% performance gain over the 4090, which is roughly what most are expecting assuming a conservative gain. I am hoping for a jump similar to the 3090 to 4090, which was about 70%.
I think we’ll see 35% tops in raster, same node but new memory.

Turing went from GDDR5X to GDDR6 on a slightly better node, and that's about what we got with the 2080 Ti; that also meant upping the die size from 471 to 754mm².
 
If it's 35% I won't bother upgrading and will wait for their next refresh, unless it has that 32GB of VRAM. Assuming I get £900-1,000 for my 4090 when they are released, and if these are even more expensive, let's say £2,500, there's no way I'm paying an extra £1,500 for just 35% more performance; the price-to-performance ratio isn't worth it.
 
I raise you a 2.5kW & a 2.8kW..


Only Platinum, not Titanium?
According to the specs, at full load the difference is around 4%, which at 2,000W is over 160W of extra heat.
On smaller supplies 4% might not be much extra heat, but on these monsters it certainly is.
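Quick sketch of the waste-heat maths; the 92%/96% full-load efficiencies below are assumed round numbers for Platinum vs Titanium class, and real curves vary by unit and mains voltage.

Code:
# Heat dissipated by the PSU itself = input power - delivered power
def waste_heat_w(load_w, efficiency):
    return load_w / efficiency - load_w

load = 2000.0
platinum = waste_heat_w(load, 0.92)  # ~174 W of heat at an assumed 92%
titanium = waste_heat_w(load, 0.96)  # ~83 W at an assumed 96%
print(platinum, titanium, platinum - titanium)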

Aside from temporal rendering techniques, a return to multiple GPUs would also require ThreadRipper and XTX boards etc.


Take with a pinch of salt, but about a 40-50% performance gain over the 4090, which is roughly what most are expecting assuming a conservative gain. I am hoping for a jump similar to the 3090 to 4090, which was about 70%.
50% across the board (not cherry picked) would require at least 50% more cores etc. Current AD102 is over 600mm². Hopper DC chips can be up to 800mm² but that would affect yields.
AD102 is on the 5nm-class node and rumours were that the 3nm class is too pricey for consumer.

Hence hard to see where they can get +50% from. +70% for 4090 was probably too "generous" considering they must have known TSMC's roadmap ages ago.

So I am interested in how they would achieve +50%.
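For what it's worth, the area arithmetic under a "same node, no per-core gains" assumption looks like this. The ~609mm² AD102 figure and the ~858mm² reticle limit are the commonly quoted values; the scaling assumption is mine.

Code:
# If performance scaled purely with core count on the same node, +50% performance
# needs roughly +50% area. Figures are the commonly quoted ones; purely illustrative.
ad102_mm2 = 609
reticle_limit_mm2 = 858  # approx. single-exposure reticle limit (~26mm x 33mm)

needed_mm2 = ad102_mm2 * 1.5
print(needed_mm2, needed_mm2 <= reticle_limit_mm2)  # ~914 mm^2, False -- over the limit

So any +50% would have to come from a mix of more cores, clocks, memory bandwidth and architecture, not die area alone.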
 

Take with a pinch of salt, but about a 40-50% performance gain over the 4090, which is roughly what most are expecting assuming a conservative gain. I am hoping for a jump similar to the 3090 to 4090, which was about 70%.

Hope you're right. If nothing else, just to imagine the look on mrk's face :p
 
So I am interested in how they would achieve +50%.
Remember the 4090 is cut down to under 90% of AD102, so hypothetically Nvidia could have released a card ~15% faster this generation. If the 5090 uses over 95% of its die then it's possible to achieve +50%, but personally I don't think Nvidia would want to do this, as they'll want to save the best dies for AI.

I think 30% raster and 50% RT will be the likely outcome, with some improvements to DLSS and a version of FG that slots in 2 frames so Nvidia can boast a large improvement.
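Putting the cut-down argument into numbers (the 144/128 SM counts for AD102/4090 are the commonly cited ones; the 5090 SM count below is a pure placeholder for illustration):

Code:
# How much headroom enabling the full die gives, vs needing a bigger die
ad102_sms, rtx4090_sms = 144, 128
print(rtx4090_sms / ad102_sms)      # ~0.89 -> "under 90% of AD102"
print(ad102_sms / rtx4090_sms - 1)  # ~0.125 -> a fully enabled AD102 is only ~12-13% more SMs

hypothetical_5090_sms = 170         # placeholder assumption, not a confirmed spec
print(hypothetical_5090_sms * 0.95 / rtx4090_sms - 1)  # ~0.26 from SM count alone at 95% enabled

So most of any +50% would have to come from a larger die plus clocks and bandwidth, not just from enabling the disabled units.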
 
There was a peak period for multi-GPU, but the results before and after that weren't great. Mid-generation DX9 games usually worked great, as they were just advanced enough for the technology to work well but mostly not so advanced that they leaned heavily on graphics effects which broke compatibility with multi-GPU.

I never had much problem with microstutter personally, though I did do a fair bit of tuning with various tools like nvinspector and whatever the one that came before it was called, which I forget now.

I remember having a case with 2 ridiculous 380mm fans to tame the heat though, hah (Xclio A380). https://www.techpowerup.com/review/apluscase-twinengine/5.html



I zip-tied 2 fans together to get some air between my 980 Ti Strix cards, worked pretty well :D
 
"Various sources closely associated with Nvidia reported to Tweakers, during the Computex trade show, that the new generation of graphics cards is currently scheduled to go on sale only at the beginning of 2025. The announcement might still be made in late 2024, but this is not yet certain. In any case, there will be no significant volumes available for sale for the start of the new calendar year." - Tweakers

 
So RDNA 4 and Battlemage will launch before RTX 5000, very interesting.

Hopefully AMD and Intel can take advantage of it and offer competitive entry and mid-range cards. Given Nvidia's schedule these days is high end first, if the 5090 is only coming in Q1 2025 then the 5060/5070 are probably only coming in Q2/Q3 2025, so we are still a year away from Nvidia having new entry and mid-range cards for sale; a year for AMD and Intel to do something.
 
Getting excited for a new build but waiting for most of the parts to become available! Mainly VR for me, so hoping the 5080 will be a nice upgrade from my current 3080 Ti and keep me set for a few years...
 