AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

AMD do seem to love high-end HBM gaming cards, so there may, as you say, be a "Halo" card produced.

It's not so much that AMD loves HBM; it was born out of necessity. GCN as an architecture was hamstrung in its gaming performance by memory bandwidth. With GDDR VRAM unable to saturate GCN, HBM was the only option.

I believe AMD has resolved the memory issues in RDNA, so HBM is not required, and I don't expect we'll see HBM on desktop gaming GPUs from either company for several years.

And so, the only cards with HBM will be compute cards, where it makes sense because price is almost irrelevant there: a laboratory using a GPU for DNA sequencing isn't going to care so much about price as about raw performance, so adding HBM to squeeze every last bit out of the card is perfectly fine.
 
Watched overvolted yesterday; they have slides of AMD's MI100 data centre GPU. It has 120 CUs, which is 7,680 shaders, three times as many as the 5700 XT. This will not become a graphics GPU, but it shows AMD are now willing to push way past 64 CUs... if anyone thinks 80 CUs is too many for AMD, think again :)

https://youtu.be/WQo32d_rmOE?t=1449
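
For anyone sanity-checking those numbers, the CU-to-shader arithmetic is just 64 stream processors per compute unit (true for both GCN and RDNA); a quick sketch:

```python
# A quick sketch of the CU-to-shader arithmetic: both GCN and RDNA pack
# 64 stream processors into each compute unit, so shader counts follow
# directly from the CU count.
SHADERS_PER_CU = 64

for name, cus in [("5700 XT (Navi 10)", 40),
                  ("rumoured Big Navi", 80),
                  ("MI100 (per the slides)", 120)]:
    print(f"{name}: {cus} CUs -> {cus * SHADERS_PER_CU:,} shaders")

# 5700 XT (Navi 10): 40 CUs -> 2,560 shaders
# rumoured Big Navi: 80 CUs -> 5,120 shaders
# MI100 (per the slides): 120 CUs -> 7,680 shaders  (3x the 5700 XT)
```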
 
Radeon VII
Don't forget, though, that Radeon VII was not a gaming card by design; it was a repurposed Instinct MI50 package. By all accounts there was a Vega 20-based gaming card in the works, but it got shelved for being too expensive. Looks like the PCB and cooler design got dragged out of the archive, given how quickly Radeon VII got turned around. It was a PR stunt, after all.

As well as what Grim5 said about memory bandwidth, HBM is also used to keep total board power down. 16GB of GDDR6 could be quite hungry, and if the Big Navi cards came in over 400W (for example) with a GDDR6 implementation, switching over to HBM would bring that board power back down.
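
A back-of-envelope sketch of that power argument; the per-chip and per-stack figures below are assumptions picked for illustration, not measured values:

```python
# Back-of-envelope memory power comparison. The per-chip/per-stack figures
# below are assumptions for illustration, not measured values, and this
# ignores the GPU-side memory controller/PHY, where HBM's wide, slow,
# on-package interface typically saves additional power.
GDDR6_W_PER_CHIP = 2.5   # assumed draw per 2GB GDDR6 module
HBM2_W_PER_STACK = 6.0   # assumed draw per 8GB HBM2 stack

gddr6_w = 8 * GDDR6_W_PER_CHIP   # 16GB as 8 x 2GB chips
hbm2_w = 2 * HBM2_W_PER_STACK    # 16GB as 2 x 8GB stacks

print(f"16GB GDDR6: ~{gddr6_w:.0f} W, 16GB HBM2: ~{hbm2_w:.0f} W, "
      f"difference: ~{gddr6_w - hbm2_w:.0f} W off total board power")
```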
 
That article is from June and refers to the spec leaks, not those bar charts. That's not to say the bar charts aren't fake too, of course :p

The bar charts are also in the article. On my phone, I just had to hit the left or right arrow next to the picture of the hybrid GPU to see the charts.
 
It's not so much that AMD loves HBM; it was born out of necessity. GCN as an architecture was hamstrung in its gaming performance by memory bandwidth. With GDDR VRAM unable to saturate GCN, HBM was the only option.

I believe AMD has resolved the memory issues in RDNA, so HBM is not required, and I don't expect we'll see HBM on desktop gaming GPUs from either company for several years.

And so, the only cards with HBM will be compute cards, where it makes sense because price is almost irrelevant there: a laboratory using a GPU for DNA sequencing isn't going to care so much about price as about raw performance, so adding HBM to squeeze every last bit out of the card is perfectly fine.
Yeah, as I said previously, I'm surprised about this rumour; I didn't see much upside to producing one.
 
Don't forget, though, that Radeon VII was not a gaming card by design; it was a repurposed Instinct MI50 package. By all accounts there was a Vega 20-based gaming card in the works, but it got shelved for being too expensive. Looks like the PCB and cooler design got dragged out of the archive, given how quickly Radeon VII got turned around. It was a PR stunt, after all.

As well as what Grim5 said about memory bandwidth, HBM is also used to keep total board power down. 16GB of GDDR6 could be quite hungry, and if the Big Navi cards came in over 400W (for example) with a GDDR6 implementation, switching over to HBM would bring that board power back down.
And Vega 64 was a dual-purpose gaming/pro card. I don't think there'll be one this time, but you never know.
 
It's not so much that AMD loves HBM; it was born out of necessity. GCN as an architecture was hamstrung in its gaming performance by memory bandwidth. With GDDR VRAM unable to saturate GCN, HBM was the only option.

I believe AMD has resolved the memory issues in RDNA, so HBM is not required, and I don't expect we'll see HBM on desktop gaming GPUs from either company for several years.

And so, the only cards with HBM will be compute cards, where it makes sense because price is almost irrelevant there: a laboratory using a GPU for DNA sequencing isn't going to care so much about price as about raw performance, so adding HBM to squeeze every last bit out of the card is perfectly fine.
NVIDIA uses HBM on most of its high-end enterprise hardware; it's a price thing. GDDR is cheaper, so NVIDIA makes more $$$ using that. If the production cost of HBM could be brought in line with GDDR, most cards would use it, as it reduces PCB costs and uses less power to provide the same or more bandwidth, and all GPUs like more bandwidth.
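
For context, peak bandwidth is just bus width times per-pin data rate; a quick sketch comparing a 256-bit GDDR6 setup against two HBM2 stacks (data rates picked for illustration):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# 256-bit GDDR6 at 14 Gbps (5700 XT-class board: many chips, long PCB traces)
print(peak_bandwidth_gb_s(256, 14.0))    # 448.0 GB/s

# Two HBM2 stacks (2 x 1024-bit) at a modest 2 Gbps per pin, on-package
print(peak_bandwidth_gb_s(2048, 2.0))    # 512.0 GB/s
```

HBM gets more bandwidth from a much slower per-pin rate by going extremely wide, which is where the power and PCB savings come from.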
 
Watched overvolted yesterday; they have slides of AMD's MI100 data centre GPU. It has 120 CUs, which is 7,680 shaders, three times as many as the 5700 XT. This will not become a graphics GPU, but it shows AMD are now willing to push way past 64 CUs... if anyone thinks 80 CUs is too many for AMD, think again :)
I think 80 CUs is very credible and probably correct.

I am sceptical about the +50% performance-per-watt increase of RDNA2 over RDNA that's mentioned on here a lot, which gets taken to mean twice (or more) the performance of the 5700 XT.

I've also seen the potential increase referred to as "up to +50% performance per watt" and "aiming for +50% performance per watt", which could end up being very different from 50% at the end of the day.

Has AMD officially confirmed that they've achieved "+50% performance per watt"?
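
Worth spelling out where the "twice the 5700 XT" reading comes from: total performance scales as perf/W times board power, so the +50% claim only doubles performance if board power rises too. A quick sketch using the 5700 XT's 225 W rated board power as the baseline and hypothetical Big Navi board powers:

```python
# Total performance ~= (perf per watt) x (board power), so "+50% perf/W"
# only reads as "2x the 5700 XT" if board power goes up as well.
# The 225 W baseline is the 5700 XT's rated board power; the Big Navi
# board power figures are hypothetical.
BASELINE_POWER_W = 225.0
PERF_PER_WATT_GAIN = 1.5    # the claimed +50%

for big_navi_power_w in (225, 275, 300):
    rel_perf = PERF_PER_WATT_GAIN * big_navi_power_w / BASELINE_POWER_W
    print(f"{big_navi_power_w} W board -> {rel_perf:.2f}x the 5700 XT")

# 225 W -> 1.50x, 275 W -> 1.83x, 300 W -> 2.00x
```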
 
And Vega 64 was a dual-purpose gaming/pro card. I don't think there'll be one this time, but you never know.
AMD didn't have a choice back then because they were totally skint, so R&D money went towards something that could be used in multiple market sectors. Apparently there was some kind of "Big Polaris" on the table at the time, but that would've been just a gaming product, so the decision was made to proceed with Vega for data centre, compute and workstation, and it could game as well.

AMD aren't so skint these days, so they can properly diversify: hence RDNA gaming tech for gaming and CDNA compute tech for compute. There's no need to offer a general-purpose product any more.

This doesn't discount HBM on gaming cards, though; it still has a valid use case, just a very niche one.
 
Watched overvolted yesterday; they have slides of AMD's MI100 data centre GPU. It has 120 CUs, which is 7,680 shaders, three times as many as the 5700 XT. This will not become a graphics GPU, but it shows AMD are now willing to push way past 64 CUs... if anyone thinks 80 CUs is too many for AMD, think again :)

https://youtu.be/WQo32d_rmOE?t=1449
Well, it's been rumored to be more than 80 for quite some time now.

Rumor has it they could go as high as 96 CUs for gaming.
 