AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

It looks like AMD are going the right route with these.

They're so constrained by 7nm DUV fab capacity (with the 3950X and Threadripper already significantly delayed and 3900X volume still very thin) that it would have been mad to launch more mid- and large-size dies on it.

On 7nm EUV they shouldn't be so constrained by wafer supply.
 
- Variable rate shading (rough conceptual sketch after this post)
- More efficient mixed precision compute
- Smarter & faster caches
- Persistent computing & higher instructions per clock
- More advanced voltage regulation for higher clock speeds among a host of other improvements
- Hardware support for at least some ray-tracing

Expected release window: June 2020, with an expected public preview at CES 2020.

"AMD’s next-generation GPUs are expected to arrive around June 2020 packing better voltage regulation, variable-rate shading, faster caches, and more. The company could provide a glimpse during CES 2020 in January."
https://www.digitaltrends.com/news/amd-navi-23-supports-ray-tracing-rumor/
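Since variable rate shading keeps showing up in these feature lists, here's a tiny conceptual toy of the idea in plain NumPy. This is purely illustrative and has nothing to do with how the hardware or any graphics API actually exposes VRS: the point is just that you run the "shader" once per 2x2 tile instead of once per pixel and reuse the result, trading a little detail for a big cut in shading work.

```python
import numpy as np

H, W = 8, 8  # tiny "framebuffer"

def shade(y, x):
    # stand-in for an expensive per-pixel shader
    return np.sin(x * 0.7) + np.cos(y * 0.3)

# full-rate shading: one shader invocation per pixel (64 calls here)
ys, xs = np.mgrid[0:H, 0:W]
full_rate = shade(ys, xs)

# 2x2 coarse shading: one invocation per 2x2 tile (16 calls),
# then broadcast each tile's result to the four pixels it covers
cy, cx = np.mgrid[0:H:2, 0:W:2]
coarse = shade(cy, cx)
vrs_2x2 = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)

print("full-rate shader calls:", full_rate.size)   # 64
print("2x2 VRS shader calls:  ", coarse.size)      # 16
print("max error vs full rate:", float(np.abs(full_rate - vrs_2x2).max()))
```

A quarter of the shading work for a small loss of detail is the whole pitch, which is why it's usually applied selectively (peripheral regions, motion-blurred areas) rather than across the whole frame.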
 
VRS - good, only "some" RT sounds concerning, and no mention of more memory bandwidth is disappointing.

I need them to go 16 GB HBM again.

That was already discussed. It means the hardware ray tracing only supports specific object illumination, while shadows and reflections are not supported. To be fair, that is the area where ray tracing shows the biggest visual difference, but yeah, when AMD says ray tracing it will need several asterisks next to it.
 
Absolutely.

Whilst they have been reasonable cards, they have been far too expensive to make, mainly due to the HBM on the card. I can't remember the exact cost, but IIRC it was something like ~$150 for the 8GB of HBM on its own! The HBM on the Vega cards made them too expensive to recoup the cards' development costs for anything but the server parts, hence the switch to GDDR6.

Yes, they are used in Stadia; how is that going? The only reason the Radeon VII was released was that AMD had ~20k cards they couldn't sell in the server market, so they rebranded them as VIIs.
 
I was talking gaming, and it's obvious the Stadia thing was just a dumping of Vega stock done at rock-bottom prices to (finally) clear it for a clean slate.

And where do you get the information that the gaming cards were sold at a loss and that they could've done differently? Again, keep in mind the wider context. AMD didn't have the financials to do a million designs, some for gaming and some for HPC, so you can't compare with what Nvidia did, because they're different companies with different budgets and in different circumstances. So to imply that AMD lost money on those GPUs because of their choice of HBM is to ignore the fact that they couldn't have done it WITHOUT HBM.

But you know all that's irrelevant anyway because at the end of the day, due to mining alone, your claim that they lost money on their gaming HBM cards is false. Regardless of how you feel about it. And again, all very easily verifiable in their financial statements.
 
And where do you get the information that the gaming cards were sold for a loss and that they could've done differently?

EoL models usually mean this, and tbh I don't 'feel' anything about it, I'm just speculating. I'm glad AMD didn't lose out on their HBM experiment overall; they were pretty lucky with the mining craze though, eh!
 
There are lower-cost variants of HBM coming. HBM, HBM2, HBM2E, HBM3: AMD has plenty of choice.

Samsung is positioning HBM2E for the next-gen datacenter running HPC, AI/ML, and graphics workloads. By using four HBM2E stacks with a processor that has a 4096-bit memory interface, such as a GPU or FPGA, developers can get 64 GB of memory with a 1.64 TB/s peak bandwidth—something especially needed in analytics, AI, and ML.

With the 1,024-bit data bus, HBM2E runs very wide, but not very fast. Two gigabits of throughput is DDR3 speeds, notes Frank Ferro, senior director of product management at Rambus. “By going wide and slow you keep the power and design complexity down on the ASIC side. Wide and slow means you don’t have to worry about signal integrity. They stack the DRAM in a 3D configuration, so it has a very small footprint,” he said.
https://semiengineering.com/hbm2e-the-e-stands-for-evolutionary/

1.64 TB/s is crazy fast; that's in the same ballpark as the aggregate L2 cache bandwidth on something like the Ryzen 9 3900X, except instead of a few MB of cache it's 64 GB of memory :eek:
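For anyone wanting to sanity-check that 1.64 TB/s figure, here's the back-of-envelope math. The per-stack numbers (16 GB, 1024-bit bus, ~3.2 Gb/s per pin, i.e. Samsung Flashbolt-class HBM2E) are my assumption, not from the article:

```python
# Back-of-envelope for the HBM2E numbers quoted above.
stacks        = 4
gb_per_stack  = 16          # GB per stack (assumed Flashbolt-class)
bus_per_stack = 1024        # bits per stack -> 4096-bit total interface
gbps_per_pin  = 3.2         # Gb/s per pin (assumed)

capacity_gb   = stacks * gb_per_stack                      # 64 GB
bandwidth_gbs = stacks * bus_per_stack * gbps_per_pin / 8  # GB/s

print(f"capacity:  {capacity_gb} GB")                 # 64 GB
print(f"bandwidth: {bandwidth_gbs / 1000:.2f} TB/s")  # ~1.64 TB/s
```

Four stacks of 16 GB gives the 64 GB, and 4096 pins at 3.2 Gb/s works out to ~1,638 GB/s, which matches the quoted 1.64 TB/s peak.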
 
Navi 21 is coming! 505 mm^2!

[Attached image: Big-Navi-21.png]

https://www.ptt.cc/bbs/PC_Shopping/M.1573129117.A.216.html
 