AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Caporegime · Joined: 18 Oct 2002 · Posts: 32,617
It is like people complaining that turning on hardware physics reduces framerate...



Sure, you don't "need" RT cores to do ray tracing, but something along those lines is the only way to accomplish it as things stand - doing it on general compute cores is around 6 times slower. The approach Crytek have used, while admirable (it would have been a big thing even 3-5 years ago, when something even close to what RTX does in real time was a complete fantasy), is ultimately a dead end involving a lot of special-case optimisation compared to "pure" ray tracing techniques.

The actual API support for it in DXR and Vulkan's RT extensions doesn't care what hardware is underneath - you have a bunch of functions an application developer can invoke with their input data, and the API basically tells the drivers to go away and get results, without doing anything that locks it to RTX hardware. It can even be run on Pascal's shaders, as demonstrated with Quake 2, they just aren't capable of the performance needed. Nothing stopping AMD doing similar...
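
To make that concrete, here's a minimal sketch (assuming the Vulkan SDK headers and loader are installed, with error handling kept to a bare minimum) that just asks each GPU whether it advertises the vendor-neutral VK_KHR_ray_tracing_pipeline extension - nothing in it names Nvidia or RTX:

```cpp
// Enumerate GPUs and check whether each one advertises the vendor-neutral
// VK_KHR_ray_tracing_pipeline extension. Whether the driver maps that to
// dedicated RT units or to compute shaders is entirely its own business.
#include <vulkan/vulkan.h>
#include <cstring>
#include <iostream>
#include <vector>

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) {
        std::cerr << "No Vulkan driver available\n";
        return 1;
    }

    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    std::vector<VkPhysicalDevice> gpus(gpuCount);
    vkEnumeratePhysicalDevices(instance, &gpuCount, gpus.data());

    for (VkPhysicalDevice gpu : gpus) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpu, &props);

        uint32_t extCount = 0;
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, nullptr);
        std::vector<VkExtensionProperties> exts(extCount);
        vkEnumerateDeviceExtensionProperties(gpu, nullptr, &extCount, exts.data());

        bool hasRT = false;
        for (const auto& e : exts)
            if (std::strcmp(e.extensionName, "VK_KHR_ray_tracing_pipeline") == 0)
                hasRT = true;

        std::cout << props.deviceName << ": ray tracing pipeline "
                  << (hasRT ? "supported" : "not supported") << "\n";
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

The application only ever talks to the extension; which silicon actually does the traversal and intersection work is hidden behind the driver, which is exactly why AMD can slot their own hardware in underneath the same API.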

Not aimed at you, but I'm getting a bit bored with the same tedious negative responses that get trotted out because people either don't understand what is going on and/or because AMD can't do it yet. When you actually see the techniques in action in Quake 2 RTX and understand how they can be applied to more modern applications, it is almost mind-blowing that we are pretty much there now, not still waiting for another 10 years.



From what I can make out, although Tensor cores could be used and are part of the solution in OptiX, most applications seem to use a variation of spatiotemporal variance-guided filtering (SVGF) that runs on the compute shaders for denoising. It could be accelerated significantly on Tensor cores, but that apparently results in overall contention for resources on the GPU, which has to be hand-tuned to avoid, and the denoiser potentially needs to be trained for the task to get the best results - time developers don't want to spend. Although there was talk of using the Tensor cores to optimise some parts of the BVH process (I assume via machine-learning techniques), again from what I can see there is no such functionality currently active in any of the games I have access to the source of.
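
For a rough idea of what that compute-shader denoising involves, here's a simplified, illustrative sketch of the per-neighbour edge-stopping weight an SVGF-style filter computes when blurring the noisy ray-traced signal (real implementations differ in the exact terms, use depth gradients rather than raw depth, and tune the sigma values per game - treat the numbers here as placeholders):

```cpp
// Illustrative only: the kind of edge-stopping weight an SVGF-style denoiser
// evaluates per neighbouring pixel. High variance relaxes the luminance test,
// while depth and normal differences stop the blur crossing geometric edges.
#include <algorithm>
#include <cmath>
#include <iostream>

struct Sample {
    float luminance;   // demodulated colour luminance
    float variance;    // temporally accumulated variance estimate
    float depth;       // view-space depth
    float nx, ny, nz;  // surface normal
};

float edgeStoppingWeight(const Sample& p, const Sample& q,
                         float sigmaL = 4.0f, float sigmaZ = 1.0f,
                         float sigmaN = 128.0f) {
    // Luminance term: large differences are tolerated where variance is high.
    float wl = std::exp(-std::fabs(p.luminance - q.luminance) /
                        (sigmaL * std::sqrt(std::max(p.variance, 0.0f)) + 1e-4f));
    // Depth term: suppress filtering across depth discontinuities.
    float wz = std::exp(-std::fabs(p.depth - q.depth) / (sigmaZ + 1e-4f));
    // Normal term: suppress filtering across geometric edges.
    float ndot = std::max(0.0f, p.nx * q.nx + p.ny * q.ny + p.nz * q.nz);
    float wn = std::pow(ndot, sigmaN);
    return wl * wz * wn;
}

int main() {
    Sample centre{0.50f, 0.01f, 10.0f, 0.0f, 0.0f, 1.0f};
    Sample sameSurface{0.55f, 0.01f, 10.1f, 0.0f, 0.0f, 1.0f};
    Sample acrossEdge{0.90f, 0.01f, 14.0f, 0.0f, 1.0f, 0.0f};
    std::cout << "same surface weight: " << edgeStoppingWeight(centre, sameSurface) << "\n"
              << "across edge weight:  " << edgeStoppingWeight(centre, acrossEdge) << "\n";
}
```

None of that needs Tensor cores - it's ordinary shader maths, which is why it runs fine on the general compute units and why developers aren't forced onto a trained denoiser.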



The new Wolfenstein uses a mixture of Tensor cores and CUDA cores for the denoising, in some kind of hybrid of DLSS and spatiotemporal filtering.


One thing people tend to ignore is that the Tensor cores are used for fast packed FP16 math operations. The non-RTX Turing cards end up with additional FP16 CUDA cores instead. Using Tensor cores has some advantages and disadvantages.


The other thing is that the RT and Tensor cores combined add up to about 8-9% of the entire Turing GPU die. Their actual transistor cost is pretty minimal, so all the talk of big Turing dies has very little to do with RT or Tensor cores. This also means that significant RT performance can be found by dedicating more transistor budget to it on 7nm: the RT hardware is currently about 3-4% of the die area, and one could happily expand that to 8% without adding significant cost. The RT cores are simple ray intersection test accelerators and have very low complexity - PowerVR was doing this sort of thing a couple of decades ago.
The complex part of RTX is in the dynamic BVH, which is all done on the CUDA INT32 cores.
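
To put "simple ray intersection test accelerators" in perspective, here's a plain C++ sketch of the standard Möller-Trumbore ray/triangle test, which is the kind of small, fixed operation that dedicated RT hardware exists to run in bulk:

```cpp
// Möller-Trumbore ray/triangle intersection: the small, fixed-function style
// test that dedicated RT hardware accelerates millions of times per frame.
#include <cmath>
#include <iostream>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and writes the hit distance t if the ray (orig, dir) hits the
// triangle (v0, v1, v2).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 h = cross(dir, e2);
    float a = dot(e1, h);
    if (std::fabs(a) < eps) return false;     // ray parallel to triangle plane
    float f = 1.0f / a;
    Vec3 s = sub(orig, v0);
    float u = f * dot(s, h);
    if (u < 0.0f || u > 1.0f) return false;   // outside first barycentric bound
    Vec3 q = cross(s, e1);
    float v = f * dot(dir, q);
    if (v < 0.0f || u + v > 1.0f) return false;
    t = f * dot(e2, q);
    return t > eps;                           // hit must be in front of the ray
}

int main() {
    Vec3 origin{0, 0, -1}, dir{0, 0, 1};
    Vec3 v0{-1, -1, 0}, v1{1, -1, 0}, v2{0, 1, 0};
    float t;
    if (rayTriangle(origin, dir, v0, v1, v2, t))
        std::cout << "hit at t = " << t << "\n";   // expect t = 1
    else
        std::cout << "miss\n";
}
```

Each test is just a handful of dot and cross products, which is why baking it into fixed-function units is cheap in die area; as above, the expensive part is building and updating the BVH for dynamic scenes, which stays on the general-purpose integer pipeline.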
 
Soldato · Joined: 9 Nov 2009 · Posts: 24,820 · Location: Planet Earth
There is some more information about AMD Arcturus:
https://www.techpowerup.com/263743/...i100-arcturus-hits-the-radar-we-have-its-bios

Both Samsung (KHA884901X) and Hynix (H5VR64ESA8H) memory are supported, which is an important capability for AMD's supply chain. From the ID string "MI100 D34303 A1 XL 200W 32GB 1000m" we can derive that the TDP limit is set to a surprisingly low 200 W, especially considering this is a 128 CU / 8,192-shader design. For comparison, Vega 64 and Radeon Instinct MI60 have around a 300 W power budget with 4,096 shaders, and the 5700 XT has 225 W with 2,560 shaders, so either AMD has achieved some monumental efficiency improvements with Arcturus or the whole design is intentionally running constrained, so that AMD doesn't reveal its hand to the partners doing early testing of the card.
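
A quick back-of-the-envelope on those quoted figures (clocks and workload matter far more in practice, so this is purely illustrative):

```cpp
// Shaders-per-watt comparison using the figures quoted above; real efficiency
// depends heavily on clocks and workload, so treat this as a rough sketch.
#include <cstdio>

int main() {
    struct Card { const char* name; int shaders; int watts; };
    const Card cards[] = {
        {"Arcturus MI100 (rumoured)", 8192, 200},
        {"Radeon Instinct MI60",      4096, 300},
        {"Vega 64",                   4096, 300},
        {"RX 5700 XT",                2560, 225},
    };
    for (const Card& c : cards)
        std::printf("%-26s %5d shaders / %3d W = %4.1f shaders per watt\n",
                    c.name, c.shaders, c.watts,
                    static_cast<double>(c.shaders) / c.watts);
    return 0;
}
```

On those numbers the MI100 would be roughly 3x the shaders per watt of the MI60, which is why the 200 W cap reads more like a deliberately constrained engineering sample than a finished spec.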

It looks like a successor to AMD Vega.

So AMD is going for a very large die for its new compute card and is confident in doing so now, which means a large AMD gaming GPU is probably not a pipe dream.
 
Soldato · Joined: 26 Sep 2010 · Posts: 7,152 · Location: Stoke-on-Trent
Sounds like this Arcturus card is operating within its designed parameters rather than showing some miraculous efficiency gains (though real gains do exist in Renoir's Vega and going into RDNA 2). Don't forget the 5700 series is overclocked to hell and back, which ruins its efficiency, so it's probably not a fair point of reference. HBM also takes the total board power down compared to GDDR6, so there's another source of reduction. But isn't this Vega anyway, rather than RDNA? Vega has had a serious tune-up in the Renoir APUs.

Could be interesting though.
 
Soldato · Joined: 6 Aug 2009 · Posts: 7,071
Sorry DP, but he is correct. It's the FreeSync thing all over again, just the other way round this time: Nvidia want RTX to be the standard way of talking about ray tracing, but it is just their branding for it. As it stands we have no idea how AMD will brand their version.

They should call it ART. AMD Ray Tracing ;)
 

TNA · Caporegime · Joined: 13 Mar 2008 · Posts: 27,483 · Location: Greater London
AMD finally might have the fastest GPU crown again, but what's the point when you have dog **** drivers.

The driver issue is only a recent thing. Shame really, as I recall they had no major problems before the 5000 series. They had to go and mess it up, and now people are talking about their drivers being rubbish again. It will be a costly mistake.
 
Associate · Joined: 29 Aug 2013 · Posts: 1,176
The driver issue is only a recent thing. Shame really, as I recall they had no major problems before the 5000 series. They had to go and mess it up, and now people are talking about their drivers being rubbish again. It will be a costly mistake.

Yep, it used to be an old meme, but now it's back in full force and people will use it to justify spending more on Nvidia over AMD cards. They brought it on themselves though - the black screen bug goes all the way back to Polaris.
 
Soldato · Joined: 25 Sep 2009 · Posts: 9,626 · Location: Billericay, UK
Now AMD are making money they can afford to hire a few staff for their driver team. Really though, six months on they shouldn't have any excuses for the issues reported with the new Navi cards, let alone the older Vega cards.
 
Soldato · Joined: 3 Oct 2013 · Posts: 3,622
Ironically, the open-source (OSS) driver has been faultless for the Vega 64 and VII, and the 290X too.

Having said that, there are issues with the 57xx cards from what I've seen.
 