
Cyberpunk 2077 Ultra performance

No, 70 vs 39 FPS is pretty bad, and JayzTwoCents found that in places it would drop to 7 FPS on the 6800 XT.
Sadly yes. I'll give AMD a pass though, as this is their first time getting RT running, and Nvidia does have dedicated hardware to offload the work, but AMD will get there, hopefully sooner rather than later. It's just a shame, because CP2077 looks sweet with RT in motion.
 
We still don't really know how RT will perform on AMD cards in games designed for their RT. Dirt is the only real comparison, and the 6800 XT takes the same hit as the 3080 does with RT shadows on. Why is the 3080 taking the same hit if its RT hardware is so much more powerful? Everything else is designed with Nvidia RTX in mind. For me, I'll wait to see what happens in vendor-neutral games before saying the AMD cards are bad at RT. Cyberpunk is another RTX showcase, so I don't expect much from AMD in this case, but you never know, as AMD are now working with them. PS5/Xbox games with RT will all be coded with AMD hardware in mind, so it might not look great at the moment, but I can see things improving in the future.
 
Sadly yes. I'll give AMD a pass though, as this is their first time getting RT running, and Nvidia does have dedicated hardware to offload the work, but AMD will get there, hopefully sooner rather than later. It's just a shame, because CP2077 looks sweet with RT in motion.

Nvidia and AMD take the same broad approach, with dedicated RT hardware acceleration built in alongside the shaders; that's why each has as many RT units as it has CUs/SMs: 72 for the 6800 XT and 68 for the 3080.
The difference is in how they use async compute to tie all that together, so they require a different programming approach.
--------------

Anyway, I have found this:

RDNA 2 fully supports the latest DXR Tier 1.1 standard, and similar to the Turing RT core, it accelerates the creation of the so-called BVH structures required to accurately map ray traversal and intersections, tested against geometry. In short, in the same way that light 'bounces' in the real world, the hardware acceleration for ray tracing maps traversal and intersection of light at a rate of up to 380 billion intersections per second.

"Without hardware acceleration, this work could have been done in the shaders, but would have consumed over 13 TFLOPs alone," says Andrew Goossen. "For the Series X, this work is offloaded onto dedicated hardware and the shader can continue to run in parallel with full performance. In other words, Series X can effectively tap the equivalent of well over 25 TFLOPs of performance while ray tracing."

It is important to put this into context, however. While workloads can operate at the same time, calculating the BVH structure is only one component of the ray tracing procedure. The standard shaders in the GPU also need to pull their weight, so elements like the lighting calculations are still run on the standard shaders, with the DXR API adding new stages to the GPU pipeline to carry out this task efficiently. So yes, RT is typically associated with a drop in performance and that carries across to the console implementation, but with the benefits of a fixed console design, we should expect to see developers optimise more aggressively and also to innovate. The good news is that Microsoft allows low-level access to the RT acceleration hardware.

"For the Series X, this work is offloaded onto dedicated hardware and the shader can continue to run in parallel with full performance. In other words, Series X can effectively tap the equivalent of well over 25 TFLOPs of performance while ray tracing."

The Series X has 52 RT cores.

We won't know what RDNA2's true RT performance is until more games are designed around its RT architecture.

https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs
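The BVH traversal and intersection testing described in the quoted article can be sketched in ordinary code. Below is a hypothetical, heavily simplified ray-vs-AABB "slab test", the kind of per-node check that the fixed-function hardware repeats billions of times per second; the function name and all the numbers are illustrative only, not any real API:

```python
def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned bounding box?

    This is the per-node check a BVH traversal repeats constantly --
    the kind of work the article says runs at up to 380 billion
    intersections per second in fixed-function hardware.
    """
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0, t1 = (lo - o) * inv, (hi - o) * inv
        if t0 > t1:                        # ray points the other way
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
    return t_near <= t_far                 # overlapping slabs = hit

# Ray from the origin along +x hits a box straddling x = 5
# (inv_dir holds 1/direction; a huge value stands in for 1/0):
print(ray_aabb_intersect((0, 0, 0), (1.0, 1e9, 1e9),
                         (4, -1, -1), (6, 1, 1)))
```

Doing this in shader code instead of dedicated units is exactly the "over 13 TFLOPs" of work Goossen describes offloading.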
 
The 6800 XT has 72 RT cores.

Sort of... it has 72 CUs, each capable of ray tracing, but they're not "RT cores"; it's a shared pipeline. I suspect that as drivers get more refined and AMD suss out the most efficient scheduling, their RT will improve a lot from here, and then as their DLSS counterpart comes into play we *should* see the gap close. But, and this is one of the complaints I levy against other people for making the same argument, the future is not now, and now is what matters. For now, if you want to use ray tracing then you pretty much have to go with Nvidia.
 
No, 70 vs 39 FPS is pretty bad, and JayzTwoCents found that in places it would drop to 7 FPS on the 6800 XT.

We've been saying for some time now that the 6800 XT will have ~30% (and that's generous) less RT performance than the 3080. What did you expect when they don't have the same level of hardware?
 
It doesn't help it in Dirt. I think there is more to it.
That's an odd one. I can only imagine AMD are running an RT pass at a lower resolution and then smoothing the edges. It's not a game where the RT use stands out. I think they did well with shadows in SOTTR too? I get the feeling Nvidia takes a belt-and-braces approach so far and just ray traces the whole area required, hence their 'It just works' approach.
 
Sort of... it has 72 CUs, each capable of ray tracing, but they're not "RT cores"; it's a shared pipeline. I suspect that as drivers get more refined and AMD suss out the most efficient scheduling, their RT will improve a lot from here, and then as their DLSS counterpart comes into play we *should* see the gap close. But, and this is one of the complaints I levy against other people for making the same argument, the future is not now, and now is what matters. For now, if you want to use ray tracing then you pretty much have to go with Nvidia.
They have 72 RT cores AND 72 compute units. The difference is in how the game is instructed to use those cores, and code written for one vendor doesn't work for the other: the way they use the shader core and the RT core in the same CU at the same time differs at the hardware level, via the asynchronous engine. If a game's ray tracing is programmed for Nvidia, it will use one OR the other instead of running them asynchronously on the AMD GPU, and vice versa.

This is why I distinguish between RTX and DXR.
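The "one OR the other" versus asynchronous point can be illustrated with a toy Python sketch, where sleeps stand in for GPU work. Everything here is purely illustrative; nothing models a real GPU API, and the function names are made up:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def rt_work():
    time.sleep(0.2)          # stand-in for BVH traversal on the RT units
    return "rays"

def shading_work():
    time.sleep(0.2)          # stand-in for shading on the compute units
    return "pixels"

# Serial: the "one OR the other" case, where RT and shading
# never overlap.
start = time.perf_counter()
serial = (rt_work(), shading_work())
serial_time = time.perf_counter() - start

# Asynchronous: both workloads in flight at once, as an async
# engine would allow.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(rt_work), pool.submit(shading_work)]
    overlapped = tuple(f.result() for f in futures)
overlap_time = time.perf_counter() - start

print(serial == overlapped)        # same results either way
print(overlap_time < serial_time)  # but the overlapped run finishes sooner
```

The results are identical; only the wall-clock time changes, which is the claimed cost of running Nvidia-tuned RT code serially on the other vendor's scheduler.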
 
They have 72 RT cores AND 72 compute units. The difference is in how the game is instructed to use those cores, and code written for one vendor doesn't work for the other: the way they use the shader core and the RT core in the same CU at the same time differs at the hardware level, via the asynchronous engine. If a game's ray tracing is programmed for Nvidia, it will use one OR the other instead of running them asynchronously on the AMD GPU, and vice versa.

This is why I distinguish between RTX and DXR.

Are the RT cores embedded in the CU then? I was under the impression that they didn't have dedicated RT hardware, but since I bought into Nvidia this gen I've only glanced at them and not done a deep dive; maybe I misinterpreted what I'd read about it *shrug*
 
Are the RT cores embedded in the CU then? I was under the impression that they didn't have dedicated RT hardware, but since I bought into Nvidia this gen I've only glanced at them and not done a deep dive; maybe I misinterpreted what I'd read about it *shrug*

On Nvidia they're not in the SM but coupled to it; AMD's RT cores are physically inside the CU. It's a different approach to layout, but they are all dedicated RT cores.

 
We still don't really know how RT will perform on AMD cards in games designed for their RT. Dirt is the only real comparison, and the 6800 XT takes the same hit as the 3080 does with RT shadows on. Why is the 3080 taking the same hit if its RT hardware is so much more powerful? Everything else is designed with Nvidia RTX in mind. For me, I'll wait to see what happens in vendor-neutral games before saying the AMD cards are bad at RT. Cyberpunk is another RTX showcase, so I don't expect much from AMD in this case, but you never know, as AMD are now working with them. PS5/Xbox games with RT will all be coded with AMD hardware in mind, so it might not look great at the moment, but I can see things improving in the future.

That's because the RT hardware's advantage grows the more RT work you throw at it. Dirt 5 uses extremely limited RT effects, which is why the performance hit is very minor on any card.
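That amortisation argument can be put into rough numbers with a toy frame-time model (all figures are invented for illustration; neither card's real throughput is being modelled):

```python
def fps(raster_ms, rt_workload, rt_ms_per_unit):
    """Toy frame-time model: a fixed raster cost plus an RT cost that
    scales with how many RT effects the game piles on."""
    return 1000.0 / (raster_ms + rt_workload * rt_ms_per_unit)

# Hypothetical cards: card B takes twice as long per unit of RT work.
for workload, label in [(1, "light RT, Dirt 5-style"),
                        (10, "heavy RT, CP2077-style")]:
    a = fps(10.0, workload, 0.5)   # faster RT hardware
    b = fps(10.0, workload, 1.0)   # slower RT hardware
    print(f"{label}: {a:.0f} fps vs {b:.0f} fps")
```

With a light workload the RT cost is a small slice of the frame, so both cards land within a few fps of each other; pile on ten times the RT work and the gap between the two cards widens dramatically, which is the Dirt 5 vs Cyberpunk pattern described above.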
 
Are the RT cores embedded in the CU then? I was under the impression that they didn't have dedicated RT hardware, but since I bought into Nvidia this gen I've only glanced at them and not done a deep dive; maybe I misinterpreted what I'd read about it *shrug*

It's a hybrid approach. There is one fixed-function ray tracing unit per CU, and it is hardware accelerated, but it's a repurposed texture processor. So that unit does the ray tracing near the beginning of the pipeline and switches over to do the texturing at the end of the pipeline.
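A crude way to picture that shared-unit trade-off, with made-up cycle counts and no claim to model the real silicon:

```python
def shared_unit_cycles(ray_ops, texture_ops):
    """RDNA 2-style toy model: one repurposed texture processor per CU
    serves ray/box tests early in the pipeline and texture fetches
    later, so the two workloads queue on the same silicon."""
    return ray_ops + texture_ops

def separate_unit_cycles(ray_ops, texture_ops):
    """Turing/Ampere-style toy model: a dedicated RT core runs
    alongside the texture units, so the longer workload dominates."""
    return max(ray_ops, texture_ops)

print(shared_unit_cycles(100, 80))    # 180: the work serialises
print(separate_unit_cycles(100, 80))  # 100: the work overlaps
```

Under this toy model the shared unit never beats the separate one, but the difference only bites when both workloads are heavy at once, which fits the "light RT games barely show a gap" observation earlier in the thread.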
 
Nvidia and AMD take the same broad approach, with dedicated RT hardware acceleration built in alongside the shaders; that's why each has as many RT units as it has CUs/SMs: 72 for the 6800 XT and 68 for the 3080.
The difference is in how they use async compute to tie all that together, so they require a different programming approach.
--------------

Anyway, I have found this:



"For the Series X, this work is offloaded onto dedicated hardware and the shader can continue to run in parallel with full performance. In other words, Series X can effectively tap the equivalent of well over 25 TFLOPs of performance while ray tracing."

The Series X has 52 RT cores.

We won't know what RDNA2's true RT performance is until more games are designed around its RT architecture.

https://www.eurogamer.net/articles/digitalfoundry-2020-inside-xbox-series-x-full-specs
Ahhh, my bad, I thought they were doing it via software on the normal cores :) Thanks for the heads up.
 
That's an odd one. I can only imagine AMD are running an RT pass at a lower resolution and then smoothing the edges. It's not a game where the RT use stands out. I think they did well with shadows in SOTTR too? I get the feeling Nvidia takes a belt-and-braces approach so far and just ray traces the whole area required, hence their 'It just works' approach.
There is nothing odd about it:
- You don't have an answer to it.
- He refuted your claim.


Is anyone getting any joy with this game on a 2080, or is it better to wait for a 3080 upgrade?
Hold out for a while.
 