What do gamers actually think about Ray-Tracing?

With the rapid advancement of AI and the growing importance of Tensor cores in Nvidia's hardware, I wonder if there's a point in the near future where they will do away with RT cores, replace them with additional CUDA cores with RT instruction sets, and use AI to produce the additional frames.
 
Tensor cores are for mixed-precision computing; for example, they are capable of INT8 compute, which is used for machine learning and AI.

RT cores are different. You can't use Tensor cores for RT, so you still need them.
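To illustrate the difference in plain C++ (a rough sketch, not actual GPU code; the function names are just for illustration): tensor cores accelerate low-precision matrix and dot-product maths, while RT cores accelerate the ray/box and ray/triangle tests used to walk a BVH.

// Plain C++ sketch (not real GPU code) of the two very different jobs:
// tensor cores accelerate small-matrix / dot-product maths on low-precision
// values; RT cores accelerate ray/box and ray/triangle tests.
#include <cstdint>
#include <cstdio>
#include <algorithm>
#include <utility>

// The kind of work a tensor core does: an INT8 dot product accumulated in INT32.
int32_t int8_dot(const int8_t* a, const int8_t* b, int n) {
    int32_t acc = 0;
    for (int i = 0; i < n; ++i) acc += int32_t(a[i]) * int32_t(b[i]);
    return acc;
}

// The kind of work an RT core does: a ray vs axis-aligned box (slab) test,
// the basic step of traversing a BVH.
bool ray_hits_box(const float orig[3], const float inv_dir[3],
                  const float bmin[3], const float bmax[3]) {
    float tmin = 0.0f, tmax = 1e30f;
    for (int i = 0; i < 3; ++i) {
        float t0 = (bmin[i] - orig[i]) * inv_dir[i];
        float t1 = (bmax[i] - orig[i]) * inv_dir[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

int main() {
    int8_t a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
    float orig[3] = {0, 0, 0}, inv_dir[3] = {1, 1, 1};
    float bmin[3] = {1, 1, 1}, bmax[3] = {2, 2, 2};
    std::printf("dot = %d, hit = %d\n", int8_dot(a, b, 4),
                (int)ray_hits_box(orig, inv_dir, bmin, bmax));
}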
 
[Image: annotated Cyberpunk 2077 screenshot showing the light-bounce areas described below]

Red: purple emitters and purple reflective bounce. An absolutely tiny emitter area, a huge area of light bounce.

Blue area: an absolutely gigantic area of green bounce, brighter than the sign it comes from.

White circle: to showcase how far down the green bounces and its strength.

Light green line: to show that the top of the ventilation box is near enough the same height as the surface of his forearm.

Yet the human is a lit-up popsicle while there is no green light on the vent box at all. Zero smoke interaction either.

He has a point: this is not even close to how natural light bounce looks.
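As a rough sanity check on the falloff you would expect (a minimal sketch with made-up numbers, not measurements from the scene): light arriving from a small, point-like emitter drops off with the square of the distance, so a tiny source shouldn't flood a huge area at near-constant brightness.

// Rough sanity check: irradiance from a small, point-like emitter falls off
// as cos(theta) / r^2, so doubling the distance quarters the light arriving.
// All numbers here are made up for illustration.
#include <cmath>
#include <cstdio>

// I = emitter intensity, r = distance, theta_rad = incidence angle.
double irradiance(double I, double r, double theta_rad) {
    return I * std::cos(theta_rad) / (r * r);
}

int main() {
    const double I = 10.0;                        // hypothetical emitter intensity
    const double distances[] = {1.0, 2.0, 4.0, 8.0};
    for (double r : distances)                    // doubling distance quarters E
        std::printf("r = %.0f -> E = %.3f\n", r, irradiance(I, r, 0.0));
}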

I'm not hating on RT, it's great tech, but in this example at least it's still hand-crafted, faked. It's a showcase in lighting branded as RTX, but it's no less faked lighting than rasterization.

Like everything in Cyberpunk, IMO. Not that it looks bad, it doesn't, but it's just an advert; the whole game is an RTX advert. You would hardly notice real light bounce, and that's not a good advert.
 
Correct, but that's not quite the point I was making. If you use Tensor cores to accelerate AI instructions (or in AMD's case WMMA, except for their CDNA cards, which have dedicated hardware for this sort of work) to enable frame generation, you could potentially dispose of RT cores altogether and replace the die space with additional CUDA cores that do the path-tracing work instead (similar to how tessellation is handled). I'm not sure if the general-purpose cores are strong enough yet to take over, but given where the industry is going with AI I can't see any other outcome.
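For a sense of what "the general-purpose cores doing the RT work" means in practice, here is a plain C++ sketch of a ray/triangle test (Möller-Trumbore); illustrative only, not code from any shipping engine. Without fixed-function RT units, this kind of routine runs on the ordinary shader/CUDA cores, whereas dedicated RT cores do the equivalent test in hardware.

// Plain C++ sketch of a ray/triangle test (Moller-Trumbore). On a GPU with no
// fixed-function RT units, code like this runs on the general-purpose
// shader/CUDA cores; dedicated RT cores perform the same test in hardware.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true (and the hit distance t) if the ray (orig, dir) hits triangle v0v1v2.
bool ray_triangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;          // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > eps;
}

int main() {
    Vec3 orig{0, 0, -1}, dir{0, 0, 1};
    Vec3 v0{-1, -1, 0}, v1{1, -1, 0}, v2{0, 1, 0};
    float t;
    std::printf("hit = %d\n", (int)ray_triangle(orig, dir, v0, v1, v2, t));
}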
 
If that were the case you could also do away with shaders. Tensor cores do maths; RT cores and shaders both do shading. It's not the same thing: one is a calculator, the other a paintbrush.
 
What you can do, if you want to go down this route, is what AMD have done. AMD do not have 'dedicated RT cores'; AMD's shaders have a dual shader-plus-RT function.

Not all shaders are in use at any given time, so AMD switch the shaders that aren't in use over to RT. The upside of that is you use less die space; the downside is it only works well when not too many shaders are busy, because if you need those shaders for shader work then you have to give some of them up to RT.
This is why the more difficult the RT work, the more performance AMD lose relative to their shader performance. In Cyberpunk on the Medium RT setting a 7900 GRE is just as good as an RTX 4070; on Ultra, or extreme, or whatever the highest RT setting in Cyberpunk is called, the RTX 4070 pushes ahead, because to run that harder RT the 7900 GRE needs to give up more of its shaders to RT.
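As a rough illustration of that trade-off (a toy model with made-up numbers, not benchmark data): if shading and RT share one pool of units, every unit borrowed for RT is a unit not shading, so heavier RT settings cost proportionally more shader throughput.

// Toy model (made-up numbers) of the shared-unit trade-off described above.
#include <cstdio>

int main() {
    const double total_units = 100.0;             // hypothetical shader pool
    const double rt_loads[] = {0.1, 0.3, 0.6};    // fraction of units RT needs
    for (double rt : rt_loads) {
        double shading_units = total_units * (1.0 - rt);
        std::printf("RT load %.0f%% -> %.0f units left for shading (%.0f%% of peak)\n",
                    rt * 100, shading_units, (shading_units / total_units) * 100);
    }
}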

At the same time though, if you're getting 53 FPS on the 7900 GRE and 55 FPS on the 4070 at medium RT settings then you're good... if however you're getting 18 FPS at the ultra setting on the 7900 GRE and 24 FPS on the 4070, then yes it's a lot faster, but it's still unplayable.

Now look at how 90% of mainstream tech journos review GPUs with Cyberpunk and why, IMO, it's sus. There seems to be some green involved with it, because there is no way they do this and think it's a good way to go about it.
 
This. You have the usual folks who worship RT on an altar as if it's the second coming and who swear the reviews are all unbiased! :D
 
It IS the second coming for graphics rendering though... it's the realisation, in real-time gaming, of that little lampshade playing ball with its parent lampshade four decades ago. That lampshade, and every CGI movie like Toy Story that came after.

It's what people who like graphics have been dreaming about for a very long time. I don't think anyone who appreciates graphics yet continues to rip on RT gets how important it is.
 
Pixar uses the RenderMan render engine, which only gained its ray-tracing algorithm within the last decade (RenderMan 19, I think, released around 2018). Toy Story 1-3 were not rendered using ray tracing. Toy Story 4 may not have been either, as there was a transition period where Pixar still used the old Reyes algorithm due to certain advantages.
 

Luxo Jr., back in the day :)

Fun times in tech. Funny that Nvidia didn't last with Apple either... one wonders why :cry: ; it's been all AMD workstation cards in Apple desktops for a long time now. I'd forgotten Nvidia was ever in Apple systems; I just remember Microsoft, Sony and other companies running for their lives once they got involved with Nvidia, never to return.

$600 back in 2001 is $1,051.95 in 2024 money.

So the top card was $1,051.95 back then in 2024 prices. Today a 4090 FE is $1,600, and that's not even the full chip enabled; realistically it's $2,000+ for any decent AIB card, and a top AIB like the ASUS Strix OC is $2,300 in the USA right now from well-known hardware retailers.
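For what it's worth, the conversion above as plain arithmetic (the inflation factor below is simply implied by the two figures quoted, not taken from an official CPI table):

// The inflation adjustment quoted above, as arithmetic. The factor is derived
// from the post's own figures ($1,051.95 / $600 ~= 1.75), not from a CPI table.
#include <cstdio>

int main() {
    double price_2001 = 600.0;
    double factor_2001_to_2024 = 1051.95 / 600.0;   // ~1.75x, implied by the post
    std::printf("$%.0f in 2001 is about $%.2f in 2024 money\n",
                price_2001, price_2001 * factor_2001_to_2024);
    std::printf("a $1600 card today is about $%.2f in 2001 money\n",
                1600.0 / factor_2001_to_2024);
}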

See how the world has gone mad? And you can't blame inflation. Technology was meant to get cheaper over time, which it did, but nowhere near what was promised back then, and even back then $600 for a GPU was seen as obscene. The difference is that back then prices would come down, and cards would even end up in the cheap bins at most PC stores before the new generation was out. I lost count of how many times I picked up graphics cards, sound cards, motherboards and CPUs from the cheap bins, at a third or more off the retail price, just because they had damaged packaging, had been returned, or were even brand new and unopened but the store wanted to clear them for new stock.

Now you never see that; they keep them at the same price and do the fake Black Friday deals and sale deals, $50 off if you're lucky... I mean in the UK.
 
I now know the RT evangelists' answer to life, the universe and everything, in a cult-which-Jobs-built kind of way, is:

you are looking [at] it wrong

And anyone who thinks that Cyberpunk 2077 looks poor, or is too dark, will get crucified.

That Cyberpunk 2077 bar scene with the one guy looked like a bad case of bloom gone mad. Never liked bloom since I first came across it in Oblivion (or was it even Morrowind?), especially with the NPCs looking like they were self-lit.

For the RT enthusiasts: CB did a performance-per-Euro comparison, and for those who care about RT they provide two charts:
[Image: CB performance-per-Euro charts at 1440p]

(They also have the charts for 1080p and 4K, but I thought 1440p is a nice middle ground.)
I rather like the idea of personalised charts, but unfortunately in this article, unlike a lot of their others, they do not allow you to select your own game suite.
Personally, I would prefer a value-for-money chart for when you expect to need lots of VRAM, but I guess the 8/10/12GB cards are easy enough to ignore - although on the Nvidia side that leaves the 4070 Ti Super as the minimum, which is too steep IMO.
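The metric behind those charts is just FPS divided by price; a minimal sketch of it below (the card names, FPS figures and prices are placeholders, not values from the article):

// Minimal sketch of a performance-per-Euro ranking: FPS divided by price.
// Card names, FPS and prices are made up for illustration.
#include <cstdio>
#include <string>
#include <vector>
#include <algorithm>

struct Card { std::string name; double fps_1440p_rt; double price_eur; };

int main() {
    std::vector<Card> cards = {
        {"Card A", 55.0, 550.0},   // hypothetical numbers
        {"Card B", 53.0, 500.0},
        {"Card C", 70.0, 800.0},
    };
    std::sort(cards.begin(), cards.end(), [](const Card& a, const Card& b) {
        return a.fps_1440p_rt / a.price_eur > b.fps_1440p_rt / b.price_eur;
    });
    for (const auto& c : cards)
        std::printf("%s: %.3f FPS per Euro\n", c.name.c_str(),
                    c.fps_1440p_rt / c.price_eur);
}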
 
One of the big selling points of RT is allegedly how it makes things easier for developers. A game like HZD has amazing visuals even on the PS5 and will improve on PC. There is no RT used in the game, but the developers and artists have taken great care to manually design an engine with great shadows, lighting, contrast and so on. This is quite labour-intensive, and I know this from experience of working as a graphics artist on some games.

So it is obvious a game without RT can look amazing with a lot of work. RT gives more realistic lighting rather than pre-baked lighting, but the performance cost of that improvement is massive and beyond the reach of even the greatest GPUs without upscaling. Now I know some people swear by upscaling and say it's better than native, but in my experience upscaling does have some IQ issues. I can live with them, and overall I think Native > DLSS > XeSS > FSR, in that order. FSR isn't terrible, and at 4K quality it's generally very good, but it does need improvements IMHO.

I'm all for RT, and upscaling being required to achieve it does not bother me. The problem developers have is that RT is extra work right now, as they have to do the screen-space stuff as well, so it adds cost and time to implement RT. My prediction has always been that RT will evolve and become more mainstream, but we are still many years away from it being normal to the point where a developer can simply decide SSR is no longer needed.
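That "extra work" point in a nutshell (a hypothetical sketch; the types and function names are made up, not from any real engine): the screen-space path still has to exist and be maintained alongside the RT one, and something has to choose between them.

// Sketch of why RT is currently "extra work": both reflection paths have to be
// shipped and maintained, and one is picked per platform/frame budget.
// All names here are hypothetical, not from any real engine.
#include <cstdio>

enum class ReflectionMode { ScreenSpace, RayTraced };

struct RendererCaps { bool has_hw_rt; bool rt_budget_ok; };

// RT is chosen only when the hardware supports it and the frame budget allows;
// otherwise the screen-space fallback path is used.
ReflectionMode pick_reflections(const RendererCaps& caps, bool user_enabled_rt) {
    if (user_enabled_rt && caps.has_hw_rt && caps.rt_budget_ok)
        return ReflectionMode::RayTraced;
    return ReflectionMode::ScreenSpace;   // fallback path still has to exist
}

int main() {
    RendererCaps caps{true, false};       // hypothetical: HW RT present, but over budget
    ReflectionMode m = pick_reflections(caps, true);
    std::printf("using %s reflections\n",
                m == ReflectionMode::RayTraced ? "ray-traced" : "screen-space");
}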

Sorry for the long post, but my point is that the majority don't have the horsepower to run RT, and developers have no incentive to do massive amounts of RT, as it adds to development time and costs. They would also alienate the masses, and that might impact sales. So right now we are in a cycle of developers doing token levels of RT, and most people think "is this it?", turn it off again, and declare RT pointless.
 