Ray Tracing on an AMD Graphics Card beats nvidia

Associate
Joined
24 Nov 2010
Posts
2,314
@pmc25 AMD are not doing anything with Ray Tracing, that's down to game / engine developers, nothing to do with the GPU architecture, and again you don't need RT cores to do anything like what's in Metro Exodus, TR, BF5..... nVidia use Turing for machine learning; they are also selling those GPUs to retail as gaming GPUs, but to make sure you buy them they are all but claiming Ray Tracing as their own and insisting you need these GPUs to run it, much like they did with PhysX.

Modern GPUs have more than enough compute power to Ray Trace in games, it's just a lot of work for game / engine developers to implement. CryEngine is one of the few engines that have, but I don't think it's alone.

I think you missed the entire point of what I was saying. The only dedicated hardware for RT that hasn't proved to be useless was Imagination Technologies' tech in terms of efficiency, but AFAIK they never proceeded to more powerful accelerators, so it was effectively pointless not long after release.

But the RTX stuff is in NVIDIA's consumer GPUs *ostensibly* for RT and DLSS, both of which would be faster without it, and NOT machine learning etc. It's a total bust from every possible angle except marketing - which a bunch of people initially believed was 'revolutionary', including many on these boards, despite immediate evidence to the contrary.

Also, unless you like low frame rates and / or taking screengrabs with all the eye candy on, I'd strongly dispute that "Modern GPUs have more than enough compute power to Ray Trace in games" in a way that actually increases image quality in non-static or pre-rendered scenes. Not sure about other people's eyes, but smoothness, resolution and shading trump lighting when it comes to image quality.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,148
RTX is really inefficient both in absolute terms and in cost / size of silicon. A 754mm² Turing GPU minus RTX would be immensely faster using DX or Vulkan RT APIs with its compute-based shaders than the 2080Ti could ever be using RTX.

General compute is no substitute for dedicated ray tracing hardware, which is essentially churning through vast amounts of relatively simple maths.
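
To give a feel for the kind of maths involved, here's a rough toy sketch (plain C++, hypothetical, not anything from an actual driver or engine) of the slab test a BVH traversal repeats millions of times per frame to decide whether a ray can skip a whole chunk of the scene:

```cpp
// Toy slab test: does a ray cross an axis-aligned box? This is the kind of
// simple arithmetic a BVH traversal repeats millions of times per frame.
// Hypothetical example only - not any vendor's actual hardware or driver code.
#include <algorithm>
#include <cstdio>

struct Vec3 { float x, y, z; };

// origin + t*dir for t in [tMin, tMax]; invDir holds 1/dir per axis.
bool rayHitsBox(Vec3 origin, Vec3 invDir, Vec3 lo, Vec3 hi, float tMin, float tMax)
{
    // Entry/exit distances for the x, y and z "slabs" of the box.
    float tx1 = (lo.x - origin.x) * invDir.x, tx2 = (hi.x - origin.x) * invDir.x;
    float ty1 = (lo.y - origin.y) * invDir.y, ty2 = (hi.y - origin.y) * invDir.y;
    float tz1 = (lo.z - origin.z) * invDir.z, tz2 = (hi.z - origin.z) * invDir.z;

    tMin = std::max(tMin, std::min(tx1, tx2));
    tMax = std::min(tMax, std::max(tx1, tx2));
    tMin = std::max(tMin, std::min(ty1, ty2));
    tMax = std::min(tMax, std::max(ty1, ty2));
    tMin = std::max(tMin, std::min(tz1, tz2));
    tMax = std::min(tMax, std::max(tz1, tz2));

    // The ray crosses the box only if all three slab intervals overlap.
    return tMin <= tMax;
}

int main()
{
    Vec3 origin{0.0f, 0.0f, 0.0f};
    Vec3 invDir{1.0f, 1e30f, 1e30f};   // ray pointing along +x (1/0 approximated)
    bool hit = rayHitsBox(origin, invDir, {1, -1, -1}, {2, 1, 1}, 0.0f, 100.0f);
    std::printf("hit = %d\n", hit);    // prints: hit = 1
    return 0;
}
```

An RT core is basically fixed-function silicon for doing that kind of test (plus the ray / triangle hits) without tying up the shader ALUs; the argument in this thread is whether games actually need that.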
 
Caporegime
Joined
17 Mar 2012
Posts
47,636
Location
ARC-L1, Stanton System
I think you missed the entire point of what I was saying. The only dedicated hardware for RT that hasn't proved to be useless was Imagination Technologies' tech in terms of efficiency, but AFAIK they never proceeded to more powerful accelerators, so it was effectively pointless not long after release.

But the RTX stuff is in NVIDIA's consumer GPUs *ostensibly* for RT and DLSS, both of which would be faster without it, and NOT machine learning etc. It's a total bust from every possible angle except marketing - which a bunch of people initially believed was 'revolutionary', including many on these boards, despite immediate evidence to the contrary.

Also, unless you like low frame rates and / or taking screengrabs with all the eye candy on, I'd strongly dispute that "Modern GPUs have more than enough compute power to Ray Trace in games" in a way that actually increases image quality in non-static or pre-rendered scenes. Not sure about other people's eyes, but smoothness, resolution and shading trump lighting when it comes to image quality.

This is nVidia's marketing, their justification for selling you large, expensive GPUs; I disagree with it entirely.

Turing are workstation GPUs marketed as needing those RT cores to do Ray Tracing at reasonable performance in games. This is simply false, they don't need them; nVidia are simply ensuring 'their' implementation of Ray Tracing works badly without RT cores. They did the same thing with PhysX, Hairworks, God Rays.... the same technology was available in other APIs and ran flawlessly, 'better even', on any hardware, Bullet, Havok.... RT cores are the new CUDA Cores, you need them for XXXXX and only nVidia have them, nVidia fakery.
 

HRL

Soldato
Joined
22 Nov 2005
Posts
3,028
Location
Devon
I think NVIDIA have much more to fear from Intel's entry than AMD do. They're both the market leader in terms of units and revenue (excluding consoles) and the dishonest monopolist, with very dodgy marketing to boot. Firstly, Intel are huge and, if they want to, can completely drown out NVIDIA's marketing efforts. Secondly, no-one does dishonest, abusive monopolist practices better than Intel - hard as NVIDIA might try.

Also, Intel are absolutely bound to throw their hat in with what AMD are doing on Ray Tracing and virtually everything else. That is, general purpose hardware, and with Intel's might, finally a big push for Vulkan - they've heavily distanced themselves from MS on the API side in recent years.

RTX is really inefficient both in absolute terms and in cost / size of silicon. A 754mm² Turing GPU minus RTX would be immensely faster using DX or Vulkan RT APIs with its compute-based shaders than the 2080Ti could ever be using RTX. The whole thing is a joke, and NVIDIA would have known it very early in the design and simulation phase - long before they got any test silicon. However ... marketing and dishonesty, and selling a big price tag - that's how they justified it.

Problem is, with Intel entering, they're not going to be able to pull the wool over so many people's eyes. Because JHH and fellow execs can't stand to lose face, I expect the 2020 NVIDIA cards to also have RTX ... and it will hurt NVIDIA badly.

Intel will face advantages and disadvantages due to their new entry. Clean slate - no legacy, no backwards compatibility, no continuity. They can design purely for performance / efficiency / cost. Their 10nm process is likely to be a significant disadvantage. It might be a tiny, tiny bit more dense than TSMC / Samsung 7nm EUV, but judging by projected Zen 2 clocks (not on EUV) vs projected Intel 10nm clocks, they're unlikely to be able to clock near the competition, all else being equal.

Don’t you mean INTEL? :D
 
Soldato
Joined
25 Nov 2011
Posts
20,639
Location
The KOP
Tech reviewers are completely bloody useless and just take whatever nVidia say as fact; they don't even bother to do any of their own research.

Ray tracing in Cryengine is not a tech demo and it is not coming to Cryengine 5.5; it's been in Cryengine for years, as far back as 3.4 (Crysis 3), and has been improved ever since. As for 5.5: if he would just spend 2 minutes looking he would see that Cryengine 5.5 has been out for about 3 months. https://www.cryengine.com/roadmap


Tim at Hardware Unboxed
https://youtu.be/Wn4MwnJhSvc?t=246

Me messing about with it in Cryengine 3.8 (2016)

Here: https://youtu.be/exW1SJUSr90?t=29

And here: https://youtu.be/exW1SJUSr90?t=120

Hunt Showdown (Crytek, Cryengine 5.2)

Video...

https://youtu.be/mU7HWCVASH0?t=181

[screenshots]

Those shadows are not real time ray tracing, Humbug, they are screen space reflections. I don't disagree with you that Cryengine has had some form of real time illumination, but it's never been used for real time shadows until that demo!

You can see those shadows in the video un-render as the player's view moves out of view. Look at the bottom of the screen; they do look really good though.
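
(A rough toy sketch of why that happens, with a made-up screen buffer rather than anything Cryengine actually does: a screen-space effect can only sample pixels that are currently on screen, so once the source moves out of the view there is simply no data left to fetch.)

```cpp
// Toy illustration of why screen-space effects vanish at the edges: the effect
// can only sample pixels that were rendered this frame, so as soon as the thing
// being reflected/shadowed leaves the view there is no data to fetch.
// Hypothetical buffers - not how any particular engine implements SSR.
#include <cstdio>

constexpr int W = 8, H = 8;
float sceneColour[H][W];               // what was rendered this frame

// Fetch a previously rendered pixel; outside the frame there is nothing to use.
bool sampleScreen(int x, int y, float& out)
{
    if (x < 0 || x >= W || y < 0 || y >= H) return false;   // off-screen: no data
    out = sceneColour[y][x];
    return true;
}

int main()
{
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            sceneColour[y][x] = 0.5f;

    float c;
    // A reflection whose source is still on screen works...
    std::printf("on-screen sample ok:  %d\n", sampleScreen(4, 4, c));
    // ...but once the camera pans and the source falls outside the frame,
    // the effect has nothing to show and "un-renders".
    std::printf("off-screen sample ok: %d\n", sampleScreen(-2, 4, c));
    return 0;
}
```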
 
Caporegime
Joined
17 Mar 2012
Posts
47,636
Location
ARC-L1, Stanton System
Those shadows are not real time ray tracing, Humbug, they are screen space reflections. I don't disagree with you that Cryengine has had some form of real time illumination, but it's never been used for real time shadows until that demo!

You can see those shadows in the video un-render as the player's view moves out of view. Look at the bottom of the screen; they do look really good though.

That game isn't using GI, as it is too performance-costly for lower end hardware and consoles, but the real time reflections not so much; believe it or not they are far less ray-intensive, which is why I posted the screenshots.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,148
Hardware is less efficient for this kind of thing and a lot less flexible.

Eh? RT is extremely threadable and it's possible to do large amounts in parallel, which makes it a poor match for current CPUs and means it is penalised on general compute hardware.
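
As a rough illustration of how embarrassingly parallel it is, here's a toy C++ sketch (a made-up one-sphere "scene", nothing like real engine code) where every pixel's primary ray is traced independently and the rows are simply dealt out across however many threads the machine has:

```cpp
// Toy example of why ray tracing threads so well: each pixel's primary ray is
// completely independent, so rows can be dealt out across any number of workers.
// Made-up one-sphere "scene" - nothing like real engine code.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int W = 64, H = 64;

// Trace one primary ray from the origin through pixel (x, y) and report whether
// it hits a unit sphere centred at (0, 0, -3).
float tracePixel(int x, int y)
{
    float u = (x + 0.5f) / W * 2.0f - 1.0f;
    float v = (y + 0.5f) / H * 2.0f - 1.0f;
    float dx = u, dy = v, dz = -1.0f;
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);
    dx /= len; dy /= len; dz /= len;
    float b = 3.0f * dz;              // dot(origin - centre, dir)
    float disc = b * b - 8.0f;        // |origin - centre|^2 - radius^2 = 9 - 1
    return (disc >= 0.0f && -b - std::sqrt(disc) > 0.0f) ? 1.0f : 0.0f;
}

int main()
{
    std::vector<float> image(W * H, 0.0f);
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;

    for (unsigned t = 0; t < workers; ++t)
        pool.emplace_back([&image, t, workers] {
            for (int y = int(t); y < H; y += int(workers))   // every Nth row per thread
                for (int x = 0; x < W; ++x)
                    image[y * W + x] = tracePixel(x, y);
        });
    for (auto& th : pool) th.join();

    int hits = 0;
    for (float p : image) hits += (p > 0.0f) ? 1 : 0;
    std::printf("sphere covers %d of %d pixels\n", hits, W * H);
    return 0;
}
```

Each ray only reads the scene and writes its own pixel, so there is nothing to synchronise between them.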
 
Soldato
Joined
28 Sep 2014
Posts
3,437
Location
Scotland
Total Illumination in Cryengine 3.8, a 3-year-old engine. According to Tim at Hardware Unboxed, Total Illumination is coming to Cryengine 5.5..... Like, for ##### sake!!!!

Docs... https://docs.cryengine.com/pages/viewpage.action?pageId=25535599

Here it is in action; I just recorded this in the Sandbox Editor.

Also notice the ammo box in the foreground takes on the colour of the grass behind it. What Total Illumination does is fire off thousands of rays every frame and use those to calculate indirect lighting; with it off it's standard cube map overlaid lighting, with it on it's true ray traced illumination based on environmental light sources.
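
To put that in concrete terms, here's a tiny toy sketch of the idea (a made-up sky-and-ground environment in plain C++, nothing like CryEngine's actual voxel implementation): estimate the indirect light at a point by firing lots of random rays over the hemisphere and averaging what they see, so a sideways-facing surface picks up the light bouncing off whatever is behind it:

```cpp
// Toy sketch: estimate indirect light at a surface point by firing many random
// rays over the hemisphere and averaging what they "see". The environment is a
// made-up grey sky over a dim ground bounce - nothing like CryEngine's actual
// voxel-based Total Illumination.
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { float x, y, z; };

// What a bounce ray returns in this toy world: bright sky if it heads up,
// a dimmer "ground colour" bounce if it heads down.
float sampleEnvironment(const Vec3& dir)
{
    return dir.y > 0.0f ? 1.0f : 0.25f;
}

// Unnormalised estimate of indirect light arriving at a point with normal n,
// averaged over `count` random hemisphere directions, weighted by incidence angle.
float indirectLight(const Vec3& n, int count)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);
    float sum = 0.0f;
    for (int i = 0; i < count; ++i) {
        // Rejection-sample a random unit direction, then flip it into n's hemisphere.
        Vec3 d;
        float len2;
        do {
            d = {uni(rng), uni(rng), uni(rng)};
            len2 = d.x * d.x + d.y * d.y + d.z * d.z;
        } while (len2 > 1.0f || len2 < 1e-6f);
        float len = std::sqrt(len2);
        d = {d.x / len, d.y / len, d.z / len};
        float cosTheta = d.x * n.x + d.y * n.y + d.z * n.z;
        if (cosTheta < 0.0f) { d = {-d.x, -d.y, -d.z}; cosTheta = -cosTheta; }
        sum += sampleEnvironment(d) * cosTheta;
    }
    return sum / float(count);
}

int main()
{
    // An upward-facing point sees mostly sky; a sideways-facing one sees half sky
    // and half ground bounce - the same effect as the ammo box picking up the
    // colour of the grass behind it.
    std::printf("up-facing surface:   %.2f\n", indirectLight({0, 1, 0}, 4096));
    std::printf("side-facing surface: %.2f\n", indirectLight({1, 0, 0}, 4096));
    return 0;
}
```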

Ray Tracing in games is nothing new and you don't need expensive specialized hardware to run it. All nVidia are doing is making it so they can sell you expensive hardware by claiming you need RT cores to run it at reasonable performance; you don't, it's BS!

Most modern hardware has more than enough floating point performance to run the calculations just fine; we are not in 2004.

1440P when Youtube is done encoding it.


Yeah, you can email Tim at Hardware Unboxed to show him Total Illumination has been used since Cry Engine 3.8.3, released on 26th August 2015! The email address is on the Hardware Unboxed YouTube about page.

Cry Engine 3.8.3 uses Total Illumination:

GI settings are located in: RollupBar -> Environment -> Total illumination

[screenshot]

https://docs.cryengine.com/display/SDKDOC2/Voxel-Based+Global+Illumination

Cry Engine 5.5 uses Total Illumination version 2:

The Global Illumination settings are located in Tools -> Level Editor -> Level Settings -> Total illumination

[screenshot]

https://docs.cryengine.com/pages/viewpage.action?pageId=25535599
 
Soldato
Joined
28 Sep 2014
Posts
3,437
Location
Scotland
This is nVidia's marketing, their reasoning for selling you large expensive GPU's, i disagree with it entirely.

Turing are workstation GPU's marketed as needing those RT cores to do Ray Tracing at reasonable performance in games, this is simply false, they don't need them, nVidia are simply insuring 'their' code of Ray Tracing works badly without RT cores, they did the same thing with PhysX, Hairworks, Good Rays.... the same technology was available in other API's and ran flawlessly 'better even' on any hardware, Bullet, Havoc.... RT cores are the new Cuda Cores, you need them for XXXXX and only nVidia have them, nVidia fakery.

I disagree with you about Nvidia RT cores; it's not false, it's not fake, it's not BS!

It made everybody, including you and me, wonder how Cry Engine Voxel-Based Global Illumination (SVOGI), or Total Illumination, ran flawlessly on any hardware. I think I have now figured out why it runs faster: if you read the Cry Engine SVOGI pages thoroughly you will find the performance part:

Performance

The performance depends on which GI settings are used. Usually on Xbox One it takes 4-5 ms of GPU time and on a good PC (GTX 780) it takes 2-3 ms (AO + Sun bounce, no point lights, low-spec mode). The fastest configuration is "AO only" mode; this provides large scale AO at a cost of about 2.5 ms on Xbox One.

https://docs.cryengine.com/display/SDKDOC2/Voxel-Based+Global+Illumination
https://docs.cryengine.com/pages/viewpage.action?pageId=25535599

Voxel-Based Global Illumination (SVOGI), or Total Illumination, does not use ray traced point lights! That explains how it ran flawlessly on any hardware; if it had used many ray traced point lights it would run very slowly, so it would need RT cores to run ray traced point lights faster. Shadow of the Tomb Raider uses ray traced point lights for shadows.

[screenshot]
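
A quick back-of-envelope sketch of why the cost is so different (the sample counts and light count here are made-up illustrative numbers, not figures from CryEngine or the Tomb Raider implementation): an AO + sun-bounce gather stays at a few rays per pixel, while per-light shadow rays multiply with the number of lights:

```cpp
// Back-of-envelope only: rays per frame scale with what you trace per pixel.
// The sample counts and light count below are made-up illustrative numbers, not
// figures from CryEngine's SVOGI or the Tomb Raider implementation.
#include <cstdio>

int main()
{
    const long long pixels = 1920LL * 1080LL;

    // SVOGI-style budget: a few AO / sun-bounce gathers per pixel, no per-light rays.
    const long long gatherPerPixel = 4;
    const long long giRays = pixels * gatherPerPixel;

    // Ray traced point-light shadows: shadow rays for every light affecting a pixel.
    const long long lights = 8;
    const long long shadowSamplesPerLight = 2;
    const long long shadowRays = pixels * lights * shadowSamplesPerLight;

    std::printf("GI gather rays per frame:        %lld\n", giRays);
    std::printf("per-light shadow rays per frame: %lld (%.1fx more)\n",
                shadowRays, double(shadowRays) / double(giRays));
    return 0;
}
```

The point is just that the per-light approach multiplies the ray count by the light count, which is where dedicated acceleration starts to earn its keep.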

The RTX 2080 ran slower without RT cores, so it really does need RT cores to accelerate ray tracing in games and benchmarks.

[screenshots]

The Metro Exodus frame times show the RTX 2080 with RT cores is roughly 3 times faster than the RTX 2080 without RT cores, and also roughly 5 times faster than the GTX 1080 Ti.

If you still think Nvidia RT cores are false, fake and BS, then wait until April when you can test the driver that will enable DXR support for Pascal, and you will see how slowly your GTX 1070 runs DXR ray tracing in Metro Exodus, Shadow of the Tomb Raider, Battlefield V, 3DMark Port Royal, Quake II VKPT, Quake II RTX, Control etc.
 
Soldato
Joined
28 May 2007
Posts
10,069
I disagree with you about Nvidia RT cores; it's not false, it's not fake, it's not BS!

It made everybody, including you and me, wonder how Cry Engine Voxel-Based Global Illumination (SVOGI), or Total Illumination, ran flawlessly on any hardware. I think I have now figured out why it runs faster: if you read the Cry Engine SVOGI pages thoroughly you will find the performance part:



https://docs.cryengine.com/display/SDKDOC2/Voxel-Based+Global+Illumination
https://docs.cryengine.com/pages/viewpage.action?pageId=25535599

Voxel-Based Global Illumination (SVOGI), or Total Illumination, does not use ray traced point lights! That explains how it ran flawlessly on any hardware; if it had used many ray traced point lights it would run very slowly, so it would need RT cores to run ray traced point lights faster. Shadow of the Tomb Raider uses ray traced point lights for shadows.

[screenshot]

The RTX 2080 ran slower without RT cores, so it really does need RT cores to accelerate ray tracing in games and benchmarks.

[screenshots]

The Metro Exodus frame times show the RTX 2080 with RT cores is roughly 3 times faster than the RTX 2080 without RT cores, and also roughly 5 times faster than the GTX 1080 Ti.

If you still think Nvidia RT cores are false, fake and BS, then wait until April when you can test the driver that will enable DXR support for Pascal, and you will see how slowly your GTX 1070 runs DXR ray tracing in Metro Exodus, Shadow of the Tomb Raider, Battlefield V, 3DMark Port Royal, Quake II VKPT, Quake II RTX, Control etc.

I am not saying RT cores have no benefit, but come on, they really could code anything a different way to get the results they want on different hardware. This ain't proof, it's just the way Nvidia want you to see it. RTX, for what's being coded, is definitely the fastest way to go about it. Do we need it after seeing the Crytek demo on a mid-range card? That is the question. They are not the same, but could we have the same with lesser hardware? I doubt Nvidia want us to see it that way.
 
Soldato
Joined
27 Feb 2015
Posts
12,621
RTX is still a hybrid solution but capable of a more complex level of actual tracing versus the hacked up software variants that have to use a lot of tricks to work around the really slow bits (i.e. animated objects are still being special cased, etc.).

Doesn't matter as long as it still looks good though.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,148
Doesn't matter as long as it still looks good though.

As per one of my other posts though, getting it to look good and work with proper performance in a wide range of scenarios is a lot harder. With software implementations like this using general compute power there is usually a lot of tricking it up and special casing, which involves a lot more workload; a proper RT implementation, even a hybrid one, will once matured handle a wide range of scenarios without any real attention.
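
As a toy illustration of the kind of special casing meant here (hypothetical structures, not taken from any real engine): static geometry gets its bounding volume built once at load time, while every animated object has to have its bounds refitted every single frame before you can trace against it:

```cpp
// Toy illustration of special-casing animated geometry: static objects' bounds
// are computed once, animated ones must be refitted every frame before tracing.
// Hypothetical structures only - not taken from any real engine.
#include <cstdio>
#include <vector>

struct AABB { float lo[3], hi[3]; };

struct Object {
    std::vector<float> verts;   // xyz triples
    bool animated;
    AABB bounds;
};

AABB computeBounds(const std::vector<float>& v)
{
    AABB b{{1e30f, 1e30f, 1e30f}, {-1e30f, -1e30f, -1e30f}};
    for (size_t i = 0; i < v.size(); i += 3)
        for (int a = 0; a < 3; ++a) {
            if (v[i + a] < b.lo[a]) b.lo[a] = v[i + a];
            if (v[i + a] > b.hi[a]) b.hi[a] = v[i + a];
        }
    return b;
}

int main()
{
    std::vector<Object> scene = {
        {{0, 0, 0,  1, 0, 0,  0, 1, 0}, false, {}},   // static triangle
        {{2, 0, 0,  3, 0, 0,  2, 1, 0}, true,  {}},   // animated triangle
    };

    // Build once for everything at load time.
    for (auto& o : scene) o.bounds = computeBounds(o.verts);

    // Per-frame work: only the animated special case needs revisiting.
    for (int frame = 0; frame < 3; ++frame) {
        int refits = 0;
        for (auto& o : scene)
            if (o.animated) {
                o.verts[1] += 0.1f;                   // pretend the mesh deformed
                o.bounds = computeBounds(o.verts);    // refit before tracing
                ++refits;
            }
        std::printf("frame %d: refitted %d object(s)\n", frame, refits);
    }
    return 0;
}
```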
 
Caporegime
Joined
24 Sep 2008
Posts
38,322
Location
Essex innit!
Great video and it looks fantastic. Looking at the thread title, I am a bit confused, as after watching I can't see where AMD beats NVidia. It does say that it will be integrated into Cryengine and make use of the latest hardware.

The OP is very misleading, and whilst I really want to see ray tracing take off in a big way, I want to see both AMD and NVidia pushing for it (and Intel, I guess).
 
Caporegime
Joined
17 Mar 2012
Posts
47,636
Location
ARC-L1, Stanton System
I disagree with you about Nvidia RT cores; it's not false, it's not fake, it's not BS!

It made everybody, including you and me, wonder how Cry Engine Voxel-Based Global Illumination (SVOGI), or Total Illumination, ran flawlessly on any hardware. I think I have now figured out why it runs faster: if you read the Cry Engine SVOGI pages thoroughly you will find the performance part:



https://docs.cryengine.com/display/SDKDOC2/Voxel-Based+Global+Illumination
https://docs.cryengine.com/pages/viewpage.action?pageId=25535599

Voxel-Based Global Illumination (SVOGI), or Total Illumination, does not use ray traced point lights! That explains how it ran flawlessly on any hardware; if it had used many ray traced point lights it would run very slowly, so it would need RT cores to run ray traced point lights faster. Shadow of the Tomb Raider uses ray traced point lights for shadows.

[screenshot]

The RTX 2080 ran slower without RT cores, so it really does need RT cores to accelerate ray tracing in games and benchmarks.

[screenshots]

The Metro Exodus frame times show the RTX 2080 with RT cores is roughly 3 times faster than the RTX 2080 without RT cores, and also roughly 5 times faster than the GTX 1080 Ti.

If you still think Nvidia RT cores are false, fake and BS, then wait until April when you can test the driver that will enable DXR support for Pascal, and you will see how slowly your GTX 1070 runs DXR ray tracing in Metro Exodus, Shadow of the Tomb Raider, Battlefield V, 3DMark Port Royal, Quake II VKPT, Quake II RTX, Control etc.

From your Cry Docs link

This GI solution is based on voxel ray tracing and provides the following effects:

A voxel is basically a 3D pixel: instead of a coloured dot in a 2D image, it's a small coloured cube in a 3D grid.
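
For anyone wondering what "voxel ray tracing" actually means in practice, here's a bare-bones toy sketch (a hypothetical 16x16x16 grid in plain C++, not CryEngine's SVOGI): instead of testing a ray against millions of triangles, you step it through the grid and stop at the first occupied cell:

```cpp
// Bare-bones idea of voxel ray tracing: step a ray through a coarse 3D grid and
// stop at the first occupied cell, instead of testing millions of triangles.
// Hypothetical 16x16x16 toy grid - not CryEngine's SVOGI.
#include <cstdio>

constexpr int N = 16;                  // grid covers the unit cube [0,1)^3
bool occupied[N][N][N] = {};           // true where scene geometry was voxelised

// March the ray origin + t*dir in half-voxel steps; report the first filled cell.
bool marchRay(float ox, float oy, float oz, float dx, float dy, float dz,
              int& hx, int& hy, int& hz)
{
    const float step = 0.5f / N;
    for (int i = 0; i < 4 * N; ++i) {
        float x = ox + dx * step * i;
        float y = oy + dy * step * i;
        float z = oz + dz * step * i;
        if (x < 0 || x >= 1 || y < 0 || y >= 1 || z < 0 || z >= 1)
            return false;              // ray left the grid without hitting anything
        int ix = int(x * N), iy = int(y * N), iz = int(z * N);
        if (occupied[ix][iy][iz]) { hx = ix; hy = iy; hz = iz; return true; }
    }
    return false;
}

int main()
{
    // "Voxelise" a wall: fill the slab of cells at z index 12.
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            occupied[x][y][12] = true;

    int hx = -1, hy = -1, hz = -1;
    bool hit = marchRay(0.5f, 0.5f, 0.05f, 0.0f, 0.0f, 1.0f, hx, hy, hz);
    std::printf("hit=%d at voxel (%d, %d, %d)\n", hit, hx, hy, hz);  // (8, 8, 12)
    return 0;
}
```

Cone tracing in SVOGI works on roughly the same principle, just against pre-filtered (mipmapped) versions of that grid so one sample can stand in for a whole bundle of rays.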

Anyway...
I think you misunderstood what I'm saying: those RT cores are used for Ray Tracing acceleration in workstations, but they are unnecessary for gaming, and the only reason nVidia use them in games is that they deliberately gimped the code to make it far less efficient, just like they did with PhysX and GameWorks.
 
Soldato
Joined
27 Feb 2015
Posts
12,621
Yeah, my point is that the gamer doesn't care how the end result is achieved; it may well be a pain in the backside for developers, but the gamer doesn't care. If software-based RT looks "almost as good" as hardware-based RT in a game on a much cheaper GPU then the gamer will be happy, even though there will be all sorts of shortcuts being made to get the performance.

Also it seems evident to me that when we see RTX ON/OFF comparisons the OFF is usually gimped to make the RTX look a bigger improvement than it actually is over existing tech.
 
Man of Honour
Joined
21 May 2012
Posts
31,940
Location
Dalek flagship
I ran this for fun earlier.

Both cards at stock.
SOTTR maxed out @2160p using Ray Tracing.
Titan RTX (which does have RT cores) vs Titan V (which does not have RT cores).

Titan RTX using max Ray Tracing
[screenshots]



Titan V using max Ray Tracing
[screenshots]



Titan RTX Ray Tracing off
[screenshots]
 
Caporegime
Joined
17 Mar 2012
Posts
47,636
Location
ARC-L1, Stanton System
Yeah, my point is that the gamer doesn't care how the end result is achieved; it may well be a pain in the backside for developers, but the gamer doesn't care. If software-based RT looks "almost as good" as hardware-based RT in a game on a much cheaper GPU then the gamer will be happy, even though there will be all sorts of shortcuts being made to get the performance.

Also it seems evident to me that when we see RTX ON/OFF comparisons the OFF is usually gimped to make the RTX look a bigger improvement than it actually is over existing tech.


So nVidia have these really big die, expensive GPUs; their RT cores provide great Ray Tracing and machine learning acceleration, no doubt about that, but they also need to be sold at retail, and at a high cost, so how do we justify that?

Almost pretend we invented Ray Tracing, gimp the crap out of the code so you need the accelerated performance of the RT cores, and in that way we can market them to the unsuspecting for lots of dosh.
-----

Crytek have been doing it for years, and not only is it vendor agnostic, it proves you don't need specialized trace acceleration to run it at reasonable performance.
 
Caporegime
Joined
24 Sep 2008
Posts
38,322
Location
Essex innit!
So nVidia have these really big die, expensive GPUs; their RT cores provide great Ray Tracing and machine learning acceleration, no doubt about that, but they also need to be sold at retail, and at a high cost, so how do we justify that?

Almost pretend we invented Ray Tracing, gimp the crap out of the code so you need the accelerated performance of the RT cores, and in that way we can market them to the unsuspecting for lots of dosh.
-----

Crytek have been doing it for years, and not only is it vendor agnostic, it proves you don't need specialized trace acceleration to run it at reasonable performance.
I've just googled and haven't seen a thing about previous versions of Cryengine being able to do ray tracing. Even Crytek have made a big thing about how 5.5 is RT enabled and better suited to the latest hardware.

Maybe I am missing something?
 