The Raytracing thread

DXR has a software fallback path, but it is up to the IHV to develop that in their drivers. The end result is basically going to be too slow on any card that doesn't have hardware-accelerated RT, so there is little point in DICE wasting their time.
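For anyone curious, here is a minimal sketch (using the public D3D12 API; the helper name is mine) of how a game could check whether the runtime/driver exposes any DXR path at all before offering the option:

```cpp
#include <windows.h>
#include <d3d12.h>

// Hypothetical helper: ask the D3D12 runtime whether this device/driver
// combination exposes DXR. Tier 1.0 or higher means raytracing calls will
// work (hardware or driver-provided path); NOT_SUPPORTED means the IHV has
// provided no path at all.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```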
 
The demo on YouTube looks distracting. And why are there wood and eyeball reflections? EYEBALL REFLECTIONS? WIND YOUR NECK IN, IT IS AN ONLINE SHOOTER OMG!!!

You will be watching for that enemy to shoot; I barely notice any reflections or hyper-realistic shadows, never mind eyeballs, but out-of-touch ("We don't play our own games") snake oil capitalists will try their damn best to tell you Ray Tracing matters.

Anyway, I am off to bed because the sheer stupidity of people hurts my brain in these situations. I guess I know now what they mean by "one born every minute", because even if I boycott this, people will still buy into it.

Chill your tattas, dude. It was a demo. Eyeball reflections are demonstrating what can be done with Raytracing at its most extreme. I sincerely doubt anybody was genuinely going to include eyeball reflections in the actual game. In-game cut scenes I can entirely see eyeball reflections happening, because there's nothing quite like the dramatic trope of the horrors of devastation and warfare reflecting in the emotionally-tortured eyes of the observer.

1st generation RT is going to be a laughable fad given how much you are expected to pay to cripple your game's performance, but 1st generations of groundbreaking technology always are a bit of a joke. It's hardly snake oil though.
 
I thought it was clear it would be exclusive to RT cards, as you cannot run it on a non-ray-tracing card at usable speeds.

A good comparison would be early Direct3D: you didn't "need" a 3D accelerator to run D3D games (only OpenGL ones), and you "could" run a D3D game on a 4MB S3 Virge if you wanted, but if you did, the transparency around objects would show as black boxes and it would generally look awful (worse than software rendering).

So, realtime raytracing has been around for years already in the industrial and content creation sectors. It is usually termed "Interactive Raytracing", and all the existing software uses either CUDA or OpenCL/Metal. Certainly they are not rendering frames in 16ms on consumer-level cards, but they are also casting many, many more rays compared to the hybrid raster/RT method that we're seeing here.

I want to know what's different about RTX or DXR such that it now requires special hardware and can't just run on existing compute cores. The early performance indicators of running RTX on Turing's tensor cores seem hardly a massive departure from what we already had in existing compute-accelerated raytracers.

It can't be the denoising, as AMD already have that running on OpenCL too.
 
I want to know what's different about RTX or DXR such that it now requires special hardware and can't just run on existing compute cores.
I explained that in the post you quoted. But yeah, there is a difference between "can't just run on existing hardware" and "is disabled in the game on existing hardware because the result would be unusable". It makes sense that DICE would disable the option if the result wouldn't be viable, as not doing so would just result in complaints.
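As a rough illustration of that "disabled because it would be unusable" point, here is a sketch of how a developer might gate the in-game toggle: only expose it when the device reports DXR support and a quick timing probe of the RT pass fits the frame budget. The function names and thresholds are hypothetical, not anything DICE has described.

```cpp
#include <windows.h>
#include <d3d12.h>

// Hypothetical gate for a settings menu. MeasureRtPassMs is a placeholder for
// whatever GPU timing the engine already does (e.g. timestamp queries around
// the raytracing pass on a representative scene).
bool ShouldExposeRtToggle(ID3D12Device* device,
                          double (*MeasureRtPassMs)(ID3D12Device*))
{
    constexpr double kFrameBudgetMs  = 16.6; // 60 fps target (assumed)
    constexpr double kRtShareOfFrame = 0.5;  // let RT use half the frame (assumed)

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))) ||
        options5.RaytracingTier < D3D12_RAYTRACING_TIER_1_0)
        return false; // no DXR path at all on this device/driver

    // Even if DXR works, hide the option when the probe says it can't keep up.
    return MeasureRtPassMs(device) <= kFrameBudgetMs * kRtShareOfFrame;
}
```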
 
So, realtime raytracing has been around for years already in the industrial and content creation sectors. It is usually termed "Interactive Raytracing", and all the existing software uses either CUDA or OpenCL/Metal. Certainly they are not rendering frames in 16ms on consumer-level cards, but they are also casting many, many more rays compared to the hybrid raster/RT method that we're seeing here.

I want to know what's different about RTX or DXR such that it now requires special hardware and can't just run on existing compute cores. The early performance indicators of running RTX on Turing's tensor cores seem hardly a massive departure from what we already had in existing compute-accelerated raytracers.

It can't be the denoising, as AMD already have that running on OpenCL too.


DXR can run on the compute cores; that is the software support, and it is up to AMD to enable it in their drivers. But it is much slower than using the dedicated RTX hardware, which is around 8x faster.
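To show why "around 8x" is the difference between usable and not, a bit of back-of-the-envelope arithmetic; only the 8x ratio comes from the post above, the 10 ms RT pass and 6 ms of raster work are assumed example figures.

```cpp
#include <cstdio>

int main()
{
    const double rtPassOnRtCoresMs = 10.0; // assumed cost of the hybrid RT pass on RT cores
    const double fallbackSlowdown  = 8.0;  // compute fallback vs dedicated hardware (from the post)
    const double rasterWorkMs      = 6.0;  // assumed non-RT portion of the frame

    const double withRtCores  = rasterWorkMs + rtPassOnRtCoresMs;                     // ~16 ms
    const double withFallback = rasterWorkMs + rtPassOnRtCoresMs * fallbackSlowdown;  // ~86 ms

    std::printf("RT cores: %.0f ms/frame (~%.0f fps)\n", withRtCores, 1000.0 / withRtCores);
    std::printf("Fallback: %.0f ms/frame (~%.0f fps)\n", withFallback, 1000.0 / withFallback);
    return 0;
}
```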
 
Won't ray tracing be pointless/laggy when VR kicks off more and takes more of the market share?

VR is a long way away from taking a big chunk of the PC gaming market. If you look at Twitch, for example, nearly all of the views go to titles of a competitive nature, and you can't do well in any of those titles in VR.

RT will be an excellent addition to VR in years to come though.
 
So, realtime raytracing has been around for years already in the industrial and content creation sectors. It is usually termed "Interactive Raytracing", and all the existing software uses either CUDA or OpenCL/Metal. Certainly they are not rendering frames in 16ms on consumer-level cards, but they are also casting many, many more rays compared to the hybrid raster/RT method that we're seeing here.

I want to know what's different about RTX or DXR such that it now requires special hardware and can't just run on existing compute cores. The early performance indicators of running RTX on Turing's tensor cores seem hardly a massive departure from what we already had in existing compute-accelerated raytracers.

It can't be the denoising, as AMD already have that running on OpenCL too.
A few things. First, games don't need as many rays per pixel, but they do need more FPS. While you can get away with Interactive Raytracing at 1 FPS or less in industrial use, that's useless for games that need 30 to 60+ FPS. It doesn't require special hardware to run; it's just that the special hardware is massively faster at games and faster at industrial "Interactive Raytracing". This special hardware has not only made RT practical for games, it also massively speeds up industrial ray tracing. It's not that it cannot run on existing compute cores, it's just that those cores are so slow that some devs have decided not to allow it via software. In some cases you can choose to run the RTX games/apps on older existing compute cores, but it's not a good experience due to the slow speeds.
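To put rough numbers on the rays-per-pixel versus FPS trade-off described above (the resolution and sample counts below are assumptions for illustration, not measurements):

```cpp
#include <cstdio>

int main()
{
    const double pixels = 2560.0 * 1440.0;

    // Offline/"interactive" preview: full-scene path tracing, many samples, ~1 fps.
    const double offlineRaysPerSec = pixels * 64.0 * 1.0;  // 64 rays per pixel at 1 fps

    // Hybrid game renderer: a couple of rays per pixel for reflections/shadows, 60 fps.
    const double gameRaysPerSec = pixels * 2.0 * 60.0;     // 2 rays per pixel at 60 fps

    std::printf("Offline preview: %.2e rays/s\n", offlineRaysPerSec);
    std::printf("Hybrid game:     %.2e rays/s\n", gameRaysPerSec);
    std::printf("Per frame the game casts ~%.0fx fewer rays but has ~%.0fx less time.\n",
                64.0 / 2.0, 60.0 / 1.0);
    return 0;
}
```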
 
A few things. First, games don't need as many rays per pixel, but they do need more FPS. While you can get away with Interactive Raytracing at 1 FPS or less in industrial use, that's useless for games that need 30 to 60+ FPS. It doesn't require special hardware to run; it's just that the special hardware is massively faster at games and faster at industrial "Interactive Raytracing". This special hardware has not only made RT practical for games, it also massively speeds up industrial ray tracing. It's not that it cannot run on existing compute cores, it's just that those cores are so slow that some devs have decided not to allow it via software. In some cases you can choose to run the RTX games/apps on older existing compute cores, but it's not a good experience due to the slow speeds.
What do you think of Nvidia's efforts here with ray tracing, Pottsey; are you impressed? Do you think there is a chance we could get 60+ FPS at 1440P with optimisation? I'd be interested to know how this compares to what Imagination were doing.
 
What do you think of Nvidia's efforts here with ray tracing, Pottsey; are you impressed? Do you think there is a chance we could get 60+ FPS at 1440P with optimisation? I'd be interested to know how this compares to what Imagination were doing.
Yes, it's impressive, and yes, in theory we could get 60fps at 1440p. The drivers are not optimised, and the current game support is a quick patch, in some cases less than two weeks' worth of coding, without being optimised. The devs have also not had a chance to learn how to code best for RT yet, and no one has built a game from the ground up with RT in mind. So far it's all been bolt-ons. There is for sure room for speed improvements. It all comes down to how the devs use and implement RT. I still expect, after all the above, there will still be some slow cases.

Not next generation, but I expect at some point RT will overtake current rendering methods. If you add extra GPU cores you get a larger boost to RT than you do to the current method, so hopefully each new generation will see a large boost in RT performance.

As for Imagination, I believe their version is far more efficient, but realistically they are not going to enter the desktop gaming market. Unless something crazy happens, like AMD licensing IMG RT tech, which seems very doubtful, I just cannot see how they could enter. We are more likely to see Imagination in some sort of VR headset or standalone device.
 
A few things. First, games don't need as many rays per pixel, but they do need more FPS. While you can get away with Interactive Raytracing at 1 FPS or less in industrial use, that's useless for games that need 30 to 60+ FPS. It doesn't require special hardware to run; it's just that the special hardware is massively faster at games and faster at industrial "Interactive Raytracing". This special hardware has not only made RT practical for games, it also massively speeds up industrial ray tracing. It's not that it cannot run on existing compute cores, it's just that those cores are so slow that some devs have decided not to allow it via software. In some cases you can choose to run the RTX games/apps on older existing compute cores, but it's not a good experience due to the slow speeds.

Yes, but, as I sort of hinted at in my last post, the existing interactive raytracers do not use hybrid raster techniques, casting rays only for reflections/specular/shadows or whatever RTX is doing; they render the entire scene using raytracing. That is orders of magnitude more compute-intensive than what we've seen so far from the RTX implementations, and based on that I am still not entirely sold on the idea that this new method using tensor cores is really, truly a whole lot faster than existing CUDA/OpenCL implementations.
 
Yes, but, as I sort of hinted at in my last post, the existing interactive raytracers do not use hybrid raster techniques, casting rays only for reflections/specular/shadows or whatever RTX is doing; they render the entire scene using raytracing. That is orders of magnitude more compute-intensive than what we've seen so far from the RTX implementations, and based on that I am still not entirely sold on the idea that this new method using tensor cores is really, truly a whole lot faster than existing CUDA/OpenCL implementations.

RTX isn't using the tensor cores for ray tracing: DLSS uses the tensor cores, while RTRT uses the RT cores.
 
RTX isn't using the tensor cores for ray tracing: DLSS uses the tensor cores, while RTRT uses the RT cores.
Yeah, a little voice in the back of my head was telling me I was using the wrong terminology! My bad.

Still, my point stands - when you consider it objectively against what already exists it doesn't seem all that impressive.

I'm just not seeing how this is particularly groundbreaking?
 
Yeah, a little voice in the back of my head was telling me I was using the wrong terminology! My bad.

Still, my point stands - when you consider it objectively against what already exists it doesn't seem all that impressive.

I'm just not seeing how this is particularly groundbreaking?

The info we have is there in the initial presentation: running the same raytracing code on Pascal compute cores takes on the order of 300ms, on a 4x Volta system it was 55ms, and on Turing it's 45ms.
Now, obviously that's not a realistic level for gaming, as it needs to be sub-16ms, but the scene referenced was basically movie quality, so they were targeting 25fps, not game quality. Basically, a game frame taking 16ms (60fps) would take around 96ms (10fps) on an equivalent Pascal GPU. You are kind of right in as much as this builds on something that has already existed, but going from 100+ms (10fps) to the point where you can actually use it in games at 60-90fps is a massive jump from one generation to the next.
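Re-running that arithmetic with the quoted figures (300 ms on Pascal compute, 55 ms on 4x Volta, 45 ms on Turing); the 16 ms game frame is just the usual 60 fps budget, and the scaled result is only as good as the ratio it is derived from:

```cpp
#include <cstdio>

int main()
{
    const double pascalMs = 300.0, voltaMs = 55.0, turingMs = 45.0; // figures from the presentation
    const double pascalToTuring = pascalMs / turingMs;              // ~6.7x
    const double voltaToTuring  = voltaMs / turingMs;               // ~1.2x

    const double gameFrameTuringMs = 16.0;                               // 60 fps budget on Turing
    const double gameFramePascalMs = gameFrameTuringMs * pascalToTuring; // ~107 ms, roughly 9-10 fps

    std::printf("Pascal -> Turing speedup: %.1fx (4x Volta -> Turing: %.1fx)\n",
                pascalToTuring, voltaToTuring);
    std::printf("A %.0f ms Turing frame scales to ~%.0f ms (~%.0f fps) on Pascal compute.\n",
                gameFrameTuringMs, gameFramePascalMs, 1000.0 / gameFramePascalMs);
    return 0;
}
```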

I'm actually interested to see whether, if you basically just use it for lighting/shadows and not for reflections, you can actually boost performance, as faked lights and shadows use up quite a lot of resources as well as looking fake. We might find that on its lowest settings RTX can actually offer a performance boost instead of being used to maximise IQ, by offloading something that was being done on the CUDA cores to the RT cores.
 
Yes, but, as I sort of hinted at in my last post, the existing interactive raytracers do not use hybrid raster techniques, casting rays only for reflections/specular/shadows or whatever RTX is doing; they render the entire scene using raytracing. That is orders of magnitude more compute-intensive than what we've seen so far from the RTX implementations, and based on that I am still not entirely sold on the idea that this new method using tensor cores is really, truly a whole lot faster than existing CUDA/OpenCL implementations.
Are you aware that RTX also boosts the speed of the existing interactive raytracers rendering the entire scene, i.e. doing that orders-of-magnitude more compute-intensive workload?

The only real difference that matters between the RTX hardware and the old hardware is that RTX is a lot faster. All the ray tracing we could do before, we can now do faster via RTX. The old hardware was too slow for hybrid RT, but the new RTX hardware has hit the critical point in speed to be usable.
 