Crytek demos DXR on Vega 56

Can't see how, but it would be rather funny if AMD could simulate ray tracing to the point where there's an imperceptible difference in visual quality, and even funnier if there's a smaller performance hit :-)

Would be quite awkward for Nvidia - I wonder how they would play it off?
 
The narrative from Nvidia is that DXR needs their hardware, which isn't true at all. So why do you think it's only simulated?
 
Technology Reveal: Real-Time Ray Traced Reflections achieved with CRYENGINE. All scenes are rendered in real-time in-editor on an AMD Vega 56 GPU. Reflections are achieved with the new experimental ray tracing feature in CRYENGINE 5 - no SSR.

Neon Noir was developed on a bespoke version of CRYENGINE 5.5, and the experimental ray tracing feature based on CRYENGINE’s Total Illumination used to create the demo is both API and hardware agnostic, enabling ray tracing to run on most mainstream, contemporary AMD and NVIDIA GPUs. However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.

 
It's not proprietary to Nvidia. Microsoft DXR (or whatever it's called) is vendor-neutral (if that's what's in use here).

I think the dedicated RT units on the 2000 series are only there for acceleration.
 
There's some heavy blurring in that video; I think that's how they're managing to do this on a V56.

It leads me to think that what the Metro Exodus devs were saying about RT on non-RTX hardware, maybe even consoles, was more prescient than initially thought. Perhaps they knew something about this.

It doesn't really matter - be it dedicated hardware or just enough compute power to do it in shader units, I believe it would be viable. For the current generation - yes, multiple solutions are the way to go.
This is also a question of how long you support a parallel pipeline for legacy PC hardware. A GeForce GTX 1080 isn't an out-of-date card as far as someone who bought one last year is concerned. So, these cards take a few years to phase out and for RT to become fully mainstream to the point where you can just assume it. And obviously on current generation consoles we need to have the voxel GI solution in the engine alongside the new RT solution. RT is the future of gaming, so the main focus is now on RT either way.

In terms of the viability of RT on next generation consoles, the hardware doesn't have to be specifically RTX cores. Those cores aren't the only thing that matters when it comes to ray tracing. They are fixed-function hardware that speeds up the calculations specifically relating to the BVH intersection tests. Those calculations can be done in standard compute if the compute cores are numerous and fast enough (which we believe they will be on the next gen consoles). In fact, any GPU that is running DX12 will be able to "run" DXR since DXR is just an extension of DX12.
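To make that concrete, here's a minimal C++ sketch of the slab-method ray/box test that sits at the heart of BVH intersection - the struct layout and names are illustrative, not taken from CRYENGINE or DXR. The point is that it's ordinary arithmetic: fixed-function RT cores accelerate it, but nothing stops general compute or shader code from doing the same work.

```cpp
#include <algorithm>

// Illustrative types only -- not from any particular engine or API.
struct Ray  { float origin[3]; float invDir[3]; };   // invDir = 1 / direction, precomputed
struct AABB { float lo[3];     float hi[3];     };   // a BVH node's bounding box

// Slab-method ray/AABB intersection: the kind of work RT cores accelerate,
// but which is equally expressible as plain compute/shader arithmetic.
bool intersectAABB(const Ray& r, const AABB& b, float tMax)
{
    float t0 = 0.0f, t1 = tMax;
    for (int axis = 0; axis < 3; ++axis) {
        float tNear = (b.lo[axis] - r.origin[axis]) * r.invDir[axis];
        float tFar  = (b.hi[axis] - r.origin[axis]) * r.invDir[axis];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);    // latest entry across the slabs so far
        t1 = std::min(t1, tFar);     // earliest exit across the slabs so far
        if (t0 > t1) return false;   // intervals don't overlap: the ray misses the box
    }
    return true;
}
```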

Other things that really affect how quickly you can do ray tracing are a really fast BVH generation algorithm, which will be handled by the core APIs, and really fast memory. The nasty thing that ray tracing does, as opposed to something like, say, SSAO, is randomly access memory. SSAO will grab a load of texel data from a local area in texture space, and because of the way those textures are stored there is a reasonably good chance that those texels will be quite close (or adjacent) in memory. Also, the SSAO for the next pixel over will work with pretty much the same set of samples. So, you have to load far less from memory because you can cache an awful lot of data.
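As a rough illustration of that coherence, here's a hypothetical screen-space gather in C++ (the function and buffer layout are made up for the example, not from any engine): every pixel reads a small neighbourhood of depth texels, and the next pixel over reads almost exactly the same ones, so most of the fetches come straight out of cache.

```cpp
#include <algorithm>

// Hypothetical SSAO-style gather over a depth buffer, purely illustrative.
// Neighbouring pixels re-read almost the same texels, so the accesses are
// highly coherent and the cache does most of the work.
float gatherNeighbourhood(const float* depth, int width, int height, int x, int y)
{
    float sum = 0.0f;
    for (int dy = -2; dy <= 2; ++dy) {
        for (int dx = -2; dx <= 2; ++dx) {
            int sx = std::clamp(x + dx, 0, width  - 1);   // clamp so edge pixels stay in bounds
            int sy = std::clamp(y + dy, 0, height - 1);
            sum += depth[sy * width + sx];                // nearby, cache-friendly read
        }
    }
    return sum / 25.0f;   // average over the 5x5 neighbourhood
}
```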

Working on data that is in cache speeds things up a ridiculous amount. Unfortunately, rays don't really have this same level of coherence. They can randomly access just about any part of the set of geometry, and the ray for the next pixel could be grabbing data from an equally random location. So as much as specialised hardware to speed up the calculations of the ray intersections is important, fast compute cores and memory which lets you get at your bounding volume data quickly is also a viable path to doing real-time RT.
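Contrast that with a BVH traversal loop. The sketch below is again hypothetical C++ (the flattened node layout and helper names are invented for the example, and the slab test is repeated so the snippet stands alone): every step chases a child index to an arbitrary slot in the node array, so rays for two neighbouring pixels can end up pulling nodes from completely different parts of memory - exactly the incoherent access pattern described above.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Ray { float origin[3]; float invDir[3]; };   // invDir = 1 / direction

// Hypothetical flattened BVH node; real engine layouts differ.
struct BvhNode {
    float   lo[3], hi[3];   // bounding box
    int32_t left, right;    // child indices into the node array, or -1 for none
};

// Same slab test as before, repeated here so this sketch is self-contained.
static bool hitsBox(const Ray& r, const BvhNode& n, float tMax)
{
    float t0 = 0.0f, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        float tn = (n.lo[a] - r.origin[a]) * r.invDir[a];
        float tf = (n.hi[a] - r.origin[a]) * r.invDir[a];
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// Traversal sketch: each iteration fetches a node from an arbitrary index, so
// consecutive rays touch scattered memory locations -- the incoherent access
// pattern that makes fast memory matter so much for real-time RT.
int countVisitedNodes(const std::vector<BvhNode>& nodes, const Ray& r, float tMax)
{
    if (nodes.empty()) return 0;
    int stack[64];
    int top = 0;
    int visited = 0;
    stack[top++] = 0;                               // start at the root node
    while (top > 0) {
        const BvhNode& n = nodes[stack[--top]];     // effectively random memory access
        ++visited;
        if (!hitsBox(r, n, tMax)) continue;         // prune this subtree
        if (n.left  >= 0 && top < 64) stack[top++] = n.left;
        if (n.right >= 0 && top < 64) stack[top++] = n.right;
    }
    return visited;
}
```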

And what are Vega's strengths if not exactly that? It's a brave new world, gents. :cool:
 
On an entirely unrelated note, I'm positive that Division 2 is utilising the HDR tone-mapping tech, as I'm finding it looks far better than the nVidia implementation (which looks rougher and harsher in comparison). Helps that there's a menu option for it, I suppose! Massive/AMD have done a really good job on it though, imo; it's smooth and better than mere benchmarks would indicate :)

It's these small QoL changes that really make them stand out atm.
 