
Nvidia v RADEON Aging, Future Ray Tracing, FSR, Switch OLED | Lead Game Artist | Broken Silicon 109

Caporegime
Joined
12 Jul 2007
Posts
40,543
Location
United Kingdom
If you don't like MLID, skip this one.


I found it very interesting to listen to the Lead Game Artist's opinions on the various subjects discussed; there are timestamps in the video if you want the TL;DR.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,150
Shame Bryan wasn't interviewed by someone else, to be brutally honest - he had a lot of interesting insights, but MLID wasn't really getting the best out of him, either just grunting when there was more of a point to tease out and/or interrupting with a kind of irrelevant tangent or to put words in his mouth.

Some good points about the way dedicated ray tracing hardware like nVidia's always adds to your frame time compared to a traditional approach, as it needs most of the frame processed already to have the data to do its job; but at the same time you really need that kind of dedicated hardware to get the performance required for higher-end use of ray tracing. As he mentioned, it would be good if there was some way to use that hardware earlier in the pipeline, while it is sitting idle, to somehow accelerate processing the frame. But without some crazy context-switching approach you'll never leverage compute shaders to that level.
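
To illustrate that dependency in a hand-rolled way (my own framing, not something from the video): passes like RT reflections or RT GI read the G-buffer that rasterization produces, so simply sorting the render passes by their dependencies pushes the RT work towards the end of the frame. The pass names and dependencies below are made-up illustrative values.

Code:
# Toy dependency sort of render passes (made-up names/edges): the RT passes
# depend on the G-buffer, so they can only be scheduled after rasterization.
# Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

# pass -> the passes it depends on
render_graph = {
    "geometry (G-buffer)": set(),
    "shadow maps": set(),
    "rt reflections": {"geometry (G-buffer)"},
    "rt global illumination": {"geometry (G-buffer)"},
    "lighting/shading": {"geometry (G-buffer)", "shadow maps",
                         "rt reflections", "rt global illumination"},
    "post-processing": {"lighting/shading"},
}

for render_pass in TopologicalSorter(render_graph).static_order():
    print(render_pass)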
 
Soldato
Joined
6 Feb 2019
Posts
17,594
Shame Bryan wasn't interviewed by someone else, to be brutally honest - he had a lot of interesting insights, but MLID wasn't really getting the best out of him, either just grunting when there was more of a point to tease out and/or interrupting with a kind of irrelevant tangent or to put words in his mouth.

Some good points about the way dedicated ray tracing hardware like nVidia's always adds to your frame time compared to a traditional approach, as it needs most of the frame processed already to have the data to do its job; but at the same time you really need that kind of dedicated hardware to get the performance required for higher-end use of ray tracing. As he mentioned, it would be good if there was some way to use that hardware earlier in the pipeline, while it is sitting idle, to somehow accelerate processing the frame. But without some crazy context-switching approach you'll never leverage compute shaders to that level.

Microsoft is currently making games with ray traced audio; I wonder if the RT hardware can accelerate it.
 
Associate
Joined
1 Oct 2009
Posts
1,033
Location
Norwich, UK
Shame Bryan wasn't interviewed by someone else, to be brutally honest - he had a lot of interesting insights, but MLID wasn't really getting the best out of him, either just grunting when there was more of a point to tease out and/or interrupting with a kind of irrelevant tangent or to put words in his mouth.

Some good points about the way dedicated ray tracing hardware like nVidia's always adds to your frame time compared to a traditional approach, as it needs most of the frame processed already to have the data to do its job; but at the same time you really need that kind of dedicated hardware to get the performance required for higher-end use of ray tracing. As he mentioned, it would be good if there was some way to use that hardware earlier in the pipeline, while it is sitting idle, to somehow accelerate processing the frame. But without some crazy context-switching approach you'll never leverage compute shaders to that level.

Agree with this take. My guess would be that RT ops being done near the end of the pipeline is necessary for the architecture, at least if the rays are being used on visual effects that rely on sampling the outcome of rasterization. Also, if you're swapping effects from rasterization to ray tracing, like global illumination, then while you add to the end of the pipeline you also save some of the classic rendering time. It won't be equal in total frame time, but then RT is adding that additional fidelity, so no matter what, things are going to be more expensive. It's a miracle they got real-time ray tracing working in any capacity this early on in real-time rendering.

I'd also bet that if the RT cores spend time idle waiting for rasterization to finish, that means less power draw, lower temps and so higher clocks. If you ran them asynchronously I'd expect the final products to run hotter and clock lower, kind of analogous to AVX instructions on modern CPUs: you can bench a modern CPU at very fast speeds, but if you bench with software using AVX instructions those same clock speeds will fail miserably, which is why almost everyone overclocking uses an AVX offset on the multiplier.
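
As a back-of-the-envelope sketch of that trade-off (all numbers made up for illustration): swapping a rasterised effect such as screen-space GI for an RT version removes that effect's raster cost but adds a larger RT cost, so total frame time still rises even though some raster work is saved.

Code:
# Back-of-the-envelope frame budget (all numbers hypothetical).
raster_frame_ms = 12.0   # frame cost with traditional rendering only
raster_gi_ms = 2.5       # portion of that spent on faked/screen-space GI
rt_gi_ms = 5.0           # cost of the ray traced replacement

rt_frame_ms = raster_frame_ms - raster_gi_ms + rt_gi_ms
print(f"raster only: {raster_frame_ms:.1f} ms/frame ({1000 / raster_frame_ms:.0f} fps)")
print(f"with RT GI:  {rt_frame_ms:.1f} ms/frame ({1000 / rt_frame_ms:.0f} fps)")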

I took the side of MLID on a lot of those hardware topics; end users don't care about the details, they care about the experience a product provides. A follow-up question I would have is: what alternatives would you spend that additional frame time on anyway? This has been my position on RT for a while now: it's fine to pan it for its performance cost and question whether the trade-off for visuals is worth it. But what else are you going to do with those transistors if you, say, ripped out the RT cores? What's the rasterization replacement, more fakery which requires even more development time by artists to hand-craft? I'd be more inclined to buy into those lines of argument if people were saying, "look, here's the cool thing we could do otherwise", and showed us some new pretty things with rasterization. I think that well is running dry at this point.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,150
A follow-up question I would have is: what alternatives would you spend that additional frame time on anyway? This has been my position on RT for a while now: it's fine to pan it for its performance cost and question whether the trade-off for visuals is worth it. But what else are you going to do with those transistors if you, say, ripped out the RT cores? What's the rasterization replacement, more fakery which requires even more development time by artists to hand-craft? I'd be more inclined to buy into those lines of argument if people were saying, "look, here's the cool thing we could do otherwise", and showed us some new pretty things with rasterization. I think that well is running dry at this point.

Yeah, indeed. The time for faked approaches, hybrid ray tracing, etc. was IMO 10 years ago; it's not what we should desire going forwards.
 
Associate
Joined
20 Nov 2020
Posts
1,120
Microsoft is currently making games with ray traced audio; I wonder if the RT hardware can accelerate it.
After playing a game with RT audio you will not enjoy these fake audio games we have right now. I wonder if DLSS can help too. :D
If you don't like MLID, skip this one.


I found it very interesting to listen to the Lead Game Artist's opinions on the various subjects discussed; there are timestamps in the video if you want the TL;DR.
That was a great story about Nvidia removing the hardware scheduler and gaining more FPS in DX11. :D
 
Soldato
Joined
6 Feb 2019
Posts
17,594
After playing a game with RT audio you will not enjoy these fake audio games we have right now. I wonder if DLSS can help too. :D

Yes, that is something rroff discussed in another thread: can AI upscale audio to save on resources, and can a human notice the difference?

Microsoft haven't said how well their ray traced audio games run; they've just mentioned that it doesn't run on the CPU because it would take up far too many CPU resources to do so.

And you may joke about it, but ray traced audio is a big upgrade over even something like Dolby Atmos - instead of being limited to 2/5/7/11 channels, audio can have virtually a million channels from 360 degrees. The only difficult part is conveying that information over your headphones. This is probably Microsoft's attempt to compete with the 360 3D audio Sony uses on the PS5 and for music; Sony uses a fixed-function audio chip in the PS5 to calculate 3D audio.

Ray traced audio also allows for new features, like real-time true reverberation - notice how IRL if you walk into a big empty building it sounds hollow and sounds echo inside, but you never really get this in games, and some games that tried to emulate it are using pre-recorded sounds. Also, most game sound today is pre-recorded - with real-time ray tracing you can produce accurate, natural sounds based on the actual material of the asset in the game.
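
As a rough sketch of the reverberation idea (a toy geometric-acoustics example, not how any shipping engine does it): trace reflection paths from the source to the listener and turn each path length into a delay and attenuation, so the reverb falls out of the room geometry rather than a pre-recorded tail. The room sizes, positions and simple 1/r falloff below are made-up illustrative values.

Code:
# Toy geometric-acoustics reverb: first-order image-source method in a 2D
# rectangular room. Each reflection path's length becomes a delay (and a
# simple 1/r attenuation), so a big hall "sounds" different from a small room.
import math

SPEED_OF_SOUND = 343.0  # m/s

def early_reflections(source, listener, room_w, room_d):
    """Return (delay_s, attenuation) for the direct path and the four
    single-bounce wall reflections, via mirrored 'image' sources."""
    sx, sy = source
    lx, ly = listener
    images = [
        (sx, sy),                  # direct path
        (-sx, sy),                 # reflection off the x = 0 wall
        (2 * room_w - sx, sy),     # reflection off the x = room_w wall
        (sx, -sy),                 # reflection off the y = 0 wall
        (sx, 2 * room_d - sy),     # reflection off the y = room_d wall
    ]
    paths = []
    for ix, iy in images:
        dist = math.hypot(ix - lx, iy - ly)
        paths.append((dist / SPEED_OF_SOUND, 1.0 / max(dist, 1.0)))
    return paths

for name, (w, d) in {"small room": (4.0, 5.0), "large hall": (30.0, 40.0)}.items():
    paths = early_reflections((1.0, 1.0), (w - 1.0, d - 1.0), w, d)
    delays = ", ".join(f"{delay * 1000:.1f}" for delay, _ in paths)
    print(f"{name}: reflection delays (ms): {delays}")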
 
Last edited:
Soldato
Joined
19 Oct 2004
Posts
4,213
Location
London
Yes, that is something rroff discussed in another thread: can AI upscale audio to save on resources, and can a human notice the difference?

Microsoft haven't said how well their ray traced audio games run; they've just mentioned that it doesn't run on the CPU because it would take up far too many CPU resources to do so.

And you may joke about it, but ray traced audio is a big upgrade over even something like Dolby Atmos - instead of being limited to 2/5/7/11 channels, audio can have virtually a million channels from 360 degrees. The only difficult part is conveying that information over your headphones. This is probably Microsoft's attempt to compete with the 360 3D audio Sony uses on the PS5 and for music; Sony uses a fixed-function audio chip in the PS5 to calculate 3D audio.

Ray traced audio also allows for new features, like real-time true reverberation - notice how IRL if you walk into a big empty building it sounds hollow and sounds echo inside, but you never really get this in games, and some games that tried to emulate it are using pre-recorded sounds. Also, most game sound today is pre-recorded - with real-time ray tracing you can produce accurate, natural sounds based on the actual material of the asset in the game.

You only have two ears, so headphones are fine. In real life your brain interprets volume, delay and pitch to work out direction and speed.
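
For illustration, a standard textbook approximation (Woodworth's formula, not something from the thread) of the interaural time difference the brain uses for direction, given an average head radius and the angle of the source:

Code:
# Woodworth's approximation of interaural time difference (ITD):
# ITD = (r / c) * (sin(theta) + theta), theta = source azimuth in radians.
import math

HEAD_RADIUS = 0.0875     # metres, typical adult head
SPEED_OF_SOUND = 343.0   # m/s

def itd_seconds(azimuth_deg):
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)

for angle in (0, 30, 60, 90):
    print(f"{angle:3d} deg off-centre -> ITD {itd_seconds(angle) * 1e6:.0f} microseconds")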
 
Man of Honour
Joined
13 Oct 2006
Posts
91,150
After playing a game with RT audio you will not enjoy these fake audio games we have right now.

I've never enjoyed audio in games since hearing a demo of a sadly never released version of Aureal A3D with full material simulation, etc. on a modified Quake 2 engine (there is a mod of Quake 2 with A3D support but it doesn't include the updated code).
 
Soldato
Joined
12 May 2014
Posts
5,236
You only have two ears, so headphones are fine. In real life your brain interprets volume, delay and pitch to work out direction and speed.
It was fascinating in the Road to PS5 event when Cerny mentioned how the shape of your ears aids in achieving our 360-degree hearing. I wonder, if you could transplant one person's ear onto another person, would they need to relearn this? I assume that as we grow our brain builds the algorithm for this and modifies it as our ears grow with us.

I know right, 50% extra performance just isn't worth all that time they could spend on marketing instead of optimising their crappy game engine
I would recommend watching the video; they had an interesting section on this topic :D
TL;DR: draw call limits for certain games.


I was interested in his desire for more TMUs and shaders (I think) in GPUs, because he wanted to be able to push more objects in a scene. I think this could be quite interesting and help to achieve scenes that are less barren. Game studios will need to increase in size though.
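
As a toy illustration of the draw call limits mentioned above (budget and object counts are made-up numbers): submitting one draw call per object blows past a per-frame budget long before the GPU runs out of shading power, while batching identical meshes into instanced draws keeps the submission count flat.

Code:
# Toy draw call count (hypothetical budget and scene contents).
from collections import Counter

DRAW_CALL_BUDGET = 5000  # per-frame budget before CPU submission cost bites

scene = ["horse"] * 300 + ["tree"] * 8000 + ["rock"] * 12000 + ["npc"] * 150

naive_draws = len(scene)               # one draw call per object
instanced_draws = len(Counter(scene))  # one instanced draw per unique mesh

print(f"naive:     {naive_draws} draw calls "
      f"({'over' if naive_draws > DRAW_CALL_BUDGET else 'within'} budget of {DRAW_CALL_BUDGET})")
print(f"instanced: {instanced_draws} draw calls")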

How many variations of horses does RDR2 have that they need 4 horse guys?
 
Associate
Joined
1 Oct 2009
Posts
1,033
Location
Norwich, UK
Thinking back to the PS5 tech talk, the presenter talked about the ray tracing hardware in the PS5 and what you can do with it, and listed audio as one of the things lower down the list in terms of performance cost. I don't think you need to cast very many rays to radically improve directional audio. Your ears can pinpoint direction reasonably well, but with nowhere near as much detail as your eyes.
 
Permabanned
Joined
31 Aug 2013
Posts
3,364
Location
Scotland
Dev’s thoughts on Ray Tracing, AMD vs Nvidia Hardware Scheduling ...

They discuss the problem with RT being that it adds time at the end of the frame. They agree the RT cores sit doing nothing until the frame is ready for RT to be added in. Well, that's not true, as Ampere can run the RT cores concurrently with compute/graphics. What this means is that work can begin on the next frame while RT completes the current one. E.g., for simplicity, if a frame required 8ms of graphics and 8ms of RT, the first frame would take 16ms, but subsequent frames would only require 8ms. Feel free to correct me on this, but I think the artist and the YouTuber are wrong.
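
A small sketch of that argument using the same 8ms numbers (illustrative only, and it assumes frame N's RT really can overlap frame N+1's raster/compute work): the first frame pays both stages, after which the pipeline outputs a frame every max(raster, RT).

Code:
# Frame-time model (illustrative): serial = raster then RT every frame;
# pipelined = frame N's RT overlaps frame N+1's raster/compute.
def frame_times(raster_ms, rt_ms, frames, concurrent):
    times = []
    for i in range(frames):
        if not concurrent:
            times.append(raster_ms + rt_ms)  # RT strictly after raster
        else:
            # first frame pays both stages, then one frame per max(raster, RT)
            times.append(raster_ms + rt_ms if i == 0 else max(raster_ms, rt_ms))
    return times

print("serial:   ", frame_times(8.0, 8.0, 5, concurrent=False))
print("pipelined:", frame_times(8.0, 8.0, 5, concurrent=True))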

RDR2's DLSS patch is the perfect example of why we should be demanding RT, as RDR2 to me looks very dated.

At 1440p with everything maxed, using an undervolted 3080 (900mV/1920MHz):

Code:
            DLSS OFF   DLSS Quality
Min (fps)    37.3913        28.0293
Max (fps)    62.0109       119.305
Avg (fps)    48.9649        84.4729


Not sure why the minimums are so low, as the lowest I've noticed with DLSS Quality is 59. Perhaps it's just down to loading at the scene change?

I'm still using an Ivy Bridge 3770k, 32GB DDR3, 1*512GB SATA SSD and 4*4TB SATA SSD.

Great timing with the DLSS patch as I just picked up a 32GP850-B panel last week :)

I'll admit to playing a lot of Metro Exodus Enhanced at the moment, having not long finished Control Ultimate Edition, so my expectations are above average; but I have frames to burn even with this ancient system, and I'd like to burn them on RT GI at the very least.

 
Last edited:
Permabanned
Joined
31 Aug 2013
Posts
3,364
Location
Scotland
I've never enjoyed audio in games since hearing a demo of a sadly never released version of Aureal A3D with full material simulation, etc. on a modified Quake 2 engine (there is a mod of Quake 2 with A3D support but it doesn't include the updated code).

I bought an R9 290 in the hope AMD would develop TrueAudio further.
 