RTX performance overhead - 9.2ms (ouch!)

Take a deep breath, and relax. You're projecting a lot of things onto what I said.

If you think the CUDA/Tensor implementation tells us nothing at all, then you've lost track of what words mean. Clearly it does tell us something, especially because we also have data points where that exact implementation is compared against the one with RT cores: the SW demo, where the performance differential is very clear. If you want to hand-wave that away as "nothing" and rely solely on your imagination, feel free. I'll stick to the data available.

If you were just alluding to the fact that the data is there, that wouldn't be an issue. You're not, though. By intentionally leaving your post hanging, you're putting emphasis on the fact that they've been working on these features for months, as if that has any bearing on the outcome.
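
For a sense of scale, here's a rough sketch (Python) of what a fixed per-frame cost like the 9.2 ms in the thread title does to framerate. The baseline framerates below are made-up examples, and a real workload won't behave as a purely additive cost, so treat this as arithmetic, not a prediction:

```python
# Back-of-the-envelope: effect of a fixed 9.2 ms per-frame ray-tracing cost
# on framerate. The 9.2 ms figure is the one from the thread title; the
# baseline framerates below are made-up examples, not measurements.

RT_OVERHEAD_MS = 9.2

def fps_with_overhead(base_fps: float, overhead_ms: float = RT_OVERHEAD_MS) -> float:
    """Framerate after adding a fixed per-frame cost (in milliseconds)."""
    return 1000.0 / (1000.0 / base_fps + overhead_ms)

for base in (60, 100, 144):
    print(f"{base:>3} fps -> {fps_with_overhead(base):5.1f} fps with +{RT_OVERHEAD_MS} ms per frame")
```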


To use an analogy: back when games for the PS2's Emotion Engine were being written on PC hardware and we saw them rendering at 12 frames per second, those of us who are open-minded wouldn't instantly assume this implied the final game was going to perform the same way.


We know that Tensor Cores are intrinsically involved in the process, but with rays not being cast on the RT cores, it doesn't give us any real idea of final performance, as casting the rays is the most computationally intensive part of the process.


Three days with the Turing cards, but they worked on the implementation for months with Titan Vs.
 
All this talk of Titan Vs - they don't have the RT cores that Turing has anyway, so any development done on the Titan Vs would be useless, as it probably wouldn't translate to the new Turing RT cores.
 
No opinion on final RT performance, but there is no way devs only had 3 days to 2 weeks to implement their demos IMO; they'll have been supplied some sort of code/hardware to play with.
 
Genuinely excited for RT and can't bloody wait to give it a go. Probably expecting a bit much at my res of 3440x1440, but hopefully I can drop some settings to get decent performance, and in games like Tomb Raider I'd quite happily play at ~45 fps.
 
Yes, they were supplied Titan Vs, which can run a slow software ray-tracing path on the CUDA cores with no hardware acceleration. Turing is around 6 to 10x faster, but the hardware is entirely different. So you can't optimize on Volta and expect the same optimizations to work on Turing, and you have no idea what new optimizations are possible on Turing.
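
To put the "6 to 10x" claim into numbers, here's a quick sketch; the Volta frame cost used below is a hypothetical placeholder, not a measurement from any demo:

```python
# Illustrative only: if the software (CUDA-core) ray-tracing path on a Titan V
# took some number of milliseconds per frame, and Turing's RT cores are
# "around 6 to 10x faster", the hardware path would land somewhere in this
# range. The 40 ms Volta figure is a made-up placeholder, not a measurement.

volta_sw_rt_ms = 40.0                  # hypothetical software ray-tracing cost per frame
speedup_low, speedup_high = 6.0, 10.0  # the "6 to 10x" figure from the post

best_case = volta_sw_rt_ms / speedup_high
worst_case = volta_sw_rt_ms / speedup_low
print(f"Estimated Turing RT cost: {best_case:.1f} ms to {worst_case:.1f} ms per frame")
```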
 
The development on the Titan V would have been very beneficial, as denoising is critical to getting performance to any sort of acceptable level: it reduces the number of rays needed per frame.

I think people have the wrong idea when they talk about optimisations with regard to getting Ray Tracing working on Turing cards.
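
As a toy illustration of why denoising lets you cut the ray count: on a flat image region, a simple spatial filter averages away Monte Carlo noise much as extra rays per pixel would. Real denoisers (the AI-based ones being discussed here) are far more sophisticated, so the NumPy sketch below, with entirely synthetic numbers, only shows the basic variance-reduction idea:

```python
# Toy example: 1 ray per pixel gives a noisy estimate of a flat patch; a crude
# 3x3 box filter (standing in for a real denoiser) cuts the noise roughly
# threefold on this flat region, similar to spending ~9x more rays per pixel.
# All numbers here are synthetic.

import numpy as np

rng = np.random.default_rng(0)
true_value = 0.5        # "ground truth" radiance of the flat patch
noise_per_ray = 0.2     # standard deviation of a single-ray estimate

one_spp = true_value + rng.normal(0.0, noise_per_ray, size=(256, 256))

# crude 3x3 box-filter "denoise"
padded = np.pad(one_spp, 1, mode="edge")
denoised = sum(
    padded[dy:dy + 256, dx:dx + 256]
    for dy in range(3) for dx in range(3)
) / 9.0

print("1 spp noise (std dev):      ", round(float(one_spp.std()), 4))
print("after 3x3 filter (std dev): ", round(float(denoised.std()), 4))
```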
 
The denoising is entirely irrelevant: that is really just an API call, so there is no optimization to do there.
 
Yeah - puts cards like the 2080 in an odd place, as only the Ti seems remotely feasible for using the effects at a level that makes them worth it over more traditional techniques.

Standard GPU practice though, innit: roll out new features that are technically 'supported' across the range but don't always perform well across the board. Kinda like the GeForce FX series and DX9, although to be fair they weren't charging the sort of money being asked for here.
 