The demo on YouTube looks distracting. And why are there wood and eyeball reflections? EYEBALL REFLECTIONS? WIND YOUR NECK IN, IT IS AN ONLINE SHOOTER OMG!!!
You will be watching for that enemy to shoot; I barely notice any reflections or hyper shadows, never mind eyeballs. But out-of-touch (we don't play our own games) snake oil capitalists will try their damn best to tell you Ray Tracing matters.
Anyway, I am off to bed because the sheer stupidity of people hurts my brain in these situations. I guess I now know what they mean by "one born every minute", because even if I boycott this, people will still buy into it.
I thought it was clear it would be exclusive to RT cards, as you cannot run it on a non-ray-tracing card at usable speeds.
A good comparison would be early Direct3D: you didn't "need" a 3D accelerator to run D3D games (only OpenGL ones). You "could" run a D3D game on a 4MB S3 Virge if you wanted, but if you did, the transparency around objects would show as black boxes and it would generally look awful (worse than software rendering).
I explained that in the post you quoted. But yeah, there is a difference between "can't just run on existing hardware" and "is disabled in-game on existing hardware because the result would be unusable". It makes sense that DICE would disable the option if the result wouldn't be viable, as not doing so would just result in complaints.
So, realtime raytracing has been around for years already in the industrial and content creation sectors. It's usually termed "Interactive Raytracing", and all the existing software uses either CUDA or OpenCL/Metal. Certainly they are not rendering frames in 16ms on consumer-level cards, but they are also casting many, many more rays compared to the hybrid raster/RT method that we're seeing here.
I want to know what's different about RTX or DXR that it now requires special hardware and can't just run on existing compute cores. Early performance indicators of running RTX on Turing's tensor cores seem hardly a massive departure from what we already had in existing compute-accelerated raytracers.
It can't be the denoising, as AMD already have that running on OpenCL too.
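On the "why special hardware" question: DXR is just an API contract, and the driver decides what actually executes it. Microsoft even shipped a compute-shader fallback layer early on for cards without RT hardware; an application only ever sees a support tier. A minimal sketch of the standard capability check in D3D12 (C++, assuming a recent Windows SDK; error handling trimmed):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    // Create a D3D12 device on the default adapter.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 __uuidof(ID3D12Device), (void**)&device))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // DXR support is reported through the OPTIONS5 feature struct.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                &opts5, sizeof(opts5));

    if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::printf("DXR tier 1.0+ reported by the driver.\n");
    else
        std::printf("No DXR support reported (RaytracingTier == NOT_SUPPORTED).\n");

    device->Release();
    return 0;
}
```

Note the check says nothing about how the rays get traced; a driver is free to report tier 1.0 and implement it on plain compute, which is exactly why "requires special hardware" is a policy/performance question rather than an API one.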
Won't ray tracing be pointless/laggy when VR kicks off more and takes more of the market share?
A few things. First, games don't need as many rays per pixel, but they do need more FPS. While you can get away with Interactive Raytracing at 1 FPS or less in the industrial sector, that's useless for games, which need 30 to 60+ FPS. It doesn't require special hardware to run; it's just that the special hardware is massively faster at games and faster at industrial "Interactive Raytracing". This special hardware has not only made RT practical for games, it also massively speeds up industrial ray tracing. It's not that it cannot run on existing compute cores; it's just that those cores are so slow that some devs have decided not to allow it via software. In some cases you can choose to run the RTX games/apps on older existing compute cores, but it's not a good experience due to the slow speeds.
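To put rough numbers on that trade-off, here's a back-of-the-envelope sketch (the resolution, frame rates and rays-per-pixel figures are my own illustrative assumptions, not measured values). Total ray throughput is pixels × rays-per-pixel × FPS, so a game at 60 FPS with very few rays per pixel needs throughput in the same ballpark as an industrial renderer at 1 FPS with hundreds:

```cpp
#include <cstdio>

int main() {
    const double pixels = 2560.0 * 1440.0;  // 1440p frame

    // Industrial "Interactive Raytracing": low FPS, many rays per pixel.
    const double industrialFps = 1.0;
    const double industrialRpp = 256.0;     // assumed samples per pixel

    // Game-style hybrid raster/RT: high FPS, very few rays per pixel.
    const double gameFps = 60.0;
    const double gameRpp = 2.0;             // assumed: reflections + shadows

    std::printf("industrial: %.2f Grays/s\n",
                pixels * industrialFps * industrialRpp / 1e9);
    std::printf("game:       %.2f Grays/s\n",
                pixels * gameFps * gameRpp / 1e9);
}
```

With these assumed figures the two workloads come out at roughly 0.9 vs 0.4 Grays/s, which is the point above: the hardware budget is similar, it's just spent on frame rate instead of rays per pixel.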
What do you think of Nvidia's efforts here with ray tracing, Pottsey; are you impressed? Do you think there is a chance we could get 60+ FPS at 1440p with optimisation? I'd be interested to know how this compares to what Imagination were doing.
Yes, it's impressive, and yes, in theory we could get 60 FPS at 1440p. The drivers are not optimised, and on the software side the games are quick patches, in some cases less than two weeks' worth of coding, without being optimised. The devs have also not had a chance to learn how best to code for RT yet, and no one has built a game from the ground up with RT in mind; so far it's all been bolt-ons. There is for sure room for speed improvements. It all comes down to how the devs use and implement RT. Even after all the above, I still expect there will be some slow cases.
Yes, but as I sort of hinted at in my last post, the existing interactive raytracers do not use hybrid raster techniques, casting rays only for reflections/specular/shadows or whatever RTX is doing. They render the entire scene using raytracing. This is orders of magnitude more compute intensive than what we've seen so far from the RTX implementations, and based on that I am still not entirely sold on the idea that this new method using tensor cores is really, truly a whole lot faster than existing CUDA/OpenCL implementations.
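To make "orders of magnitude" concrete, a quick sketch comparing rays per pixel; the samples-per-pixel and bounce counts below are assumed, typical-looking figures, not taken from any particular renderer:

```cpp
#include <cstdio>

// Approximate rays per pixel for a full-scene path tracer:
// each sample traces a path of avgBounces segments, plus one
// shadow ray per bounce if next-event estimation is used.
double fullSceneRaysPerPixel(double spp, double avgBounces,
                             bool nextEventEstimation) {
    return spp * avgBounces * (nextEventEstimation ? 2.0 : 1.0);
}

int main() {
    double full = fullSceneRaysPerPixel(64.0, 4.0, true); // assumed settings
    double hybrid = 2.0; // assumed: ~1 reflection + ~1 shadow ray per pixel
    std::printf("full-scene path tracing: %.0f rays/pixel\n", full);
    std::printf("hybrid raster + RT:      %.0f rays/pixel\n", hybrid);
    std::printf("ratio: ~%.0fx\n", full / hybrid);
}
```

Under those assumptions the full-scene renderer casts a few hundred times more rays per pixel than the hybrid approach, i.e. roughly two orders of magnitude, which is why the two workloads aren't directly comparable.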
Yeah, a little voice in the back of my head was telling me I was using the wrong terminology! My bad.
Still, my point stands: when you consider it objectively against what already exists, it doesn't seem all that impressive.
I'm just not seeing how this is particularly groundbreaking?
Well, considering VR's been kicking off for over 20 years now and it's still regarded as nothing more than a gimmick by the mainstream, I wouldn't worry too much about it.
DLSS is part of RTX; it's why half of the RTX games have no ray tracing but are still "RTX games": they're using DLSS.
As for ray tracing, Nvidia mention the ray tracing denoiser, which looks like it can be (or is) done on the tensor cores *shrugs*
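For anyone wondering what "denoising" means here: hybrid RT casts so few rays that the raw output is speckled, and a filter reconstructs a clean image from it. A deliberately crude sketch of the idea below, just a box blur; real RT denoisers are edge-aware, temporal, and sometimes ML-based, and nothing here is Nvidia's actual filter:

```cpp
#include <cstdio>
#include <vector>

// Toy 5x5 box-blur "denoiser": averages each pixel with its
// neighbourhood, trading sharpness for lower variance. The basic
// job of a real RT denoiser is the same: turn a sparse, noisy
// ray-traced signal into a smooth image.
void denoise(const std::vector<float>& in, std::vector<float>& out,
             int w, int h) {
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f; int n = 0;
            for (int dy = -2; dy <= 2; ++dy)
                for (int dx = -2; dx <= 2; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    sum += in[ny * w + nx];
                    ++n;
                }
            out[y * w + x] = sum / n;
        }
}

int main() {
    // 8x8 buffer: flat grey with one bright "firefly" (classic RT noise).
    const int w = 8, h = 8;
    std::vector<float> img(w * h, 0.5f), out(w * h);
    img[3 * w + 3] = 5.0f;
    denoise(img, out, w, h);
    std::printf("firefly before: %.2f, after: %.2f\n",
                img[3 * w + 3], out[3 * w + 3]);
}
```

The firefly gets averaged down from 5.0 to roughly 0.68 here; the catch is that a naive blur also smears edges, which is exactly the problem the fancier (and possibly tensor-core-accelerated) denoisers exist to solve.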
RTX isn't using tensor cores for the ray tracing; DLSS uses the tensor cores, RTRT uses the RT cores.
Are you aware that RTX also boosts the speed of the existing interactive raytracers for the entire scene, while doing that orders-of-magnitude more compute-intensive workload?