Looks good in this demo video
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
DLSS didn't even run on the tensor cores before 2.0; it ran on the shader cores. Now, DLSS was pretty mediocre before 2.0, but that's not to say it was mediocre because it ran on the shader cores. Nvidia also changed a bunch of other things to improve it with the move to 2.0, not just shifting the work to the tensor cores. So really you only have Nvidia's word that DLSS 2.0 couldn't work on the shader cores too, and now Intel are saying that it (or something like it) can. Given Nvidia's long, storied history of proprietary technologies and doing everything they can to gain an advantage through features that only work on their hardware, I don't think the idea that the tensor cores aren't required is much of a stretch.

I've tried reading, but I guess I don't really understand how AMD's or Intel's approaches actually "work". From what I understand of DLSS, it takes a frame rendered at a lower resolution and then uses the tensor cores, along with an algorithm, to fill in the gaps in the upscaling. So the performance impact would show up if you wanted to use RT as well, since the tensor cores are already busy doing the DLSS (if I'm understanding it correctly)?
But on the AMD/Intel front, what actually does the job of running the algorithm and filling in the gaps? Is it the CPU? GPU? RT cores? Where's the computational penalty for using it, as something has to be doing the work?
That, plus of course the tensor cores were developed for Nvidia's big new cash cow, AI. So being able to design something for AI and re-use it for gaming has its advantages for them.
Looks interesting, curious to see where things are at once it's available.

The additional performance hit on non-Arc GPUs looks very minor though, assuming Intel's charts are to be believed (hey, there's always a first time). It would still provide a massive performance uplift compared to native resolution rendering. If everything Intel are saying about this is true, DLSS is going the way of G-Sync modules. There'd be no reason (apart from Nvidia bribes) for developers to implement it over XeSS, given the latter works with far more GPUs.
FSR isn't really comparable to the other two. It's a much more basic upscaling and sharpening filter with no temporal element. Any GPU can do that without breaking a sweat, which is why FSR works on basically everything and has little performance hit. It still produces a notably better image than just running at a lower resolution and letting your monitor upscale things, though. DLSS and XeSS are much more advanced, using AI and a trained neural network to reconstruct a lower resolution image into a higher resolution one, with a temporal element that FSR completely lacks. XeSS uses Intel's XMX math units when running on one of their cards, but is also capable of running on shader cores on competitor hardware. Intel released a slide yesterday that suggested that whilst this means XeSS will work best and give the most performance uplift on Arc cards, the difference between the two isn't huge and it'll still provide a massive benefit even when running on other hardware.
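To make the distinction concrete, here's a minimal Python/NumPy sketch of the two ideas. This is not how FSR, DLSS or XeSS are actually implemented (the function names, blend factor and sharpening amount are all made up for illustration); it just contrasts a purely spatial upscale-and-sharpen pass with a temporal approach that accumulates detail across frames:

```python
import numpy as np

def spatial_upscale_and_sharpen(low_res, scale=2, amount=0.25):
    """Rough FSR-1.0-flavoured idea: a purely spatial upscale followed by an
    unsharp-mask style sharpening pass. No data from previous frames is used."""
    up = low_res.repeat(scale, axis=0).repeat(scale, axis=1)      # crude spatial upscale
    blur = (np.roll(up, 1, 0) + np.roll(up, -1, 0) +
            np.roll(up, 1, 1) + np.roll(up, -1, 1)) / 4.0         # cheap blur of the result
    return up + amount * (up - blur)                              # sharpen: boost the detail

def temporal_reconstruct(low_res, history, scale=2, blend=0.1):
    """Rough DLSS/XeSS-flavoured idea: blend the newly upscaled frame into a
    full-resolution history buffer so detail accumulates over several frames.
    The real techniques use motion vectors plus a trained network to decide,
    per pixel, how much history to keep; a fixed blend factor stands in here."""
    up = low_res.repeat(scale, axis=0).repeat(scale, axis=1)
    return (1.0 - blend) * history + blend * up

# Toy usage: a 960x540 greyscale frame upscaled 2x, with a persistent history buffer.
frame = np.random.rand(540, 960).astype(np.float32)
history = np.zeros((1080, 1920), dtype=np.float32)
print(spatial_upscale_and_sharpen(frame).shape)    # (1080, 1920)
print(temporal_reconstruct(frame, history).shape)  # (1080, 1920)
```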
So:
DLSS uses Tensor Cores (but used to use shaders)
XeSS uses Intel's version of Tensor Cores (the XMX units)
FSR uses Shaders?
So surely the performance hit from FSR is bigger, given that it's using raw GPU performance to achieve what it does? But does that hit mean it would be less of a performance penalty to just run the game at a lower resolution and rely on monitor upscaling? Would AMD not get better performance from FSR if they adapted it to use the RT cores on the 6000 series?
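For a sense of scale, here's a toy calculation with made-up frame times (these aren't measurements from any particular game or GPU); it just illustrates that the shader cost of an FSR-style pass is small compared to the savings from rendering fewer pixels, so skipping it and relying on monitor upscaling only buys back a few fps:

```python
# Purely hypothetical frame times, for illustration only (not benchmark data).
native_4k_ms      = 25.0   # render the whole frame at 4K
internal_1440p_ms = 12.0   # render the same frame at a lower internal resolution
upscale_pass_ms   = 1.0    # assumed cost of an FSR-style upscale + sharpen pass

native_fps         = 1000 / native_4k_ms
monitor_scaled_fps = 1000 / internal_1440p_ms                      # let the monitor stretch it
upscaler_fps       = 1000 / (internal_1440p_ms + upscale_pass_ms)  # GPU runs the upscale pass

print(f"Native 4K:              {native_fps:.0f} fps")
print(f"1440p, monitor scaling: {monitor_scaled_fps:.0f} fps")
print(f"1440p + upscale pass:   {upscaler_fps:.0f} fps")
```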
Ugh, why can't nVidia just open this up and integrate it into the DirectX/Vulkan specifications, so we can be free of multiple different technologies?
The 'DP4a' entry there essentially represents it running on an AMD or Nvidia card. As you can see, it takes roughly twice as long to do the same work without XMX acceleration, but in the grand scheme it's still a very small hit and way faster than rendering a native frame (a 4K one in this example). So you'll still see a large performance uplift when using XeSS on non-Intel cards. Again, assuming everything Intel is saying proves to be accurate. In terms of a deep dive on how it (and DLSS) works, this article covers it well:
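For anyone wondering what 'DP4a' actually is: it's an instruction supported on recent AMD and Nvidia shader cores that does a four-way dot product of packed 8-bit integers with a 32-bit accumulate, which is exactly the kind of math a quantised neural network chews through. Here's a minimal emulation of the operation's semantics in Python (illustration only, not GPU code; the example values are arbitrary):

```python
import numpy as np

def dp4a(a, b, acc):
    """Emulate DP4a: multiply four pairs of signed 8-bit values and add the
    results to a 32-bit accumulator. Shader cores do this in one instruction;
    dedicated matrix units (Tensor cores / XMX) perform far more of these
    multiply-accumulates per clock, which is where the XMX advantage comes from."""
    a = np.asarray(a, dtype=np.int8).astype(np.int32)
    b = np.asarray(b, dtype=np.int8).astype(np.int32)
    return int(acc) + int(np.dot(a, b))

# One multiply-accumulate step, as it might appear inside a quantised network layer:
print(dp4a([12, -7, 3, 100], [5, 9, -2, 1], acc=1000))  # 1000 + 60 - 63 - 6 + 100 = 1091
```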
The same reason Nvidia took their own sweet time to support Freesync.
Confirmed to also work on Nvidia and AMD GPUs, but with a bigger performance hit than on Intel's own cards.