Intel demos XeSS super resolution - open, AI-based, and AMD/Nvidia can use it

The additional performance hit on non-Arc GPUs looks very minor though, assuming Intel's charts are to be believed (hey, there's always a first time). It would still provide a massive performance uplift compared to native resolution rendering. If everything Intel are saying about this is true, DLSS is going the way of G-Sync modules. There'd be no reason (apart from Nvidia bribes) for developers to implement that over XeSS, given the latter works with far more GPUs.
 
Not a fan of these upscaling technologies, but this does look like a competent attempt with no glaring issues in the limited demo.
 
I've tried reading up, but I guess I don't really understand how AMD's or Intel's approaches actually work. From what I understand of DLSS, it takes a frame at a lower resolution and then uses the tensor cores, along with an algorithm, to fill in the gaps when upscaling. So the performance impact would mainly show up if you wanted to use RT as well, since the tensor cores are already busy doing the DLSS (if I'm understanding it correctly)?

But on the AMD/Intel front, what actually does the job of running the algorithm and filling in the gaps? Is it the CPU? The GPU? The RT cores? Where's the computational penalty for using it, as something has to be doing the work?
 
DLSS didn't even run on the tensor cores before 2.0. It ran on the shader cores. Now, DLSS was pretty mediocre before 2.0, but that doesn't mean to say it was mediocre because it ran on the shader cores. Nvidia also changed a bunch of other stuff to improve it with the move to 2.0, not just shifting the work to the tensor cores. So really you only have Nvidia's word to go on that DLSS 2.0 couldn't work on the shader cores too, and now Intel are saying that it (or something like it) can. Given Nvidia's long, storied history of proprietary technologies and doing everything they can to gain an advantage through features that only work on their hardware, I don't think the idea that the tensor cores aren't required is much of a stretch.
 
That, plus of course the tensor cores were developed for Nvidia's big new cash cow, AI. So being able to design something for AI and re-use it for gaming has its advantages for them.

On the other hand, it bloated their dies and they have lost the perf/watt crown too (although TSMC's 7nm is better than the cheaper Samsung 8nm they use for consumer Ampere).

As for AI: tensor cores have versatility, but a lot of the newest AI chips have gone full custom, fixed-function silicon, so no guarantees there. Nvidia's software tools are pretty good, though not that Google et al. care when they're willing to develop their own custom silicon for some of the AI tasks Nvidia have previously sold into.
 
Looks interesting, curious to see where things are at once it's available.
 
This is going to kill FSR and DLSS on PC if AMD and Nvidia don't sort themselves out.

XeSS takes the best of AMD and Nvidia and combines it: you get an open source upscaler that works on any GPU, and it's accelerated by AI cores like DLSS.

For AMD to compete, they will quickly need to come up with an FSR 2.0 that is accelerated by AI cores, and Nvidia will need to make DLSS open source and able to run on any GPU.
 
Looking at the uarch slides, it seems Intel has XMX matrix engines which appear geared towards the XeSS work. A little bit similar to tensor cores, I guess.
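
For anyone less familiar with the term: as I understand it, what tensor cores and XMX engines both accelerate is a small-tile matrix multiply-accumulate, D = A @ B + C, on low-precision inputs with a wider accumulator. A rough numpy sketch of that primitive (the 16x16 tile size and FP16/FP32 types here are just picked for illustration, not either vendor's actual spec):

```python
# Illustrative sketch only: the primitive that matrix engines accelerate is a
# small-tile multiply-accumulate on low-precision inputs with a wider
# accumulator. Tile size and data types below are assumptions for the example.
import numpy as np

def tile_mma(a, b, c):
    """One tile-sized matrix multiply-accumulate: d = a @ b + c."""
    # FP16 inputs, FP32 accumulation (a common pattern for this kind of unit).
    return a.astype(np.float32) @ b.astype(np.float32) + c

a = np.random.rand(16, 16).astype(np.float16)  # hypothetical 16x16 input tile
b = np.random.rand(16, 16).astype(np.float16)
c = np.zeros((16, 16), dtype=np.float32)       # accumulator tile
d = tile_mma(a, b, c)
print(d.shape, d.dtype)  # (16, 16) float32
```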

Whilst I don't see matrix cores in RDNA2 consumer cards at the moment, I see that AMD have them in the MI100... so there is every possibility something could make its way into AMD's consumer cards?

If so, AMD and Intel could effectively share the benefits of each other's work, which would in my mind only be a good thing for gamers going forward.

Whilst I get that 4K is often touted as the golden child of gaming at the moment, in the current climate I think a lot of people would be happy with a card that can push really good 1440p through the use of XeSS-type tech rendering at 1080p. So if Intel can bring to the table a really good 1080p-to-1440p performer at a reasonable price point, it could be an attractive option for many who don't want to pay the massive premium currently affecting GPUs.
 
This is great news for consumers/gamers no matter how you look at it. The performance uplift looks phenomenal, and it's open source, so all the more reason not to go with Nvidia's proprietary solution. Competition from Intel in this space is very welcome, and that, coupled with analysts' concerns about a potential oversupply of Nvidia GPUs and a glut as mining comes to an end, can only mean a golden age for pricing pretty soon.
 
Imagine if AMD and Intel have plans in place to help each other with open source. I mean, remember AMD had a Vega GPU inside an Intel CPU before, and nobody ever thought you'd see the two companies come together like that.

AMD is a big fan of open source and Intel have their own history of open source software.

"For more than two decades, Intel has employed thousands of software engineers around the world to ensure open source software delivers, top notch performance, scalability, power efficiency and security on Intel platforms—across servers, desktops, mobile devices and embedded systems."
https://01.org/about
 
So:

DLSS uses Tensor Cores (but used to use shaders)
XeSS uses their version of Tensor Cores
FSR uses shaders?

So surely the performance hit from FSR is bigger, given that it's using raw GPU performance to achieve what it does? But does that hit mean it would be less of a performance penalty to just run the game at a lower resolution and rely on monitor upscaling? Would AMD not get better performance from FSR if they adapted it to use the RT cores on the 6000 series?

Ugh, why can't nVidia just open this up and integrate it into the DirectX/Vulkan specifications, so we can be free of multiple different technologies. :(
 
FSR isn't really comparable to the other two. It's a much more basic upscaling and sharpening filter with no temporal element. Any GPU can do that without breaking a sweat, which is why FSR works on basically everything and has little performance hit. It still produces a notably better image than just running at a lower resolution and letting your monitor upscale things though.

DLSS and XeSS are much more advanced, using AI and a trained neural network to reconstruct a lower resolution image into a higher resolution one, with a temporal element that FSR completely lacks. XeSS uses Intel's XMX math units when running on one of their cards, but is also capable of running on shader cores on competitor hardware.

Intel released a slide yesterday that suggested that whilst this means XeSS will work best and give the most performance uplift on Arc cards, the difference between the two isn't huge and it'll still provide a massive benefit even when running on other hardware.

[Attached slide: Intel Architecture Day chart comparing XeSS frame times with XMX vs DP4a]
The 'DP4a' entry there is essentially representing it running on an AMD or Nvidia card. As you can see, it takes roughly twice as long to do the same work without XMX acceleration, but in the grand scheme it's still a very small hit and way faster than rendering a native frame (a 4K one in this example). So you'll still see a large performance uplift with XeSS using it on non-Intel cards. Again, assuming everything Intel is saying proves to be accurate. In terms of a deep dive on how it (and DLSS) works, this article covers it well:

https://www.anandtech.com/show/1689...intel-unveils-xess-image-upscaling-technology
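
As a side note on the 'DP4a' label in that chart: DP4a is a GPU instruction that does a dot product of four packed 8-bit integers and accumulates the result into a 32-bit integer in a single operation, which as I understand it is the kind of math the XeSS fallback path leans on when there are no XMX units available. A toy illustration of the arithmetic (definitely not Intel's actual code):

```python
# Toy model of what a single DP4a operation computes: a 4-wide int8 dot
# product accumulated into an int32. One such instruction on the shader cores
# stands in for what the XMX units do on much wider tiles at once.
import numpy as np

def dp4a(a4, b4, acc):
    """acc += dot(a4, b4), where a4 and b4 each hold four int8 values."""
    return acc + int(np.dot(a4.astype(np.int32), b4.astype(np.int32)))

a = np.array([12, -3,  7,  90], dtype=np.int8)   # e.g. packed int8 weights
b = np.array([ 5, 40, -2, -11], dtype=np.int8)   # e.g. packed int8 activations
print(dp4a(a, b, acc=0))  # 12*5 + (-3)*40 + 7*(-2) + 90*(-11) = -1064
```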

Intel are also planning to release everything related to XeSS under an open source license, so if it proves to live up to the hype, that'll be just as good as Nvidia doing the same with DLSS. Better even, since XeSS is designed to work on all kinds of hardware from the off, not just Nvidia GPUs.
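
And to make the FSR vs DLSS/XeSS distinction above a bit more concrete, here's a very rough sketch of "spatial upscale plus sharpen" versus "upscale plus temporal accumulation". Purely illustrative: the real implementations use motion vectors, much better filters and (for DLSS/XeSS) a trained network rather than a fixed blend:

```python
# Crude sketch of the two approaches being contrasted above. Not FSR's or
# DLSS/XeSS's actual algorithms; just the structural difference between a
# purely spatial filter and one that also folds in history from past frames.
import numpy as np

def upscale_nearest(frame, scale=2):
    """Naive spatial upscale: each low-res pixel becomes a scale x scale block."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

def sharpen(img, amount=0.5):
    """Simple unsharp mask, standing in for the sharpening pass."""
    blur = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
            np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

def temporal_accumulate(upscaled, history, alpha=0.1):
    """Blend the newly upscaled frame with accumulated history."""
    return alpha * upscaled + (1.0 - alpha) * history

low_res = np.random.rand(540, 960)                # stand-in for a rendered frame
spatial_only = sharpen(upscale_nearest(low_res))  # FSR-style: current frame only
history = np.zeros((1080, 1920))                  # persists across frames
temporal = temporal_accumulate(upscale_nearest(low_res), history)
```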
 
"no visible quality loss." What utter tosh. You can clear see the difference between 4K and XeSS 4K in their own damn demo. Really irritated that they chose to use that line.

It looks very good in the slow mo shots, but don't oversell it FFS.
 
Confirmed to also work on Nvidia and AMD GPUs, but it will have a bigger performance hit than on Intel's own cards.


Top work, Intel, and it's open source, anyone can use it. Hurray!

Nothing against Nvidia per se, but I hate the way everything they do is proprietary, how everything they do is to try and lock you into their ecosystem. So I'm always happy when someone else does the same thing just as well and makes it available to everyone. In other words, I like seeing others peeing on Nvidia's proprietary campfire.

You're a star Intel.
 