Intel demos XeSS super resolution - open AI upscaling that AMD/Nvidia can use

If AMD had promoted it first, odds are it wouldn't have gone anywhere, as they've introduced some stuff in the past that gets used in a game or two and then vanishes - TrueAudio being one good example. The only "hate" for ray tracing is the performance hit and the current gimmicks in place to upscale lower-res images to make ray tracing usable at all, especially considering the end result in a lot of cases barely looks any different from raster methods.

Yup, AMD didn't bother much to promote and support its tech on the PC. BF4 was the exception with Mantle. Even with TrueAudio in Thief, it was enabled or disabled depending on which driver you used.

I think ray tracing was already being discussed since it's in the new consoles and DX, so Nvidia was aware; basically they just stole the thunder from the new consoles with an early implementation.

Ray tracing looks barely different because of the developers, not the tech. That's my point.

The same can be said about Nvidia pushing developers not to implement better non-RT techniques in games, to make RT look better in comparison.

Perhaps - I wouldn't be surprised - but devs normally don't care much beyond the console tech anyway.

Honestly, couldn't give a monkey's about it one way or the other; it's simply a crutch for hardware not having the required grunt for a feature that came out a few years earlier than it should have. Don't care which vendor is pushing it, their "better than native" pish is just that, pish.

You can always play those hybrid games with RT off and enjoy the higher FPS from upscaling techniques. Players with older cards can postpone the need to upgrade even further, so bring it on - even more so now that GPU prices are just ridiculous.

Right now I'm playing (again) No Man's Sky with DLSS, as I need a bit more performance at higher resolutions. Don't know if this one has RT.
 
@Calin Banc when you're 20% of the market and your rival is 80%, it's difficult to get your technology established; no one is interested, and why should they be? You're not relevant.

Mantle, however, did work. The intention was always for it to be open source so everyone could use it, but with AMD in control that was never going to happen, so AMD did the only thing that would make it happen: they gave it away to the Khronos Group.
 
@humbug, Intel starts from 0% (in the dedicated GPU market), so this is a nice first step in getting attention.
AMD could always have motivated devs to use its tech. It chose not to.

I remember AMD making a lot of noise about TrueAudio and a lot of people, including tech journalists, laughing at them.

Same, but to a lesser extent, with Mantle: people like Ryan Shrout made hit-piece videos on it deliberately designed to make it look like it made absolutely no difference, and then concluded exactly that - and AMD paid him to do one of those articles. The only mistake AMD made with that was not chastising him for that crap, publicly.

Not one tech journalist did any real testing on Mantle. So I did.

 
@humbug I know about Mantle; I could play at 60fps in Eyefinity on an R290 because of it. I also know the BS of testing ultra-settings single-player scenarios and declaring there is no change. That is why I mentioned Mantle - it was big since it was in BF4.

But look at demos such as Froblins, which accelerated AI directly on the GPU and never caught on, although it could have been huge for gaming.
The response to PhysX was Bullet, which, again, never really took off (at least in gaming). :)

LE: and a plus would be the variable refresh rate tech, which is a good alternative to G-Sync - forgot about that one.
 
You're right. It was also my contention back then, and for a long time after, that AMD just weren't being aggressive enough in calling people out for their male bovine manure - a lot of it was going on at the time.

Or in pushing their technology. AMD back then put a lot of effort into new technologies to get noticed - BF4 was about the biggest advert for Mantle you could get - so they did try, but I think they should have done a lot more, whatever it takes to get tech journalists on the same page, because one knows their rivals did exactly that.
 

It'd be nice if we got a proper update on the adaptive sync technology, instead of something based on beating older technologies designed for other things into some form of adaptive sync - but I'm guessing a lot of the monitor manufacturers don't want to play ball on something which is "more effort".
 

You mean deliberately over-engineered to create hardware lock-in and a new revenue stream.

Well it didn't work.
 

Didn't it? I'm very happy Nvidia now supports AMD cards on their newer G-Sync line. If I were to buy one of the high-end displays, once I sort out a custom desk, I'd want a proper G-Sync module.
 

I'm currently using a Philips 436M6 (G-Sync Compatible) alongside a Dell S2716DG (G-Sync FPGA). I'm generally pretty happy with the adaptive sync on the Philips, and it's one of the better adaptive sync implementations aside from only having a 48-60Hz refresh range - some monitors are worse (including ones with the full refresh range) - but at the end of the day it's no G-Sync with FPGA.

I don't get how eagerly so many posters in this section settle for second best. Not jumping on the Nvidia G-Sync train is fine, but like with FSR etc., so many seem to settle - with a lot of bigging it up - for an inferior version, with very little demand for the technology to be the best it can be, and that makes no sense to me.

Adaptive sync, for instance, is based on pushing features like panel self-refresh and other technologies - unrelated to adaptive sync but happening to overlap with the needed functionality - into a type of use they weren't originally designed for. While it works, there are a lot of improvements that could be made if it had been designed from the ground up towards these ends.
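
As a rough aside on why a narrow window like that 48-60Hz range is limiting: drivers can keep slow content inside the variable refresh window by showing each frame a whole number of times, but only if some multiple of the frame rate actually lands in the window. A toy sketch of the idea - not any vendor's actual algorithm, and the numbers are purely illustrative:

```cpp
// Toy illustration only (not any vendor's actual algorithm): a frame rate can be
// kept inside a monitor's variable refresh window by repeating each frame an
// integer number of times, but only if some multiple of it lands in the window.
#include <cstdio>

// Returns the smallest repeat factor that maps `fps` into [minHz, maxHz], or 0 if none fits.
int vrrRepeatFactor(double fps, double minHz, double maxHz)
{
    for (int n = 1; n * fps <= maxHz; ++n)
        if (n * fps >= minHz)
            return n;
    return 0;
}

int main()
{
    // 35fps fits a 48-144Hz panel by doubling each frame to 70Hz...
    std::printf("35fps on 48-144Hz: x%d\n", vrrRepeatFactor(35.0, 48.0, 144.0)); // prints x2
    // ...but on a 48-60Hz panel neither 35Hz nor 70Hz is in range, so VRR drops out.
    std::printf("35fps on 48-60Hz:  x%d\n", vrrRepeatFactor(35.0, 48.0, 60.0));  // prints x0
    return 0;
}
```

With a 48-60Hz window there's a dead zone between roughly 30 and 48fps where nothing fits - exactly where demanding games tend to land - whereas a range whose maximum is at least double its minimum never has that gap.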
 
More info on XeSS

* The network does not require per-game training, so like DLSS 2.0+ it's a plug-and-play solution for any game. Intel says it's also working on version 2 of XeSS.

* The XeSS neural network has been trained at 64 samples per pixel - that's a 4x higher sample count than Nvidia uses for training. Intel says its goal for XeSS is to be market-leading and absolutely beat DLSS.

* XeSS runs on any GPU as long as it supports Shader Model 6.4 or newer and you're on a version of Windows that supports it - that means an up-to-date Windows 10 or Windows 11. For Nvidia, that means any GPU from Pascal (GTX 1000 series) and newer is supported.
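
As an aside, here's a minimal sketch (my own, nothing from Intel) of how a game could check that shader model requirement at runtime through D3D12, assuming you already have a created ID3D12Device and a recent Windows SDK; the function name is made up:

```cpp
// Hedged sketch, not Intel's code: query the highest shader model D3D12 reports,
// since the cross-vendor XeSS path reportedly needs SM 6.4+ (6.6 preferred).
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

bool SupportsXessStyleDp4aPath(ID3D12Device* device)
{
    // Ask for the highest model we know about; older runtimes reject values
    // they don't recognise, so fall back to asking for 6.4 directly.
    D3D12_FEATURE_DATA_SHADER_MODEL sm = { D3D_SHADER_MODEL_6_6 };
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL, &sm, sizeof(sm))))
    {
        sm.HighestShaderModel = D3D_SHADER_MODEL_6_4;
        if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL, &sm, sizeof(sm))))
            return false;
    }

    std::printf("Highest supported shader model: 0x%02x\n",
                static_cast<unsigned>(sm.HighestShaderModel));
    return sm.HighestShaderModel >= D3D_SHADER_MODEL_6_4;
}
```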
 

I do not fully agree with your interpretation. What they said, in the order you wrote it above, is as follows (quotes):
* "There will be XeSS 2.0 at some point, XeSS 3.0 at some point. You know at some point maybe graphics just completely changes and it’s all neural networks." - that does NOT mean they're working on it. They just stated the obvious - things will change in the future and so their tech will have to be adjusted to it. In other part they said it's open source, hence they count on community changing and improving it.
* "So, when NVIDIA says 16k images, I am assuming it translates to the number of samples it has inside a pixel. So from our standpoint, that’s what I can talk about. We train with reference images that have 64 samples per pixel." - Intel assumes here how it works on NVIDIA side of things, as they do not really know, because it's not a public information (sans marketing blabla). They also did not state anywhere they want to beat DLSS or anything like that?
* "So, for DP4a, yes, SM 6.4 and beyond SM 6.6 for example supports DP4a and SM 6.6 also supports these packing intrinsics for extracting 8-bit data and packing 8-bit data. So we recommend SM 6.6." - working on SM 6.4 yes but for best performance SM 6.6 is recommended. For reference, SM 6.4 = Vega and RDNA 1 (AMD); Pascal (NVIDIA). RDNA 2 = 6.5 (AMD). Turing and Ampere = 6.6 (NVIDIA).

In addition, Intel said they will not support tensor cores (or AMD's AI acceleration in the future) because they do not intend to optimise their solution for every single architecture. It's open source, so devs can do that themselves if they want, but Intel believes it's fast enough with DP4a - which is itself already a form of matrix acceleration; people forget the whole GPU is a matrix accelerator of sorts, many times faster than a CPU at such calculations. In most tests I've seen, tensor cores are only 10-20% faster than DP4a, and this seems consistent with the graphs Intel showed comparing their hardware acceleration to DP4a.
That also means NVIDIA could most likely just let DLSS 2 run on DP4a with barely any speed loss, and the whole tensor-core requirement is there just to boost sales of new GPUs. As is typical for them. Perhaps competition from Intel will force them to release it for older GPUs, or even other brands, as well.
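
For anyone wondering what DP4a actually is: it's a single instruction that takes two 32-bit values holding four packed int8s each, does the dot product, and accumulates into a 32-bit integer (exposed as dot4add_i8packed in HLSL Shader Model 6.4 and as __dp4a in CUDA). A plain-C++ model of that one operation, just to make the discussion above concrete - illustrative only, not anyone's shipping code:

```cpp
// Scalar reference model of a single DP4a operation:
// dot product of two packed 4x int8 vectors, accumulated into a 32-bit int.
#include <cstdint>
#include <cstdio>

int32_t dp4a_reference(uint32_t a_packed, uint32_t b_packed, int32_t acc)
{
    for (int i = 0; i < 4; ++i)
    {
        // Extract byte i of each operand and treat it as a signed 8-bit value.
        int8_t a = static_cast<int8_t>((a_packed >> (8 * i)) & 0xFF);
        int8_t b = static_cast<int8_t>((b_packed >> (8 * i)) & 0xFF);
        acc += static_cast<int32_t>(a) * static_cast<int32_t>(b);
    }
    return acc;
}

int main()
{
    // Example: (1,2,3,4) . (5,6,7,8) + 0 = 5 + 12 + 21 + 32 = 70
    uint32_t a = 0x04030201; // bytes (low to high): 1, 2, 3, 4
    uint32_t b = 0x08070605; // bytes (low to high): 5, 6, 7, 8
    std::printf("%d\n", dp4a_reference(a, b, 0)); // prints 70
    return 0;
}
```

Chaining lots of these per clock across the shader cores is what makes int8 inference reasonably fast even without dedicated matrix units, which is the point being made about DP4a above.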
 