AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Nvidia was in the original Xbox and the PS3. Then MS and Sony vowed never to work with them again. Now they're in the Switch as well.


The only reason NVIDIA was in the PS3 was that Sony DRASTICALLY overrated what the Cell CPU was capable of. Originally Cell was going to handle CPU tasks as well as graphics and audio. Then the crack wore off and they had to go running to NVIDIA late in the game to get a modified GeForce 7-series part.

If you believed the Sony hype, Cell was the be-all and end-all, but it never remotely worked out that way because it was a nightmare to code for. Very few games ever really made use of the system's potential. Naughty Dog and the Uncharted games were one of the few devs/franchises that got to grips with it and showed what it could do.

AFAIK it was only MS that basically swore never to work with NVIDIA again because they refused to budge on pricing; I don't think Sony really has any beef with them.
 
Yes there was. It was licensing and royalty fees; Nvidia wanted kickbacks and some other nonsense back then. I don't remember all of it, but from what I recall it came off as Nvidia wanting ownership of Sony.
 

I can't see there being any beef. After all, we have evidence that prior to the PS5 announcement Nvidia was still sending engineers to Sony HQ to talk business proposals. Obviously AMD won, probably because Nvidia is notorious for not budging on price, and of course it's just easier to use a single APU than a CPU and GPU from different vendors.

https://twitter.com/anji_nl/status/1214275521644679168/photo/1

I do wonder, though, why Nintendo went with Nvidia. It may have something to do with the fact that Nintendo has the best margins in the business; they make a crazy amount of money on their hardware and software, so they could afford Nvidia's asking price.
 

There were the laptop issues with bumpgate, and there were issues with pricing when it came to die shrinks, but I'm going from memory.
 
I can't see there being any beef. After all, we have evidence that prior to the PS5 announcement Nvidia was still sending engineers to Sony HQ to talk business proposals. Obviously AMD won, probably because Nvidia is notorious for not budging on price, and of course it's just easier to use a single APU than a CPU and GPU from different vendors.

https://twitter.com/anji_nl/status/1214275521644679168/photo/1

I do wonder, though, why Nintendo went with Nvidia. It may have something to do with the fact that Nintendo has the best margins in the business; they make a crazy amount of money on their hardware and software, so they could afford Nvidia's asking price.


Yup, I think that's probably right. NVIDIA would have thrown their sales/tech teams at it, after all "it's not over till the fat lady sings", but I would imagine AMD's offering was just too compelling, and price would have been a big factor. Console hardware tends to be subsidised, with Sony/MS clawing back money via licensing for games etc. An NVIDIA GPU would have been a huge percentage of the cost of the hardware, and then they'd still need a CPU, which means more cost in terms of manufacturing etc.

While I'm sure NVIDIA would have loved to win the business, I do think it would have eroded its USP of perceived bleeding-edge performance/features with a price tag to match! If you could buy the latest NVIDIA GPU in a subsidised gaming box, it would have eaten into some of its potential PC sales.
 
Distant features should not be sharp, so if DLSS is making thin cables and fences hundreds of meters away from the camera look sharp, then DLSS is killing the realism of the image.

I like that type of sharp scene and imagery in games, though, because if you look into the distance IRL your focus adjusts, but techniques like depth blurring take that away; they assume you are looking at something close up most of the time.

DLSS has a **** poor adoption rate though, so I question its value. If I can't use it in the majority of games I play then it's not particularly useful. They need a way to backport DLSS to older titles and Early Access games that doesn't involve developer intervention before they can start chirping about it to consumers. By all means go sell it to developers, but if I can't use it, it's just smoke and mirrors I'm afraid.
 
No, it doesn't; that's my point.
Are you not actually thinking of Radeon Image Sharpening? The 5700 XT launch went into detail about this: RIS is a setting in the driver which works everywhere, while FidelityFX is a toolset that requires developer integration.

So if you're using something in every game, then you're using Radeon Image Sharpening, not FidelityFX.
 
Why are people still talking about sharpness? DLSS isn't "sharpening". Go right ahead, as someone suggested: take an image and sharpen it in Photoshop or in-game with any of the post-processing filters available (ReShade / Freestyle etc.); the edges will start showing artifacts and get even more jaggy than before. They won't add pixel information that isn't there due to the lack of resolution, unlike DLSS. DLSS upscales the lower-res image to native res (4K in this case) while also reconstructing and adding pixels as needed via AI. No hard edges and close to perfect anti-aliasing. In motion it's even better because the whole image is stable.

And two issues, referring to the Digital Foundry video. @Chuk_Chuk

1) Why didn't they compare it to native 4K without AA? Because it's pointless. I compared everything myself. Native 4K without TAA is worse than all the other methods; the pixel crawling / jaggies are horrid in motion with all the hard edges in the game (buildings and vegetation). Sure it's slightly sharper, but that's it. Worse in every other way. 4K without AA vs DLSS is almost the same; it has an advantage when it comes to particles but loses in every other case.
2) Why was FidelityFX run at a lower res? That's just how it works: DLSS also runs at a lower res and reconstructs the 4K image, while FidelityFX just upscales with some sharpening. Have the patience to watch the whole thing and understand what's going on before throwing accusations around as if DF are frauds or whatever.

It's like some first argue "sharper is better, i.e. native / FidelityFX", then "DLSS is just sharper, but sharper isn't better because it's unnatural". It's just arguing in bad faith at this point.
 
I had indeed forgotten about that. I also thought it was scrapped, or was that just the crap they tried to pull with AIB partners? Is there a list of who else has signed this?
IIRC, almost all the review sites signed the NDA. They felt they had no choice.

Otherwise, they say, they get to review things a couple weeks after everybody else.

The fact that your review can then be unbiased doesn't seem to matter, at least not as much as getting all those clicks from being first to the party.
 
You do realise a lot of the "native" results are using poor forms of AA, which blur the image. What DLSS is doing is not using blurry AA methods, and instead using sharpening algorithms. If you don't believe me, go into Photoshop, take a native DSLR JPEG (which is soft) and try using USM; it suddenly looks "better". Sharpening is basically using edge detection to find edges and then applying a steep local contrast gradient.

You are deliberately misrepresenting what I am saying. I said DLSS USES sharpening... I NEVER said it WAS 100% sharpening.

I know mates who have worked with machine learning (and computer vision too), so I have actually seen how they had to "train" systems to make them work better. The fact is it isn't AI; it's a machine learning approach using an approximated ANN. There is basically a target image, which is the original high-resolution image, and a lower-resolution input image, and you train the ANN to get as close to the target image as possible. There is no such thing as "better image quality"; that is not how the ANN works in this particular use case, as it's mathematically impossible. What Nvidia is doing with DLSS 2 is training for certain scenarios in a general way (think a game scene with clouds, or another with grass), so for each scenario they can at best get an approximate generalised upscaling algorithm per scene type.
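Roughly, in code terms, the training setup is just "minimise the difference between the network's output and the native-res target frame". Here's a minimal PyTorch sketch of that idea; it's purely illustrative, since Nvidia's actual DLSS network, losses and training data are not public, and every name and number below is made up:

```python
# Minimal sketch of the "train against a high-res target" idea described above.
# The network, loss and data are stand-ins, not Nvidia's actual DLSS setup.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Toy 2x super-resolution net: conv features -> PixelShuffle upsample."""
    def __init__(self, channels=3, features=32, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(),
            nn.Conv2d(features, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a 2x larger image
        )
    def forward(self, x):
        return self.body(x)

model = TinyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-ins for a real dataset: low-res inputs and native high-res "targets".
low_res = torch.rand(4, 3, 270, 480)   # quarter-res frames
target  = torch.rand(4, 3, 540, 960)   # the ground-truth high-res frames

for step in range(3):                   # a few illustrative steps
    pred = model(low_res)
    loss = loss_fn(pred, target)        # "get as close to the target as possible"
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: L1 loss = {loss.item():.4f}")
```

The point is that the high-res frame is the ceiling: the network is rewarded for reproducing it, not for exceeding it.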

You also don't seem to appreciate that generic upscaling is there in Photoshop and other image processing applications. Most DSLR output, for example, is generally very soft looking compared to that from a phone or a compact. Try upscaling a low-complexity image in Photoshop, add sharpening/USM, and compare it to a native RAW processed with zero sharpening (or even a slight blur filter on top): it looks better. This is the same sort of thing smartphones do, so they look really nice and poppy compared to a DSLR/ILC image straight out of the camera.

This is because it tends to not have much sharpening. Again, what is sharpening? It uses edge detection to massively increase local contrast. Human vision is very perceptive of "edges" made in this way.

Lots of the "native" images being used in DLSS comparisons,usually have ****** AA methods which soften the image. These were made for consoles,because they outputted low resolution images with lots of aliasing,so these AA methods softened the image and made them less garish. These were also computationally less expensive than MSAA,etc. DLSS works in multiple steps - firstly is the actual upscaling step,then there is probably some form of AA applied and other step is sharpening.
 

Just after 14 minutes he compares 4K native, FXAA and TAA; the native image has noticeable jaggies. Given that he recommends 4K with TAA as the best native solution, does it not then follow that it makes sense for him to compare DLSS against his recommended native settings? All screens, where appropriate, are clearly labelled 4K + TAA, so there is no deception going on.


Just watched the section from 15 mins and it just seems like an AA comparison. Native was always with TAA, and the issues he points to seem to me to be issues with TAA. If the argument is that DLSS is a new form of AA then that's a different discussion to be had. Why did he never do a comparison to straight 4K without TAA? It seems odd not to do that.

You just missed that part, right where you started: there's a comparison just after the 14-minute mark. Here's a crappy screen cap:

[Screenshot: 4K native vs FXAA vs TAA comparison from just after the 14-minute mark]



Why was contrast adaptive sharpening run at lower than 4K res in his test (going by my understanding of his statement just before he begins to discuss CAS)? CAS is supposed to fix the issues of TAA according to the AMD website, so it would have been interesting to see it run at native 4K.

I believe that's just how CAS works, in this game at least; he said it automatically sets it to about 75%. I guess it's a bit like DLSS in the sense that if you're playing at 4K output you don't have a choice: it just sets the source to 1440p.
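For what it's worth, a quick bit of arithmetic on those figures, assuming the scale factor applies per axis (which is how resolution scaling is usually specified): 75% of 2160p would be 1620p, while a 1440p source is roughly 67% per axis, so one of the two numbers quoted is presumably approximate.

```python
# Internal render resolution for a given output resolution and per-axis scale factor.
def internal_res(out_w, out_h, scale):
    return round(out_w * scale), round(out_h * scale)

out_w, out_h = 3840, 2160                       # 4K output

print(internal_res(out_w, out_h, 0.75))         # (2880, 1620) -- "about 75%"
print(internal_res(out_w, out_h, 2560 / 3840))  # (2560, 1440) -- a 1440p source, ~67% per axis
```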


As mentioned before, I'm personally very excited to see what AMD can bring to the table with e.g. DirectML... I think this sort of technology is fundamentally important in driving forward high-resolution, high-refresh-rate displays for VR, at least until dynamic eye-tracked foveated rendering is a solved and affordable problem. I'm not really interested in whether it's microscopically better or worse... if it can deliver even a close-to-native experience at high resolutions and high refresh rates, it's exactly what I need to push the new G2 I'm getting in September, and no doubt even higher-resolution headsets in coming years.

IF rumours about DLSS 3.0 working inherently on any game that uses TAA are true, then I really hope AMD has something competitive up their sleeves.
 
Read what I said fully; you really need to think about how DLSS works. It already has AA and sharpening added to the image as part of the processing step. The ANN at best can only output the same image as the target.

So the target Nvidia sets in training the ANN, per scene, is that native image.

Plus, please stop saying "better than native"; that is not how ANNs work. Is it better than a native image with the best forms of AA? I doubt it; at best maybe close. Also, the problem with upscaling has always been artifacts within the body of the image, outside the edges.

Edit!!

That is the whole point I made: you are focusing on edges because human eyesight is very perceptive of these things. This is why sharpening and USM exist in Photoshop and other software.

Back in the film days you didn't have these options. This is why lower-resolution digital files looked better than higher-resolution film: because of the "softness" of film due to grain, and the ability of digital to use "sharpening" on edges.

So when you add AA + sharpening, it looks better even if the image isn't actually any better.

Which is why consumer camera pictures can look better than unprocessed DSLR/ILC images out of the camera. The resolution is lower and the lenses can't resolve as much, but they look better because they are more heavily sharpened out of the camera.

Second Edit!!

Anyway, I don't disagree that this kind of image upscaling is a useful feature, but the whole "better than native" thing seems more like a marketing tagline, because IMHO it's a bit misleading.

In the end, AMD and Nvidia are only doing this because they are selling GPUs with insufficient performance for a lot of money. They can both play around with image quality to boost their benchmark scores, and AMD will do the same as Nvidia. They will be no better in that regard (it has happened before with ATI).
 
I can't see there being any beef. After all, we have evidence that prior to the PS5 announcement Nvidia was still sending engineers to Sony HQ to talk business proposals. Obviously AMD won, probably because Nvidia is notorious for not budging on price, and of course it's just easier to use a single APU than a CPU and GPU from different vendors.

https://twitter.com/anji_nl/status/1214275521644679168/photo/1

I do wonder, though, why Nintendo went with Nvidia. It may have something to do with the fact that Nintendo has the best margins in the business; they make a crazy amount of money on their hardware and software, so they could afford Nvidia's asking price.

AMD didn't have anything at the time of the Switch to compete with Tegra. Ryzen was just becoming a thing and the next console generation wasn't even being talked about.
It's only recently that AMD's APUs have made their way to mobile.
 

Apparently Nvidia offered them a good deal, as the Tegra chip used wasn't even the latest one. The Tegra X1 was a 2015-era chip made on TSMC 20nm, which is a planar bulk process (no FinFETs). By the time the Switch launched in 2017, 20nm was a relatively cheap process node that wasn't in big demand. The Cortex-A50 series CPUs in the Tegra X1 were designed back in 2012. By then Nvidia had the better Tegra X2, with Denver cores and a Pascal-based IGP.
 
The only reason NVIDIA was in the PS3 was that Sony DRASTICALLY overrated what the Cell CPU was capable of. Originally Cell was going to handle CPU tasks as well as graphics and audio. Then the crack wore off and they had to go running to NVIDIA late in the game to get a modified GeForce 7-series part.

If you believed the Sony hype, Cell was the be-all and end-all, but it never remotely worked out that way because it was a nightmare to code for. Very few games ever really made use of the system's potential. Naughty Dog and the Uncharted games were one of the few devs/franchises that got to grips with it and showed what it could do.

I remember it well. Having read a whitepaper not that long ago, I personally think it was the lack of people able to develop for it and get the most out of its potential. It wouldn't have been popular, especially if people in the industry knew it was going to be dropped like a sack of spuds further down the line (which it clearly was). So I think it was not overrated, more that it was too steep a learning curve (too hard to perfect) and ahead of its time.
 


So, 120 CUs on their pro cards confirmed. That was rumored a while back. It appears it won't be a gamer card after all. I still wonder what kind of performance this monster card has, though, as a point of reference.

Another interesting tidbit is that it's only 300 W with 32GB of ECC VRAM at 1.2 TB/s of bandwidth!!!! Impressive coming from Radeon.
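As a back-of-the-envelope sanity check, that 1.2 TB/s figure lines up with a 4-stack HBM2 setup running at around 2.4 Gbps per pin; note the memory configuration here is my assumption, not something stated in the rumour:

```python
# Back-of-the-envelope check on the quoted 1.2 TB/s figure, assuming a
# 4-stack HBM2 configuration (4096-bit total bus); the configuration and
# per-pin data rate are assumptions, not details from the rumour itself.
bus_width_bits = 4 * 1024          # 4 HBM2 stacks x 1024-bit each
data_rate_gbps = 2.4               # per-pin data rate in Gbit/s (assumed)

bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"{bandwidth_gb_s:.0f} GB/s ~= {bandwidth_gb_s / 1000:.2f} TB/s")  # ~1229 GB/s ~= 1.23 TB/s
```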
 
I remember it well. Having read a whitepaper not that long ago, I personally think it was the lack of people able to develop for it and get the most out of its potential. It wouldn't have been popular, especially if people in the industry knew it was going to be dropped like a sack of spuds further down the line (which it clearly was). So I think it was not overrated, more that it was too steep a learning curve (too hard to perfect) and ahead of its time.

There seems to be some idealism when it comes to game engines as well. There are, and always will be, some core parts of a game engine and gameplay logic that don't take advantage of multiple threads or Cell-type architectures, and that won't change no matter the wishful thinking of some. You can enhance a game by adding features that take advantage of multiple cores, such as advanced AI, physics, and more complex rendering/visual features, but you always have to get the core of the engine and the core gameplay mechanics working well in the first place.
 