FidelityFX Super Resolution in 2021

Caporegime · Joined: 18 Oct 2002 · Posts: 39,405 · Location: Ireland


Honestly, he's posted the same thing numerous times and the discussion always goes the same way. He says RDNA 2 is a "stripped down, budget console chip", then when it's pointed out that it goes toe to toe with Nvidia's best in raster performance, he falls back on the ray tracing waffle. It's like a merry-go-round of the same gibberish over and over.
 
Permabanned · Joined: 31 Aug 2013 · Posts: 3,364 · Location: Scotland
No, I meant RT, since the only realistic reason for DLSS/FSR is extra RT performance. Or do we really need 300-plus FPS on most titles with DLSS but without RT? You're talking about how console or AMD cards are stripped down; what has that got to do with DLSS?

As someone who only targets 1440p/60 I still turn on DLSS as it reduces power/heat output, while providing excellent AA. Death Stranding being a good example. I also look forward to its use alongside VR. I also appreciate those who are running 4k+ who want to maintain 100+ FPS without turning down details.

I have already mentioned that Nvidia run DLSS through tensor cores, thus not reducing the render performance of the main GPU. They also do this with RT running through dedicated RT cores, hence the huge performance lead. AMD do not have dedicated cores for FSR hence expectations are not high as mentioned earlier.
 
Associate · Joined: 6 Dec 2013 · Posts: 1,881 · Location: Nottingham
As someone who only targets 1440p/60 I still turn on DLSS as it reduces power/heat output, while providing excellent AA. Death Stranding being a good example. I also look forward to its use alongside VR. I also appreciate those who are running 4k+ who want to maintain 100+ FPS without turning down details.

I have already mentioned that Nvidia run DLSS through tensor cores, thus not reducing the render performance of the main GPU. They also do this with RT running through dedicated RT cores, hence the huge performance lead. AMD do not have dedicated cores for FSR hence expectations are not high as mentioned earlier.
Great, very nice that you get a use out of it.
You're telling me like I don't know how it works. I know how Nvidia's implementation works, thank you. Your speculation about FSR is just that until we SEE it. Dedicated cores are not required for RT or scaling, as proven by both Nvidia and AMD, FOR THE 100TH time.
 
Associate · Joined: 20 Nov 2020 · Posts: 1,120
Ideally you don't want dedicated hardware inside your chip for anything. My bet is that in 3 generations we won't even have dedicated hardware for RT. It is a handicap to use part of the chip for something that is not used all the time instead of adding more compute units that work all the time.
That does not mean the tensor cores in Ampere are a disadvantage; since the performance is equal to Big Navi's, they are an advantage. But an Ampere with more CUs instead of tensor cores would have been an even better option.
I said before that AMD is giving us less hardware for almost the same amount of money, and it is true. I am not saying that Big Navi cards are bad or that they are "console chips", as our friend calls them. They are a big leap compared to the older AMD generation and they can handle almost every game you throw at them, as long as it is not sponsored by Nvidia. There is nothing new under the sun. :D

The problem with RT, and every tech Nvidia were promoting heavily, is that if you focus too much on graphics you can miss other important things a game needs to be a great game. And then you have Days Gone, which was not the most awesome game on PS4 by any means, coming to PC with better ratings than any RT game made so far. This should tell us something about the quality of PC games.
 
Caporegime · Joined: 12 Jul 2007 · Posts: 40,780 · Location: United Kingdom
Honestly, he's posted the same thing numerous times and the discussion always goes the same way. He says RDNA 2 is a "stripped down, budget console chip", then when it's pointed out that it goes toe to toe with Nvidia's best in raster performance, he falls back on the ray tracing waffle. It's like a merry-go-round of the same gibberish over and over.
It gives me a good chuckle every time I read it. :p
 
Permabanned · Joined: 31 Aug 2013 · Posts: 3,364 · Location: Scotland
Great, very nice that you get a use out of it.
You're telling me like I don't know how it works. I know how Nvidia's implementation works, thank you. Your speculation about FSR is just that until we SEE it. Dedicated cores are not required for RT or scaling, as proven by both Nvidia and AMD, FOR THE 100TH time.

I'm telling you how you can benefit from DLSS, as you were under the impression it was only useful when RT is being used.

This whole thread is speculation. My original input was to correct a misunderstanding that DLSS caused input lag.

I never once said that RT required dedicated cores, or even dedicated hardware. Indeed, I had my own attempt at ray tracing ~22 years ago using A86 and Turbo C. What I am saying is that dedicated cores will perform far better as they can run in parallel with other normal GPU workloads, for the 1,000th time :D
 
Associate · Joined: 6 Dec 2013 · Posts: 1,881 · Location: Nottingham
I'm telling you how you can benefit from DLSS, as you were under the impression it was only useful when RT is being used.

This whole thread is speculation. My original input was to correct a misunderstanding that DLSS caused input lag.

I never once said that RT required dedicated cores, or even dedicated hardware. Indeed, I had my own attempt at ray tracing ~22 years ago using A86 and Turbo C. What I am saying is that dedicated cores will perform far better as they can run in parallel with other normal GPU workloads, for the 1,000th time :D
So you're saying a 3070/3080 can't push 1440p/60, it requires DLSS to do so?
Indeed it is speculation, but there is speculation and then there is simply spreading FUD. The whole cut-down stuff is just FUD.

And as to your last paragraph:

"I have already mentioned that Nvidia run DLSS through tensor cores, thus not reducing the render performance of the main GPU. They also do this with RT running through dedicated RT cores, hence the huge performance lead. AMD do not have dedicated cores for FSR hence expectations are not high as mentioned earlier."

That's you, word for word, 3 or 4 posts up. Dedicated cores vs non-dedicated cores: how do you know which will work better until we see it? You may be right, but you may also be wrong. You're talking in absolute terms that dedicated cores are better.
 
Soldato (OP) · Joined: 6 Feb 2019 · Posts: 17,750
I'm telling you how you can benefit from DLSS, as you were under the impression it was only useful when RT is being used.

This whole thread is speculation. My original input was to correct a misunderstanding that DLSS caused input lag.

I never once said that RT required dedicated cores, or even dedicated hardware. Indeed, I had my own attempt at ray tracing ~22 years ago using A86 and Turbo C. What I am saying is that dedicated cores will perform far better as they can run in parallel with other normal GPU workloads, for the 1,000th time :D


AMD still needs to work on the parallel part; currently they don't run in parallel, and it's one after the other to fill the frame buffer. Nvidia, on the other hand, does ray tracing and rasterisation in parallel.
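
As a rough illustration of why that overlap matters — the per-frame costs below are made up for the example, not measured figures from either card:

```python
# Toy frame-time arithmetic: the per-frame costs are invented, purely illustrative.
raster_ms = 8.0   # hypothetical time to rasterise one frame
rt_ms = 6.0       # hypothetical time to trace the rays for that frame

serial_ms = raster_ms + rt_ms        # one workload after the other
parallel_ms = max(raster_ms, rt_ms)  # workloads overlapped on separate units (ideal case)

print(f"serial:   {serial_ms:.1f} ms/frame -> {1000 / serial_ms:.0f} fps")
print(f"parallel: {parallel_ms:.1f} ms/frame -> {1000 / parallel_ms:.0f} fps")
```

Even in this idealised case the overlapped pipeline only saves whatever the shorter workload costs, so how much it matters in practice depends on how heavy the RT pass is relative to the raster pass.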
 
Associate · Joined: 6 Dec 2013 · Posts: 1,881 · Location: Nottingham
AMD still needs to work on the parallel part; currently they don't run in parallel, and it's one after the other to fill the frame buffer. Nvidia, on the other hand, does ray tracing and rasterisation in parallel.
I can't find any evidence supporting this, can you provide any articles etc.? Might actually make some interesting reading; most of the stuff I see online doesn't go into that level of detail.
 
Associate · Joined: 20 Nov 2020 · Posts: 1,120
I can't find any evidence supporting this, can you provide any articles etc.? Might actually make some interesting reading; most of the stuff I see online doesn't go into that level of detail.
Ampere is like a T-34 while Big Navi is like a Panzer. Unfortunately the T-34 won WWII. :D
This is the difference: AMD is like a tank made in the West, built to be efficient, but Ampere is built like a Russian tank => it will reach Berlin no matter how many resources are spent. :)
 
Man of Honour · Joined: 25 Oct 2002 · Posts: 31,772 · Location: Hampshire
Am I being thick or is this just a way to slightly improve visual quality on low resolution monitors? i.e. say you render at 4K then downscale to 1080p, you end up with 4K performance levels and graphics that are somewhere between 1080p and 4K quality? I must be missing something, because it doesn't sound that amazing; presumably you would be better off just running a higher resolution screen anyway? Like, if you've bought a system that can push 4K resolution at good frame rates you should be able to pair a decent monitor with it.

It's badged as a DLSS rival but from what I've read DLSS can actually improve performance at a given resolution, not make it worse.

This may sound like a rhetorical post but I'm genuinely confused on this. I can't see how this would be getting any hype if it works the way I understand it - what's the craic folks, what's the missing piece of the puzzle?
 
Associate · Joined: 20 Nov 2020 · Posts: 1,120
Am I being thick or is this just a way to slightly improve visual quality on low resolution monitors? i.e. say you render at 4K then downscale to 1080p, you end up with 4K performance levels and graphics that are somewhere between 1080p and 4K quality? I must be missing something, because it doesn't sound that amazing; presumably you would be better off just running a higher resolution screen anyway?

It's badged as a DLSS rival but from what I've read DLSS can actually improve performance at a given resolution, not make it worse.

This may sound like a rhetorical post but I'm genuinely confused on this. I can't see how this would be getting any hype if it works the way I understand it - what's the craic folks, what's the missing piece of the puzzle?
No, in theory both techs will render at a lower res and display at a higher res with improved FPS and minimal quality degradation.
For example, with DLSS you render the game at 1080p, where you get 100 FPS, and then display it at 4K/100 FPS instead of the 4K/40 FPS you would get rendering at native 4K.
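
A quick back-of-the-envelope sketch of the pixel counts behind that example (the 1080p internal resolution is just an illustration, not any vendor's published quality-mode figure):

```python
# Compare the shading workload at native 4K with rendering at a lower
# internal resolution and upscaling the result. Purely illustrative numbers.
native_4k = 3840 * 2160
internal_1080p = 1920 * 1080

print(f"native 4K pixels per frame: {native_4k:,}")
print(f"1080p internal render:      {internal_1080p:,}")
print(f"shading workload:           {native_4k / internal_1080p:.0f}x fewer pixels at 1080p")
# The upscaler then reconstructs a 4K image from the 1080p frame (DLSS also
# feeds in motion vectors and previous frames), which is why the frame rate
# ends up much closer to the 1080p figure than to native 4K.
```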
 
Permabanned · Joined: 31 Aug 2013 · Posts: 3,364 · Location: Scotland
So you're saying a 3070/3080 can't push 1440p/60, it requires DLSS to do so?

No, I was telling you how you can benefit from using DLSS.

Indeed it is speculation, but there is speculation and then there is simply spreading FUD. The whole cut-down stuff is just FUD.

For me it's obvious: AMD knew where Nvidia were going with both DLSS and ray tracing, but instead chose to focus on console budgets. Prove me wrong.

Indeed it is speculation, but there is speculation and then there is simply spreading FUD. The whole cut-down stuff is just FUD.

And as to your last paragraph:

"I have already mentioned that Nvidia run DLSS through tensor cores, thus not reducing the render performance of the main GPU. They also do this with RT running through dedicated RT cores, hence the huge performance lead. AMD do not have dedicated cores for FSR hence expectations are not high as mentioned earlier."

That's you, word for word, 3 or 4 posts up. Dedicated cores vs non-dedicated cores: how do you know which will work better until we see it? You may be right, but you may also be wrong. You're talking in absolute terms that dedicated cores are better.

It's true AMD could render at a lower source resolution than DLSS and then, using an upscaling algorithm, produce a better image than DLSS in a shorter period of time without disrupting any of the rendering of the next frame, but my 40 years of coding tells me that's a long shot...

You, on the other hand, would like to argue otherwise because?
 
Man of Honour · Joined: 25 Oct 2002 · Posts: 31,772 · Location: Hampshire
No, in theory both techs will render at a lower res and display at a higher res with improved FPS and minimal quality degradation.
For example, with DLSS you render the game at 1080p, where you get 100 FPS, and then display it at 4K/100 FPS instead of the 4K/40 FPS you would get rendering at native 4K.
OK, if Super Resolution can do the same, that makes more sense: you could render at 720p and upscale to 1440p to get better performance than native 1440p on an RX 480 or whatever.
 
Soldato · Joined: 24 Oct 2005 · Posts: 16,297 · Location: North East
Well, hopefully there's more to it than that, because at the moment you could set 720p in-game and use AMD integer scaling to effectively upscale from 720p to 1440p (or 1080p to 4K), then use RIS to sharpen the lower-res image so it looks better at 1440p/4K. It's not great for 720p to 1440p though; I think it works better with the 1080p to 4K route. But I'm hoping FSR will be a much better method than that and be more DLSS-ish.
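
For anyone wondering what that stop-gap amounts to, here's a minimal sketch of integer (nearest-neighbour) upscaling followed by a crude sharpening pass on a toy greyscale frame. It's a stand-in for the driver's integer scaling plus RIS, not AMD's actual implementation (RIS is built on contrast-adaptive sharpening):

```python
import numpy as np

def integer_upscale(img, factor=2):
    """Nearest-neighbour upscale: each pixel becomes a factor x factor block."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def sharpen(img, amount=0.5):
    """Very crude unsharp mask: boost each pixel against its 4-neighbour average."""
    padded = np.pad(img, 1, mode="edge")
    blur = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
            padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# Toy 720p greyscale frame, upscaled 2x and then sharpened.
frame = np.random.rand(720, 1280)
output = sharpen(integer_upscale(frame, 2))
print(output.shape)  # (1440, 2560)
```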
 
Associate · Joined: 6 Dec 2013 · Posts: 1,881 · Location: Nottingham
No, I was telling you how you can benefit from using DLSS.



For me it's obvious: AMD knew where Nvidia were going with both DLSS and ray tracing, but instead chose to focus on console budgets. Prove me wrong.



It's true AMD could render at a lower source resolution than DLSS and then, using an upscaling algorithm, produce a better image than DLSS in a shorter period of time without disrupting any of the rendering of the next frame, but my 40 years of coding tells me that's a long shot...

You, on the other hand, would like to argue otherwise because?
Because, again, you're jumping to conclusions, and my experience managing developers/coders tells me they often get it wrong.
 
Associate · Joined: 1 Oct 2009 · Posts: 1,033 · Location: Norwich, UK
Ideally you don't want dedicated hardware inside your chip for anything. My bet is that in 3 generations we won't even have dedicated hardware for RT. It is a handicap to use part of the chip for something that is not used all the time instead of adding more compute units that work all the time.
That does not mean the tensor cores in Ampere are a disadvantage; since the performance is equal to Big Navi's, they are an advantage. But an Ampere with more CUs instead of tensor cores would have been an even better option.
I said before that AMD is giving us less hardware for almost the same amount of money, and it is true. I am not saying that Big Navi cards are bad or that they are "console chips", as our friend calls them. They are a big leap compared to the older AMD generation and they can handle almost every game you throw at them, as long as it is not sponsored by Nvidia. There is nothing new under the sun. :D

The problem with RT, and every tech Nvidia were promoting heavily, is that if you focus too much on graphics you can miss other important things a game needs to be a great game. And then you have Days Gone, which was not the most awesome game on PS4 by any means, coming to PC with better ratings than any RT game made so far. This should tell us something about the quality of PC games.

I don't think that's ideal at all; in fact we do pretty much the opposite, even with modern general purpose CPUs. Modern CPUs contain all sorts of hardware-specific acceleration for different instructions rather than the whole CPU being used for general purpose math. The point is that hardware acceleration can be orders of magnitude faster than general purpose computation, so much so that 3 generations of CPU improvement in general purpose compute can be essentially irrelevant in bridging the performance gap you're talking about. One of the corollaries of that is that dedicated hardware often only needs to be a tiny fraction of the total area of a chip, so it has almost no impact on general purpose compute speed.

It will always come down to use case, however. GPUs and CPUs are targeted at the average consumer and so strike a ratio of dedicated hardware compute to general purpose compute that makes sense for that consumer; it's a trade-off. For example, Nvidia dedicating more chip area to RT and Tensor cores doesn't seem to have impeded their ability to compete at rasterization for the average consumer. That's in large part because most consumers are still using 1080p and we now have many times more rasterization power than we need to drive that, so pure rasterization and chasing high frame rates is really the only place AMD can boast a win this round. Exceptionally high frame rates at 1080p-1440p are something the vast majority of gamers cannot appreciate on 60Hz monitors, and so are largely irrelevant. Nvidia gambled that gamers would look at GPUs and decide the extra rasterization frame rate isn't a meaningful upgrade for them, but new graphical features like RT might be. I also expect that future Nvidia generations will increase the area dedicated to RT at a faster rate than that for rasterization, and AMD will confirm this move by doing the same themselves.

This is all relative anyway, and anyone who's been in the hardware and gaming sphere for 20+ years will know this pattern intimately. New things come along all the time, hardware needs to be dedicated to them, and there's always this dumb debate about whether it's worth it or not; eventually the new thing becomes the old thing and we start the whole debate over again with the new new thing. It happened with the move from 2D to 3D rendering - back then 3D acceleration was a whole new piece of hardware on top of your 2D accelerator. Then when T&L came along we argued about it, then with the various progressions of DirectX, then increased floating point precision, hardware optimizations for MSAA and other AA types, then investment in shader power, and now RT. Once RT is normal and some newfangled thing comes along, people will argue about that as well and everyone will move on to accept RT inclusion as the new normal. This is all very normal and predictable.
 