RX 6800 good enough?

My take, based solely on my experience with Cyberpunk and DLSS at 3440x1440 on my monitor and my OLED TV:

- Native still "looks" better than DLSS set to Quality at 3440x1440.
- The DLSS Quality setting is very good though, and much better than any other kind of down/upscaling or mix of sharpening methods (redux/RIS etc.).
- The performance boost from DLSS at Quality is superb and worth the slight degradation in sharpness/clarity; give me a smoother experience over pin-sharp image quality.
- With ray tracing AND DLSS set to Quality, there is some very noticeable blurring in reflections on wet road surfaces (maybe only in Cyberpunk??).
- There is some trailing/ghosting with DLSS; I'm not sure if this is just Cyberpunk or other games where DLSS is used too. To most people it probably isn't noticeable, but I'm very sensitive to motion clarity, which is why I prefer my OLED screen over a 144 Hz LCD.

Summary and what I prefer:

When playing games on my OLED TV, I'll still take using a lower resolution over DLSS etc., as the image quality even at 1920x1080 is superior to native/DLSS at 3440x1440 on the LCD monitor :D

But if I were using my monitor and my FPS was below 80, then I would definitely use DLSS.
 
No one cares if the game is actually rendered at a lower resolution (at least, no one should, unless they're waving e-peen) if the game looks as good as it does at native. As I say, the difference between native and DLSS is so negligible that if you sat most people in front of it with DLSS already on and said, "Right, this is native, now we'll turn DLSS on", then turned DLSS off and said, "Right, this is with DLSS on", you'd catch them out because they wouldn't realise.

So the fact is that hardware which runs DLSS without a performance cost allows an additional increase in framerate compared to a GPU without that hardware.

I'm not sure what you're saying. It's obvious that rendering at 1440p rather than 4K is going to cost less. DLSS allows rendering at a lower resolution and upscaling to a higher one; that sort of thing can degrade image quality, but DLSS does the best job of retaining it, to the point where who cares if it's native or not when the quality is excellent.
What I said is that both cards are capable of upscaling, even if the AMD card does not have dedicated hardware. So if both were using a similar technology, let's say DLSS, Nvidia would get more FPS depending on the rendering resolution: if you get 200 FPS at 1080p, then you get up to 200 FPS at 4K upscaled from 1080p (roughly DLSS Balanced); if you get 100 FPS at 1440p, then you get up to 100 FPS at 4K upscaled from 1440p (roughly DLSS Quality), and so on.
Radeon will do something like up to 180 FPS at 4K upscaled from 1080p (the DLSS Balanced equivalent) or 80 FPS at 4K upscaled from 1440p (the DLSS Quality equivalent). There is a higher performance cost, but it is still far better than running the game at native 4K.
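A rough back-of-the-envelope way to express that reasoning (the overhead fractions below are purely assumed for illustration, not measured figures for either vendor):

```python
# Sketch: estimate upscaled FPS from the FPS at the internal render resolution.
# The overhead values are assumptions for illustration only, not measurements.

def upscaled_fps(fps_at_render_res: float, upscale_overhead: float) -> float:
    """FPS after upscaling, given the FPS achieved at the internal render resolution.

    upscale_overhead is the fraction of frame time the upscaling pass itself adds
    (0.0 = effectively free, e.g. fully offloaded to dedicated units).
    """
    frame_time = 1.0 / fps_at_render_res       # seconds per rendered frame
    frame_time *= 1.0 + upscale_overhead       # add the cost of the upscaling pass
    return 1.0 / frame_time

print(upscaled_fps(200.0, 0.00))   # 200.0  -> "up to 200 FPS at 4K from 1080p"
print(upscaled_fps(200.0, 0.10))   # ~181.8 -> roughly the "180 FPS" figure above
print(upscaled_fps(100.0, 0.25))   # 80.0   -> roughly the "80 FPS" figure above
```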
 
Since when does it take dedicated hardware to downscale or upscale... oh wait, it doesn't.
It does help. Even if it were doing the upscaling at the same speed (though I think it is doing it faster), it means the upscaling does not eat into the resources needed for rendering. If you use part of those resources for upscaling, you will get lower FPS.
 
I understand what you are saying, but again it isn't needed. It may help a little, but upscaling requires far less grunt than native rendering of course, so the point still stands: we don't really need dedicated upscaling hardware. Surely that would actually be a waste of die space.
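A crude way to see the "less grunt than native" arithmetic is just pixel counts; note this deliberately ignores the cost of the upscaling pass itself, which is exactly the part being debated:

```python
# Rough pixel-count comparison: shading work scales (very roughly) with the
# number of pixels actually rendered before upscaling.
native_4k   = 3840 * 2160      # 8,294,400 px
render_1440 = 2560 * 1440      # 3,686,400 px
render_1080 = 1920 * 1080      # 2,073,600 px

print(native_4k / render_1440)  # 2.25 -> ~2.25x fewer pixels shaded than native 4K
print(native_4k / render_1080)  # 4.0  -> ~4x fewer pixels shaded than native 4K
```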
 
I'm still unsure what you're getting at; it seems like you're pointing out the obvious about part of what DLSS does. DLSS is superior to AMD's offering (they don't have one), and AMD don't have dedicated hardware, so even when one comes to the table, no dedicated hardware vs dedicated hardware = performance cost, and performance cost = lower performance for the same effect.


Tell Nvidia that their dedicated hardware is a waste of die space (I think they know what they're doing).
 

Yup, you don't need dedicated hardware to upscale; the point is that it won't be as good as Nvidia's DLSS. Will it be better than using native resolution? Yes, no one is denying that.
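On the "you don't need dedicated hardware to upscale" point, a plain spatial upscaler really is trivial. A minimal nearest-neighbour sketch in NumPy, purely illustrative and nothing like the temporal, ML-based reconstruction DLSS actually does:

```python
# Minimal nearest-neighbour upscaler: ordinary array indexing, no special
# hardware required. Illustrative only; DLSS is a temporal, ML-based
# reconstruction and far more sophisticated than this.
import numpy as np

def upscale_nearest(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Upscale an (H, W, C) image to (out_h, out_w, C) with nearest neighbour."""
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h      # source row for each output row
    xs = np.arange(out_w) * w // out_w      # source column for each output column
    return img[ys[:, None], xs]

frame_1440p = np.zeros((1440, 2560, 3), dtype=np.uint8)   # dummy 1440p frame
frame_4k = upscale_nearest(frame_1440p, 2160, 3840)
print(frame_4k.shape)   # (2160, 3840, 3)
```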

Let's use Cyberpunk as the example here:

[Cyberpunk 2077 DLSS benchmark screenshot]

The Quality setting is the only one I would use here personally, as anything else noticeably downgrades clarity, at least to my eyes...

So when using DLSS here, you are getting 51.7 FPS on average with dips to 45. Now, if AMD can come up with something similar to DLSS, taking into account that they don't have hardware dedicated to this like Nvidia's GPUs do, there is a "chance" they will not see as much of a boost as Nvidia, i.e. AMD might only get somewhere between 35-45 FPS on average, and the lows could drop to 30 or even less with their version of DLSS.

For myself, and I imagine anyone else sensitive to stutter/low FPS, that is quite the difference. When already at such low FPS, 51.7 feels/looks a lot more playable than a 35-45 average, and that's not even factoring in the 1% and 0.1% lows...
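Converting those averages into frame times shows why the gap feels bigger than the raw FPS numbers suggest (straight arithmetic on the figures quoted above, nothing more):

```python
# Frame time in milliseconds for the averages being compared.
for fps in (51.7, 45.0, 35.0, 30.0):
    print(f"{fps:5.1f} FPS -> {1000.0 / fps:5.1f} ms per frame")
# 51.7 -> 19.3 ms, 45 -> 22.2 ms, 35 -> 28.6 ms, 30 -> 33.3 ms
```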

The only thing that could make AMD's version much more appealing on their current cards is if they can make it work in every game through their drivers, without the need for game developers to incorporate it.

Again, it's worth having a read about the Google Pixel phones and their dedicated processing chip for anything "AI"-like. Obviously a different use case, but it's a fascinating read on Google's use of AI and dedicated hardware.

Just one of the many articles:

https://www.androidauthority.com/google-pixel-4-neural-core-1045318/


If monitors had better scalers, like the ones in LG's OLED TVs, there wouldn't be as much of a need for DLSS etc. though ;)
 
Pretty sure the Nvidia driver and the DGX-1 do most of the DLSS 2.0 work, not the dedicated RT cores. Those usually handle the ray tracing grunt (too simplified?) and only a minor part of DLSS 2.0, or did I misunderstand? In any case, I'm not disputing that hardware helps; you can't replace hardware with software, after all. But the whole "DLSS requires dedicated GPU hardware" claim is pure nonsense. It also requires external servers too :D
 

https://www.nvidia.com/en-gb/geforce/news/nvidia-dlss-2-0-a-big-leap-in-ai-rendering/

"Powered by dedicated AI processors on GeForce RTX GPUs called Tensor Cores, DLSS 2.0 is a new and improved deep learning neural network that boosts frame rates while generating beautiful, crisp game images. It gives gamers the performance headroom to maximise ray tracing settings and increase output resolutions."
 
Most of the work is done on the servers, but then you need something in the card to run the maths. Of course, Nvidia will make you think your video card is a genius when in fact it is dumber than a Voodoo card from the '90s. :D
But the dedicated hardware is specialised in running this type of maths. It can be done without it, as the Xbox developers explained, but then you are using resources that are needed for rendering, so you will lose some FPS.
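For what it's worth, the "type of maths" in question is nothing exotic: the building block these AI units accelerate is a fused matrix multiply-accumulate, which general-purpose ALUs can also run, just more slowly. A tiny illustrative sketch (this is not how DLSS itself is implemented):

```python
# Illustrative only: D = A @ B + C in mixed precision, the kind of operation
# tensor cores accelerate. General shader ALUs can compute the same thing,
# just without the dedicated throughput.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 16)).astype(np.float16)   # low-precision inputs
B = rng.standard_normal((16, 16)).astype(np.float16)
C = rng.standard_normal((16, 16)).astype(np.float32)   # higher-precision accumulator

D = A.astype(np.float32) @ B.astype(np.float32) + C    # accumulate in float32
print(D.shape)   # (16, 16)
```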
 
Tensor cores are a part of the equation, sure, but they aren't doing the work; the DGX-1 does the work and the driver tells the tensor cores how to handle it. There's no DLSS without the DGX-1, though. Who's to say tensor cores can't be replaced with traditional cores for upscaling? They are, after all, selling proprietary tech for a living.
 
We all know you don't need a specialised core to run maths. :D:p You are right of course, though it will run a little slower.
 
It should be fairly straightforward to understand that we have been upscaling for over 20 years now without dedicated hardware.
According to Nvidia they invented the GPU, but that's a whole other can of worms... I think you missed the fact that I was merely trying to be funny, since I find it funny that people think you need specialised cores for upscaling and downscaling. :D:p

Upscaling requires less grunt than native... How can it require more to provide less?
 

I think Nvidia know more about this than you do. You seem to be in denial.
 
There's only one of me. Denial about what? I think DLSS 2.0 is a great feature, but I'm not convinced it couldn't be done without dedicated hardware; as I said, I've been upscaling for roughly 20 years now. Corporations love buzzwords like "AI" and "cloud"; it helps them sell more. ;):)
 
Whether to use dedicated hardware at all is another discussion: do you spend part of the chip on dedicated hardware, or do you add more compute units for native rendering and use those for upscaling?
Nvidia bet on dedicated hardware, but that is also because you can manipulate things with dedicated hardware: sponsor games that will only run decently at 1080p unless upscaled, or use a denoiser that only works well with your dedicated hardware in RT games. You can do a lot of things when you have a proprietary solution and a big wallet. Everything will change once DirectML becomes a standard, but for now Nvidia has a big advantage in gaming and they know how to use it, even against their own clients (those who own a Pascal card, or even a weaker Turing card).
 
Also, have a look at madVR for MPC-HC and see how much it hammers GPUs with certain scaling algorithms ;) Just because it can be "used" without dedicated/suitable hardware doesn't mean you should use the best/hardest-hitting scaling algorithm, otherwise the frame latency for media playback jumps to 20+ ms with frame drops/skipping...
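To put that 20+ ms into perspective, here is the standard frame-budget arithmetic (not madVR measurements, just the maths of refresh rates):

```python
# Frame budget at common refresh/playback rates: if the scaling pass alone
# takes longer than this, frames get dropped or repeated.
for hz in (23.976, 60.0, 120.0):
    print(f"{hz:>7.3f} Hz -> {1000.0 / hz:5.2f} ms per frame")
# 23.976 Hz -> 41.71 ms, 60 Hz -> 16.67 ms, 120 Hz -> 8.33 ms
# A 20+ ms scaling pass already blows the 60 Hz budget on its own.
```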
 
Cheers nexus, will check it out, and thanks for the Google link too. You are indeed correct, but until we have some competition we don't know to what extent yet. I'm a bit more sceptical of the Nvidia dedicated-hardware requirement claim, obviously. But we shall see, of course.
 