FidelityFX Super Resolution in 2021

Caporegime | Joined: 18 Oct 2002 | Posts: 32,618
I don't think that this part of the marketing is wrong (that it is able to accelerate tensor math). If it were just AMD, maybe it wouldn't be true, but this also has MS and Sony behind it. Microsoft said the same thing, that they added a lot of hardware for ML.

I am not saying the way AMD did it is better; it would have been better if they had added 20% more CUs to the card. I have already said in the past that Nvidia packs more hardware inside the chip than AMD does, so the tensor cores are something extra. If the native performance is the same, that means the tensor cores come on top, and on that count AMD is the worse solution.
But I think it is better to have more CUs that can also do the tensors' work when needed than to have tensor cores that do nothing when you aren't using upscaling.


I never said the marketing is wrong. I said you are misunderstanding it and making a false equivalence between tensor cores and some additional ops AMD added in RDNA2.

Adding more CUs in itself provably does not give you much of a performance increase, as shown by Ampere's massive jump in CUDA compute throughput compared with its more modest gaming performance increase.

Specialised hardware will always outperform more generic hardware. In this case, tensor cores are hundreds of times faster than CUs.
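To get a rough feel for that gap (not the exact "hundreds of times" figure, which refers to raw per-SM tensor-op throughput), here's a minimal sketch assuming a CUDA-capable RTX card with PyTorch installed: it times a large matrix multiply in FP16, which gets routed to the tensor cores, against the same multiply in plain FP32 on the general-purpose ALUs. The `bench` helper and sizes are just my own placeholders.

```python
# Minimal sketch (assumes PyTorch with CUDA on a Turing/Ampere RTX card):
# time a big matmul in FP16 (routed to tensor cores) vs FP32 on the
# general-purpose ALUs. Numbers vary a lot by GPU; treat as illustrative.
import time
import torch

torch.backends.cuda.matmul.allow_tf32 = False  # keep FP32 off the tensor path on Ampere

def bench(dtype, n=8192, iters=20):
    a = torch.randn(n, n, device="cuda", dtype=dtype)
    b = torch.randn(n, n, device="cuda", dtype=dtype)
    a @ b                              # warm-up
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        a @ b
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters

fp32 = bench(torch.float32)
fp16 = bench(torch.float16)            # tensor-core path
print(f"FP32: {fp32*1e3:.1f} ms  FP16 (tensor cores): {fp16*1e3:.1f} ms  "
      f"speed-up ~{fp32/fp16:.1f}x")
```

The measured ratio depends heavily on the card and matrix size, but the direction is consistent.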
 
Caporegime | Joined: 18 Oct 2002 | Posts: 32,618
Tbh I would like some actual "proof" that the tensor cores are actually doing work in current DLSS 2.1 games. It feels like they're just there. Do they not consume power? If so, how much? Don't they have any kind of "utilization" metric that we can follow? If they do, why is it not exposed to hardware monitoring tools? Why do Nvidia's GPUs feel like a black box?


If you use Nvidia's GPU debugger you can see the tensor core utilization.

Nvidia's GPUs may feel like a black box if you are a gamer. If you are a developer you can get a lot more insight, but the tools are complex and have no value for gamers.

EDIT:
e.g.
https://developer.nvidia.com/blog/using-nsight-compute-nvprof-mixed-precision-deep-learning-models/

https://developer.nvidia.com/nsight-graphics
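As a concrete (and hedged) example of what those tools expose, the sketch below shells out to nvprof to read its documented tensor-core utilization counter for a CUDA workload. The binary name is made up, and on Turing/Ampere and newer you would use the equivalent tensor-pipe metrics in Nsight Compute (ncu) instead, since nvprof metric collection only covers older architectures.

```python
# Sketch: read nvprof's tensor-core utilization counter for a CUDA app.
# Assumes nvprof is on PATH and the GPU is old enough (Volta-era) for
# nvprof metric collection; newer cards need Nsight Compute (ncu) instead.
import subprocess

app = "./my_cuda_app"   # hypothetical workload binary
result = subprocess.run(
    ["nvprof", "--metrics", "tensor_precision_fu_utilization", app],
    capture_output=True, text=True,
)
print(result.stderr)    # nvprof prints its metric table to stderr
```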
 
Caporegime | Joined: 18 Oct 2002 | Posts: 32,618
8-bit gaming, really? There's a reason why FP16 is the very bottom end for math operations in gaming. This isn't 2004, you know.


Who said anything about 8-bit gaming? You seem to have no idea why different computations can use different precisions.
 
Soldato | Joined: 17 Jun 2004 | Posts: 7,612 | Location: Eastbourne, East Sussex
Who said anything about 8-bit gaming? You seem to have no idea why different computations can use different precisions.


INT8 is only about 5% (up to 10% in the best case) faster than FP16 with similar inference (Same inference speed for INT8 and FP16 - Deep Learning (Training & Inference) / TensorRT - NVIDIA Developer Forums), but it also suffers from accuracy issues (see other NV dev threads); whilst it's useful for DL coding, FP16 as a minimum is still the way forward.
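To make the accuracy side of that trade-off concrete (speed is a separate question), here's a small self-contained sketch using NumPy: it round-trips the same random data through FP16 and through symmetric INT8 quantization and compares the error. The data and the scaling scheme are just placeholders, not anyone's production pipeline.

```python
# Illustration of the accuracy trade-off: round-trip a tensor through FP16
# and through scale-quantized INT8, then compare the reconstruction error.
import numpy as np

x = np.random.randn(1_000_000).astype(np.float32)

# FP16 round trip
err_fp16 = np.abs(x - x.astype(np.float16).astype(np.float32)).mean()

# Symmetric INT8 quantization: scale into [-127, 127], round, dequantize
scale = np.abs(x).max() / 127.0
q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
err_int8 = np.abs(x - q.astype(np.float32) * scale).mean()

print(f"mean abs error  FP16: {err_fp16:.6f}   INT8: {err_int8:.6f}")
```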
 
Associate | Joined: 6 Dec 2013 | Posts: 1,884 | Location: Nottingham
Specialised hardware will always outperform more generic hardware. In this case, tensor cores are hundreds of times faster than CUs.

At one specific task, and they are crap at everything else. Specialised hardware also becomes redundant much quicker; I'm pretty sure tensor cores are Nvidia finding a way to say "hey, we've got new stuff specific to us, give us your money". It's like when it took them a year to render Toy Story back in the day on specialised hardware; we can now do it on desktops in much less time (i.e. real time). Your argument about specialised hardware over non-specialised hardware is completely redundant point-scoring so you can feel better about the choice you made. Give it a break already.
:D:p
P.S. Not saying Nvidia is doing it wrong at all, just saying there are other ways to do things. It doesn't make them better or worse, but I'd rather have two options than one, and at the moment Nvidia's single approach means market domination, and none of us want that, including you fanboys. :)

P.P.S. I've argued for both sides in the past.
 
Associate | Joined: 31 Dec 2011 | Posts: 834
Another thread where people are arguing over nothing :-S Personally I am stoked that AMD is doing this, and if it means I get a significant uplift in fps when I need it then great. If the Nvidia guys like DLSS 2 then great, be happy, but why are you upset at a new open standard that brings what looks like a great feature to the rest of us? Surely it's all about the gaming?

Hopefully this feature will find its way across to my PS5 as well.
 
Caporegime | Joined: 8 Jul 2003 | Posts: 30,062 | Location: In a house
Some are saying it's the end of DLSS lol, when this is actually great news for Nvidia: it means their older GPUs (the ones below Turing) can get FSR, while their newer ones (Turing and up) can get DLSS.

It's a win for 'em :p
 
Soldato | Joined: 18 Feb 2015 | Posts: 6,489
The screenshot says native 4K vs Ultra Quality? https://www.overclockers.co.uk/forums/posts/34837742

What I meant by it is that the comparison for assessing its worth should be between alternative upscaling solutions and FSR.

So why would you need another upscaling method in UE5 games if it's that good? Just use TSR and that's it. The quality is good enough and no other method will give you more FPS.

It's not just about UE5 though, and the point is that if you can't even match their implementation then you cut out a large part of the market in terms of how worthwhile it is. For example, Nvidia decided they would simply be the best, so they put out DLSS as a quality alternative (but one which requires hardware buy-in). So why spend resources there at all? It's not as if AMD is drowning in software devs; they have plenty of other things to work on, such as AMD's abysmal support in most professional workloads, particularly anything AI related. That's why I'm saying this is just a marketing stunt, no different from what DLSS 1.0 was, except that Nvidia used that influence to buy time and then push out something worthwhile in DLSS 2.0. In Nvidia's case they could do that because that's the nature of AI. In AMD's case, with what they've chosen to do, they're stuck, and nothing short of a complete rework (and abandoning a lot of other users, maybe including consoles) can compete. Except that by doing it this way they have spent a lot of time badly and are further behind than they were initially.

And besides, they could've done it much better by putting forth a general TSR-like equivalent (and thus accessible to consoles too), which at least would've been helpful, because then that's a variant every dev has access to (not just Unreal users), it would save them some time, and it would still be useful tech. Instead, by choosing to restrict themselves to spatial information only, AMD chose literally the worst possible solution qualitatively, barely better than doing nothing and just telling devs to add CAS (a la FidelityFX CAS upscaling). This is basically FXAA 1.5 plus sharpening.
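For anyone unsure what "spatial-only upscale plus sharpening" means in practice, here's a toy sketch using Pillow. To be clear, this is not AMD's actual FSR algorithm (its EASU/RCAS passes are considerably more involved) and the filenames are made up; it just shows the general class of technique being argued about: one frame in, no motion vectors, no temporal history, upscale then sharpen.

```python
# Toy "spatial-only upscale + sharpen" -- NOT AMD's actual FSR algorithm,
# just the class of technique under discussion: single frame in, no motion
# vectors, no temporal history. Assumes Pillow is installed.
from PIL import Image, ImageFilter

def spatial_upscale(frame: Image.Image, scale: float = 1.5) -> Image.Image:
    w, h = frame.size
    # 1. Spatial upscale from the lower render resolution (current frame only)
    up = frame.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    # 2. Sharpening pass to recover apparent detail (unsharp mask here,
    #    standing in for a CAS-style contrast-adaptive sharpen)
    return up.filter(ImageFilter.UnsharpMask(radius=2, percent=80, threshold=2))

low_res = Image.open("frame_1440p.png")                       # hypothetical rendered frame
spatial_upscale(low_res, scale=1.5).save("frame_2160p.png")   # ~1440p -> ~4K
```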

Is it just me or is history repeating itself here?

I am getting FreeSync vs G-Sync vibes.

I remember G-Sync being touted as the better choice because it had a £100 hardware module.

Well, look how that ship sailed.

If AMD can pull off FSR so it looks just like native, or thereabouts (because I am sure that, like DLSS, it won't be perfect), then does it really matter what approach each graphics manufacturer chooses?

So long as all users get this upsampling feature, I couldn't care less that DLSS gives a slightly better image where, if you zoom in 10x, you can spot the detail on a leaf.

It's not the same at all. With FreeSync vs G-Sync, FreeSync could be qualitatively equivalent to G-Sync; the difference was that monitor vendors generally chose to skimp on QA, so in practice it didn't always end up that way. Here, on the other hand, FSR will never be equivalent to DLSS either qualitatively or performance-wise. So when you weigh up that feature it will skew disproportionately in favour of NV GPUs, meaning AMD will have to be that much faster without it (lol, good luck) for buying one to make sense, perpetual shortages aside. Never mind how far behind they are in RT performance; if we add all that up, RDNA 3 vs Hopper/Lovelace will be even more of a slaughter than RDNA 2 vs Ampere has been, except that here they've been lucky with the shortages (and with it still being a transition period between the previous gen and what's next-gen).

Tbh I don't know why I even keep paying attention to this; I need to get off my bum and arrange a sale for my 6800. I've just been hesitant to do anything in person, what with the bug and all. It's clear to me now that AMD is going to go into another coma period where they try to live solely off the console space while recalibrating for a future when they (hope to) catch up to NV, but right now they're making all the wrong moves on the GPU front, and having an AMD GPU will be a major mistake for the next two gens (at least).

Time to keep an eye out for an LHR 3060 Ti.
 
Soldato | Joined: 4 Feb 2006 | Posts: 3,223
Another thread where people are arguing over nothing :-S Personally I am stoked that AMD is doing this, and if it means I get a significant uplift in fps when I need it then great. If the Nvidia guys like DLSS 2 then great, be happy, but why are you upset at a new open standard that brings what looks like a great feature to the rest of us? Surely it's all about the gaming?

Hopefully this feature will find its way across to my PS5 as well.

The fanboys are upset because FSR could dethrone DLSS or make it less likely to be implemented in games. They worship Nvidia and would be over the moon if AMD left the GPU market altogether.
 
Soldato (OP) | Joined: 6 Feb 2019 | Posts: 17,831
Yeah, because AMD would give access to the features before anyone else, wouldn't they :rolleyes:

Keep on the damage-control path.

It's the same idea, yeah, but Trixx is a solution that only works on Sapphire cards, and that's the problem; FSR packages the feature into an API that's compiled into the game for maximum compatibility. Same result, different implementation.
 