Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
All I'm doing is calling people out here every time they mislead, and posting a link to the real facts. Posting an informed opinion is not against the rules in any form. Brigading a thread with pro-AMD nonsense so that other normal people can't post? Well, work it out.
You are lucky if you got either the 3080 or 6800 XT. The 3090 can do 8K with DLSS in ultra performance mode. No card can do 8K @ 60 fps in many modern games with native rendering; you can do 8K with older games. The 6000 series has too little memory bandwidth for 8K gaming and too few cores. Today you are happy if you can do 4K @ 60 fps in an RT title.
If your system is not Ryzen 5000 based, then it's likely the 6800 XT will lag the 3080 at all resolutions and be slower in RT games: https://www.techpowerup.com/review/amd-radeon-rx-6800-xt/35.html https://www.techpowerup.com/review/amd-radeon-rx-6800-xt/30.html. On a Ryzen 5000 based system the benchmarks are closer, but the 3080 still wins at 4K; at 4K the 6800 XT's lower core count puts the 3080 ahead. https://www.guru3d.com/news_story/r...t_to_be_7_4_slower_than_geforce_rtx_3080.html The 3080 also has more memory bandwidth, which helps at higher resolutions and with RT denoising.
I hope you're joking. 8K is higher quality than real life, our brains just haven't been activated / woke yet.
We haven't seen AMD's tech yet, and we don't know its quality or speed. Repeating "Nvidia is faster" when there is nothing to compare it to is pointless.
Nvidia has DLSS; it's cool, it works, you can upscale and play at 8K. I think everyone agrees on that. But like lots have said, it's not a silver bullet in its current form.
AMD does not. If it's a feature you will use (the games you play support it), then add that to the "pro" pile for Nvidia.
https://arxiv.org/pdf/2001.05585.pdf
"While GPU cores are capable of executing a whole instruction set (i.e., the instructions used in a regular CUDA/OpenCL program), tensor cores are capable of executing one operation but significantly faster; a matrix multiply accumulate (MMA) over 4×4 matrices, in one GPU clock cycle."
DLSS performs better because it blurs frames temporally, basically rendering fewer frames for parts of the image. Moving parts of scenes look super blurry, even at short distances. It's often less effective than TXAA / other temporal AA types at removing aliasing.
GPUs need more of their own cores/CUs and faster memory. The 3090 is the closest card we have to 8K @ 60 fps at max settings. If you turn the settings down you can get 8K gaming going on the 3090 @ 60 fps. The 6800 XT can't; it lacks the performance. Stop making crap up. You have been told over and over when you post total nonsense.
I hope you're joking.
What real facts? You only show some graphs. AMD and Nvidia showed graphs at their own events, and oh how wrong those were compared to real life. Like the 3080 being twice as fast as the 2080, and that was only at 4K, with a much smaller difference at 1080p and 1440p. AMD showed faster than the 3080, and you can't apply those graphs either. Intel shows graphs being faster than AMD CPUs and real-life tests show how wrong that is. I am not sure where you get your "facts" from, but a 7% difference at 4K is irrelevant when the price is also a 7% difference at MSRP. AMD's RT is still under ongoing development; drivers can change performance (we saw that with Turing too, and we can see it with Watch Dogs: Legion, where a game patch can solve a lot). AMD has room to improve through drivers; it's only a two-week-old release. The DLSS alternative is still to come, and we have no basis for comparison until it arrives; we only know it's based on DirectML.
What real facts? The only real facts I can get right now are the fps counters I see on streamers who have their Steam fps overlay shown. That is a fact, not a useless graph that anyone can make and say "hey, I have an RTX 3000 and an RX 6000" when there's a high chance they don't.
You're just ******** on a product that you clearly don't have, just for the sake of arguing, without real arguments. High chance you don't even own one yourself. You're just that bunch that cries "next-gen GPU is faster than next-gen console" when you don't even have a PC to match either of them. I am highly sure you have a next-gen GPU, just like the rest of this forum has two 3090s in NVLink.
Instead of making blanket statements about the validity of what I said, try actually stating what's wrong with it.
You can't; you have no idea what you are talking about. Everything I said there is factually true. Ask me why RDNA2 does better at low resolutions while Ampere does better at higher resolutions, and I don't have a simple answer for you; no one does, because there are a number of potential reasons for it, and none of them have anything to do with how many cores they have, which differ by 2.5%: 82 on the 3090 vs 80 on the 6900 XT (the 6800 XT has 72, but that's a 3080 competitor).
PS: they are both as bad as each other at 8K; the 3090 is NOT an 8K card.
AMD showed faster... https://www.techpowerup.com/review/amd-radeon-rx-6800-xt/35.html
One example is the hasty generalization highlighted in bold. "AMD can improve their drivers" is opinion. AMD is a company that has always spent little on its drivers.
AMD will be compute. Nvidia will be tensor. Tensor will win by a mile because it's faster.
https://arxiv.org/pdf/2001.05585.pdf
You just need special training to see the truth, and it's actually really cheap, look:
https://www.scientology.org/curious/
Tensor cores are designed for only one job, and they do it faster. They execute in one clock cycle, making them stupidly faster than compute. Compute is slower but able to do more general things with fewer limitations.
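To make the "one job" concrete: the operation the arXiv paper describes is a matrix multiply accumulate, D = A × B + C, over 4×4 tiles. A tensor core does the whole thing as one fused hardware operation; this is just an illustrative NumPy sketch of the math, not how you'd actually drive the hardware.

```python
import numpy as np

def mma_4x4(a, b, c):
    """Matrix multiply accumulate over 4x4 tiles: D = A @ B + C.

    A tensor core performs this as a single fused operation per clock;
    general-purpose compute cores would need many separate instructions.
    """
    assert a.shape == b.shape == c.shape == (4, 4)
    return a @ b + c

a = np.arange(16, dtype=np.float32).reshape(4, 4)
b = np.eye(4, dtype=np.float32)   # identity, so A @ B == A
c = np.ones((4, 4), dtype=np.float32)
d = mma_4x4(a, b, c)              # equals a + 1
```

Bigger matrix multiplies are built by tiling them into many of these small MMAs, which is why the per-tile speedup compounds across a whole workload.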
And again, more pointless graphs, do you have any real proof?
??? Then you have to qualify that with "with DLSS", which is not 8K.
It's simple: the 3090 does not have "double the cores", it has 2.5% more cores.
8K with DLSS is not 8K. If it's 2K upscaled to 8K with post-processing, it's a 2K render.
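For scale, here's the back-of-envelope pixel math behind that point, assuming DLSS Ultra Performance renders at 1/3 of the target resolution per axis (so an internal 2560×1440 render for 8K output; the exact internal resolution is my assumption here):

```python
# Pixel-count comparison: native 8K vs the assumed DLSS Ultra Performance
# internal render resolution (1/3 per axis, i.e. 2560x1440 for 8K output).
target = 7680 * 4320      # native 8K pixel count
internal = 2560 * 1440    # assumed internal render resolution
ratio = internal / target # fraction of pixels actually rendered
print(f"internal render is {ratio:.1%} of native 8K")  # → internal render is 11.1% of native 8K
```

So roughly 8 out of every 9 output pixels are reconstructed rather than rendered, which is the whole argument for calling it an upscaled 2K render.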
Correct?