
The thread which sometimes talks about RDNA2

Why? What additional advantages will it have? Last I heard it may have inferior image quality but equal performance. Happy to be proved wrong though.

I think the main thing going for FidelityFX Super Resolution is that it's not tied to a HW implementation like DLSS. I don't think either will be the final form of this tech, though.
 
It's not that long ago that we used to render games at a higher res than our monitors supported.
Now here we are, where the future is glorified upscaling.
He is also "ignorant" because you don't need that much bandwidth if you use DLSS, so the 3000 series could have used a 256-bit bus if DLSS were the most important feature. But they were built for native 4K, and the 8K/60 fps or 8K RT with DLSS claims were marketing tricks.
AMD also claimed native 4K performance; that's why they told us the 128 MB Infinity Cache would solve the 256-bit bottleneck.
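For anyone who wants to sanity-check the bus-width argument, the raw numbers are easy to work out (a quick sketch using the published spec-sheet figures; the helper name is mine):

```python
# Theoretical bandwidth (GB/s) = bus width in bytes * effective data rate in Gbps
def bandwidth_gb_s(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(f"3090 (384-bit GDDR6X @ 19.5 Gbps): {bandwidth_gb_s(384, 19.5):.0f} GB/s")  # 936
print(f"6900XT (256-bit GDDR6 @ 16 Gbps):  {bandwidth_gb_s(256, 16.0):.0f} GB/s")  # 512
# AMD's pitch is that Infinity Cache hits push the effective figure well above 512.
```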
 
News at 10: faster cards are faster, and current GPUs can't game at 8K native.



Do you have Nvidia stock or something? Unless Nvidia opens up that solution for others to adopt (consoles etc.) OR makes the implementation game-engine agnostic, that will just never happen. Just look at G-Sync etc.

A DLSS-type technology that isn't HW specific may help out, but really no one will be looking at 8K rendering for a very, VERY long time, so for now that (and RT) are small fries in comparison to native rasterization performance. I mean, the 1440p/4K user base is still in the single percentage points.

From the Steam survey (main display resolution):

1080p: 65%
1440p: 7%
4K: 2%
Other (including 8K): 2.11%
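To put those figures in perspective, the pixel counts behind each step up are simple arithmetic (nothing vendor-specific here):

```python
# Pixels per frame at common resolutions, relative to 1080p
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
    "8K":    (7680, 4320),
}
base = 1920 * 1080
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name}: {px / 1e6:5.1f} MP  ({px / base:4.1f}x 1080p)")
# 8K pushes 4x the pixels of 4K and 16x 1080p per frame, which is why
# native 8K is out of reach for current cards.
```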

 
It can't; it's compute, and DLSS runs on tensor cores, which is faster.

We haven't seen AMD's tech yet; we don't know the quality or speed. Repeating "Nvidia is faster" when there is nothing to compare it to is pointless.

Nvidia has DLSS; it's cool, it works, you can upscale and play at 8K. I think everyone agrees on that. But, like lots have said, it's not a silver bullet in its current form.

AMD does not. If it's a feature you will use (the games you play support it), then add that to the "pro" pile for Nvidia.
 
The 30 series is better placed to do 8K, with double the cores, much higher memory bandwidth and DLSS. 8K games really need that, and the 6900XT and 6800XT are not suited. The 6000 series has half the number of cores and much lower memory bandwidth. With the weak RT of the 6900XT, only the 3090 can do 8K and RT, with DLSS of course. The video cherry-picks its games for the 8K resolution, which means it's more or less a waste of time. You can do that with any card, but the 3090 is the best placed to run 8K @ 60 fps.

The 30 series is better placed to do 8K, with double the cores

You really do read from the Jensen marketing handbook; it's no wonder some people think you are a literal Nvidia shill.

How many cores it has is utterly meaningless; the 10700K has the same number of cores as the 5800X, but the latter is around 30% faster per clock. Besides that, the 3090 has 82 SMs vs 80 CUs on the 6900XT, each unit carrying 64 dual-capable cores: 5120 ALUs (plus 80 Ray Accelerators) for the 6900XT, and 5248 ALUs (plus 82 RT cores) for the 3090. The only difference is that Nvidia counts each dual-capable SM core as two CUDA cores, since both halves can do FP32. Which is fair enough, but that leaves Nvidia with just 2 more units, i.e. 128 more shader ALUs (2.5%).
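Working that out explicitly, with 64 dual-capable ALUs per CU/SM (the framing used above):

```python
# 64 dual-capable FP32 ALUs per unit in this framing
cus_6900xt, sms_3090 = 80, 82
alus_6900xt = cus_6900xt * 64  # 5120
alus_3090 = sms_3090 * 64      # 5248
diff = alus_3090 - alus_6900xt
print(f"{diff} more ALUs on the 3090 ({diff / alus_6900xt:.1%})")  # 128 more (2.5%)
```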
 
DLSS performs better because it blurs frames temporally, effectively rendering fewer frames for parts of the image. Moving parts of scenes look super blurry, even at short distances. It's often less effective than TXAA / other temporal AA types at removing aliasing.
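For context on the "temporal" part: techniques in this family blend the current frame with a reprojected history buffer. A minimal, illustrative sketch of that accumulation step (the names and blend factor are mine; DLSS itself weighs samples with a learned model, not a fixed alpha):

```python
import numpy as np

def temporal_accumulate(current: np.ndarray, history: np.ndarray,
                        alpha: float = 0.1) -> np.ndarray:
    """Blend the current frame into the (already reprojected) history buffer.
    Lower alpha reuses more of the old frames: smoother image, but moving
    content lags behind -- the ghosting/blur trade-off described above."""
    return alpha * current + (1.0 - alpha) * history

# Toy example: a flat black history suddenly becomes white
history = np.zeros((2, 2))
current = np.ones((2, 2))
print(temporal_accumulate(current, history))  # each pixel moves only 10% per frame
```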
 
With all the arguing going on, I have one question this Friday.

Can I buy a card yet?

Nope, but the card can hit 2750 MHz. If you could, there would be even more exciting reviews being published as we speak and pictures of new AIB variants being put up. That should be worth your while. Kudos to AMD, RDNA2 has been a fantastic feat of engineering.
 
Cards like the Zotac 3090 are Nvidia's last stand before AMD stock arrives.

If people want to pay £1,650 for a 13-phase PCB that boosts 150 MHz lower than the competition, just because it's in stock, that's their call. Zotac's gonna have a good year.

The 'hideously overpriced' 16-phase PCB that Powercolor are running on the 6800XT, which comes in about 5 fps slower than the Zotac 3090 in SOTR, is apparently 2020's trash tier.

It could be worse, though: you could pay £1,750 for a PNY 3090.
 
You really do read from the Jensen marketing handbook; it's no wonder some people think you are a literal Nvidia shill.

How many cores it has is utterly meaningless; the 10700K has the same number of cores as the 5800X, but the latter is around 30% faster per clock. Besides that, the 3090 has 82 SMs vs 80 CUs on the 6900XT, each unit carrying 64 dual-capable cores: 5120 ALUs (plus 80 Ray Accelerators) for the 6900XT, and 5248 ALUs (plus 82 RT cores) for the 3090. The only difference is that Nvidia counts each dual-capable SM core as two CUDA cores, since both halves can do FP32. Which is fair enough, but that leaves Nvidia with just 2 more units, i.e. 128 more shader ALUs (2.5%).

GPUs need more of their own cores/CUs and faster memory. The 3090 is the closest card we have to 8K @ 60 fps at max settings. If you turn the settings down, you can get 8K gaming going on the 3090 @ 60 fps. The 6800XT can't; it lacks the performance. Stop making crap up, you have been told over and over that you post total nonsense.
 