That's the whole point of my question. Got an Asus TUF 3080 OC on order as I want RT. I just can't see AMD's gen 1 RT catching Nvidia's gen 2.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
> That's the whole point of my question. Got an Asus TUF 3080 OC on order as I want RT. I just can't see AMD's gen 1 RT catching Nvidia's gen 2.
Surprisingly no, AMD has spent more $
Who'd get one if it was equal to a 3080 in rasterisation but only had Turing-level RT performance? And if so, how much less would you want to pay for it?
This is pretty funny, when Ampere is Turing levels of RT performance
Why are ppl so obsessed with RDNA 2 being 80 CUs?
That probably wouldn't double the performance of, say, a GPU like the 40 CU 5700 XT. We know that because doubling the shader count alone did not double the performance of the RTX 3080 vs the RTX 2080 Ti.
Instead, there is around a 1/3 increase in performance over the RTX 2080 Ti. That's because increasing the core count only boosts the TFLOP count, not other areas like texture rate and pixel rate.
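For a sense of that gap, here's a quick back-of-the-envelope sketch (Python, my own illustration, not from anyone in the thread; the core counts and boost clocks are the public spec-sheet numbers):

```python
# Back-of-the-envelope check: paper TFLOPs vs the uplift reviews measured.
# Core counts and boost clocks are the public spec-sheet numbers.

def tflops(cores: int, boost_ghz: float) -> float:
    # FP32 TFLOPs = 2 ops per FMA * shader count * clock (GHz)
    return 2 * cores * boost_ghz / 1000

rtx_2080_ti = tflops(4352, 1.545)  # ~13.4 TFLOPs
rtx_3080 = tflops(8704, 1.710)     # ~29.8 TFLOPs

print(f"Paper compute ratio: {rtx_3080 / rtx_2080_ti:.2f}x")  # ~2.21x
print("Observed gaming uplift: ~1.33x (the ~1/3 figure above)")
```

On paper the 3080 has over twice the compute, yet games only show about a third more performance, which is the whole point about shader count not being the bottleneck.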
Assuming the Xbox Series X GPU die size is 170-200 mm² (not the whole APU), we could see an RDNA 2 GPU with double the die size, so around 400 mm².
The question is, would increasing the die size of a Series X-like GPU by 100% give a similar boost in performance? We know the Series X GPU is much more powerful than the Series S, and the whole APU is about 89.4% larger (both have practically the same CPU).
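Purely illustrative arithmetic on that assumption (the 170-200 mm² figure is the estimated GPU-only slice of the APU, not a confirmed number, and this ignores everything that doesn't scale with CU count, like memory controllers and I/O):

```python
# Scale the assumed Series X GPU area (52 active CUs) to an 80 CU part.
# Illustrative only: assumes area scales linearly with CU count.
series_x_cus = 52

for assumed_area_mm2 in (170, 200):  # the post's assumed GPU-only area
    mm2_per_cu = assumed_area_mm2 / series_x_cus
    est_80cu = 80 * mm2_per_cu
    print(f"{assumed_area_mm2} mm² for 52 CUs -> ~{est_80cu:.0f} mm² for 80 CUs")
```

On this naive CU-only scaling, 80 CUs land around 260-310 mm², well under the ~400 mm² a straight die doubling implies, so a doubled die would leave room for more than just CUs.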
In games that make heavier use of RT, the uplift over Turing is considerable; the gains are less significant when most of the frame still uses older non-RT rendering with only a handful of RT effects.
> which games?

Pong
> This is pretty funny, when Ampere is Turing levels of RT performance

Lolwhut? Ampere is no improvement whatsoever in RT performance over a Turing card?
> which games?

Quake 2, which is fully path traced.
> which games?
| Game | Settings (1440p) | RTX 2080 | RTX 2080 Ti | RTX 3080 |
|---|---|---|---|---|
| Control | DX12, High, High RT, TAA | 35 fps | 46 fps | 70 fps |
| Metro Exodus | DX12, Ultra, Ultra RT, TAA | 51 fps | 66 fps | 90 fps |
| Battlefield 5 | DX12, Ultra, Ultra RT, TAA | 70 fps | 92 fps | 115 fps |
| Quake 2 RTX | Vulkan, Max Settings | 31 fps | 41 fps | 62 fps |
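Taking those numbers at face value, a quick sketch (Python, my own tallying of the figures above) makes the pattern explicit:

```python
# 3080 vs 2080 Ti uplift from the 1440p fps figures quoted above.
results = {
    "Control":       {"2080 Ti": 46, "3080": 70},
    "Metro Exodus":  {"2080 Ti": 66, "3080": 90},
    "Battlefield 5": {"2080 Ti": 92, "3080": 115},
    "Quake 2 RTX":   {"2080 Ti": 41, "3080": 62},
}

for game, fps in results.items():
    uplift = fps["3080"] / fps["2080 Ti"] - 1
    print(f"{game:14} +{uplift:.0%}")
# Control +52%, Metro +36%, BF5 +25%, Quake 2 RTX +51%
```

The heavier the RT workload (fully path-traced Quake 2 RTX, RT-heavy Control), the bigger the Ampere-over-Turing gain; hybrid Battlefield 5 gains the least.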
> I will bet that whatever the "leakers" are saying is wrong

...because everything the leakers say is always wrong?
The biggest issue with this discussion is we don't know what we don't know. I will bet that whatever the "leakers" are saying is wrong and there will be a surprise or two.
As for RT, it's definitely the future but it still looks a generation or two away to me. Not 5 minutes ago the big arguments in here were over a few FPS of difference when we were already well over 100 FPS. Now, all of a sudden, less than 60 FPS, which was previously "unplayable", is acceptable because "shiny".
The same goes for vendor-specific technology. I'm not buying on the basis that something might be implemented if developers decide to, and with variable success at that. That has pretty much failed every time: SLI, Crossfire, PhysX... and right now ray tracing and DLSS support seems very sparse. This should hopefully improve over time.
> That probably wouldn't double the performance of, say, a GPU like the 40 CU 5700 XT. We know that because doubling the shader count alone did not double the performance of the RTX 3080 vs the RTX 2080 Ti.
>
> Instead, there is around a 1/3 increase in performance over the RTX 2080 Ti. That's because increasing the core count only boosts the TFLOP count, not other areas like texture rate and pixel rate.
So Ampere has shown that current games don't scale well past 5,000 cores; I wonder how that will affect Big Navi.
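One way to put a number on that (my own sketch, reading the scaling through a naive Amdahl's-law model; nothing here comes from the leaks): if roughly doubling the shader count only bought ~1.33x, then under that model only about half the frame time actually scales with shaders:

```python
# Naive Amdahl's-law reading of Ampere's shader scaling (illustrative).
# speedup = 1 / ((1 - p) + p / n), where p is the shader-limited fraction
# of frame time and n is the core-count multiplier.
observed_speedup = 1.33  # rough 3080 vs 2080 Ti gaming uplift
n = 2.0                  # shader count roughly doubled

# Rearranged: p = (1 - 1/speedup) / (1 - 1/n)
p = (1 - 1 / observed_speedup) / (1 - 1 / n)
print(f"Implied shader-limited fraction of frame time: {p:.0%}")  # ~50%
```

If that rough reading holds, piling on CUs alone buys Big Navi less and less unless the rest of the pipeline scales with it.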
> Fairly sure Microsoft and Sony bankrolled RDNA development. I think we are in for something special come the end of October.

Yes indeed, a smart move and partnership by AMD if, as people are saying, they are able to use the IP in dGPUs.
Different architectures. Just because something doesn't scale well on Ampere doesn't mean the same will be true for Navi.
Watching Nerdtechgasm's video on the Ampere architecture. It seems that Nvidia is going the way of GCN with a more compute-focussed card: Ampere's SMs can now run FP32 down both datapaths, which doubles the paper TFLOPs but helps compute workloads more than games. That is why you will see a greater speed-up on compute tasks with Ampere than on gaming tasks.
It is interesting how paths cross: Nvidia going for a more compute-focussed card and AMD going for a more gaming-focussed card.