No i don't, i'm still interested to know what @LambChop would do.
You know the answer to that one surely
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
Can you imagine the price of that titan though. £2.5k incoming!
Yeah, I think I do...
We could be wrong!
Edit: if RDNA2 has Variable Rate Shading it will perform better per clock, per shader, and RDNA already matches Turing's IPC without it.
I see this mentioned a lot and it just highlights a fundamental misunderstanding of VRS.
VRS is not free performance. It requires explicit developer support and is not a uniform performance enhancement in most cases, as the gain depends on what is being rendered (e.g., a smooth, uniformly lit textured surface can be shaded at a lower rate than a complex surface).
So far very few games support this, and where it is supported the performance gains are typically 2-4%. Turing already supports VRS, so any game that adds VRS in the future will automatically help Turing and Ampere.
There also isn't a difference in implementations, so it is not like AMD's VRS will magically be better than Nvidia's. The performance gains will be near identical.
As for IPC, that is impossible to compare between generations, let alone between IHVs, and for GPUs in general it is utterly meaningless.
VRS does not require developer support; it's determined at the hardware level with the DX12 API.
VRS allows developers to selectively reduce the shading rate in areas of the frame where it won’t affect visual quality, letting them gain extra performance in their games
Our VRS API lets developers set the shading rate in 3 different ways:
- Per draw
- Within a draw by using a screenspace image
- Or within a draw, per primitive
No, this is completely wrong
https://hexus.net/tech/news/graphics/128588-microsoft-adds-variable-rate-shading-support-directx-12/
https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/
The developer has to actively select which draw call, or even which triangles, will be shaded at which rate.
No time to read those links, but wondering who is wrong? It is always fun to see D.P. get owned. Not so much with humbug, he gets owned quite often
Too many flops from the GPU wing over recent years means I no longer buy the pre-hype; I just wait for the actual product before weighing it up. If they can produce anything like the CPU side then it would be awesome for all consumers, having a concrete choice on performance whichever side you choose.
Same thing could easily happen in the gpu space.
I always believed it could. It's a long shot though, as the budget gap between them is rather large. Also, Nvidia will have seen how Ryzen has wobbled Intel over the past few years, so it's not like they have an excuse.
Yep, they 'wobbled' Intel so much it caused them to have a record Q4 in 2019.
From sectors AMD does not compete in, or is only just beginning to.
Their desktop business might as well be in ruins by the end of this year.
Also, looking at the details of that DOE supercomputer contract that was announced today with Zen 4 and likely the 5nm shrink of Arcturus, both Intel and NVIDIA might be in for a barren stretch in the very lucrative exascale space. It appears Zen 4 will be an even bigger jump than Zen 2 or Zen 3, given what they're doing with Infinity Fabric.