There are limits to how much you can compare to console - some IO and/or hardware features that aren't needed on console will have been removed but will be present on PC silicon, some features are done in software on the console which will be done in hardware on the PC, etc.
EDIT: Also the console architecture is a mix and match of technologies focussed on its task, and not the same as RDNA 2 on desktop.
That comes out in the wash when you are using transistor counts and density to make estimates.
My point is the estimated die sizes make zero sense to me for their alleged performance targets. N21 at 500mm^2 with 80CUs + a 256 bit bus + cache makes no sense, because what is the point? If you have to make it 500mm^2 anyway you might as well give it a 512 bit bus rather than more cache. If your die size is closer to 330mm^2 without cache, though, and you want 16GB on the card, then 512 bit and 384 bit are out (a ~330mm^2 die doesn't have the edge area for a 512 bit bus, and 384 bit gives you 12GB or 24GB, not 16GB), so your only options are HBM or 256 bit + bandwidth mitigation. If bandwidth mitigation in the form of Infinity Cache works well, does not balloon the die above 400mm^2 (Hawaii was 438mm^2 and had a 512 bit bus) and gives you a foundation for MCM GPUs, then you take that route. But it means the leaked die sizes are totally wrong, which is probable because Series X was leaked at 405mm^2 and ended up being 360mm^2.
The other problem with the leaks is that the jump from an 80CU part to a 40CU part is massive. We know it will clock well thanks to the PS5 clocks, so perhaps 40CUs + a 192 bit bus + cache + huge clocks can hang with the 3070/2080Ti and save some die space, but that seems really unlikely to be honest.
To me the following makes the most sense but this is entirely made up by me.
N21 80CU, 256bit, 128MB Infinity Cache and around 380mm^2. This would be 24B transistors and 128MB cache would be around 7B transistors leaving 17B for the rest of the die. Seems doable. 3080/3090 performance.
N22 56CU, 192bit, 96MB Infinity Cache and around 284mm^2. 3070/2080Ti performance.
N23 40CU, 128bit, 64MB Infinity Cache and around 190mm^2. 5700XT/3060 performance.
Those cache numbers are totally made up; if less works, then reduce them and save the die area.
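As a sanity check on the transistor maths above, here's a quick back-of-envelope script. The density and SRAM figures are my own assumptions, not official numbers: roughly 62.5M transistors/mm^2 effective density for 7nm GPU silicon, 6T SRAM cells, and ~10% overhead for tags and routing.

```python
# Back-of-envelope check of the transistor budgets above.
# All constants are assumptions for illustration, not official figures.

MTR_PER_MM2 = 62.5e6      # assumed effective 7nm density, transistors/mm^2
SRAM_T_PER_BIT = 6        # 6T SRAM cell
CACHE_OVERHEAD = 1.10     # tags, sense amps, routing (assumed ~10%)

def cache_transistors(megabytes):
    """Approximate transistor count of an SRAM cache of the given size."""
    bits = megabytes * 1024 * 1024 * 8
    return bits * SRAM_T_PER_BIT * CACHE_OVERHEAD

def budget(die_mm2, cache_mb):
    """Return (total, cache, remaining-for-logic) transistor counts."""
    total = die_mm2 * MTR_PER_MM2
    cache = cache_transistors(cache_mb)
    return total, cache, total - cache

for name, mm2, mb in [("N21", 380, 128), ("N22", 284, 96), ("N23", 190, 64)]:
    total, cache, rest = budget(mm2, mb)
    print(f"{name}: {total/1e9:.1f}B total, {cache/1e9:.1f}B cache, "
          f"{rest/1e9:.1f}B for the rest")
```

For N21 this lands on roughly 24B total and roughly 7B for the 128MB cache, which is where the "17B for the rest of the die" figure comes from.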
Going bigger than those die sizes for those performance targets just does not seem workable to me when AMD are trying to get into the OEM space and need to show they can keep up with demand. With Zen3, Cezanne and RDNA2 as well as the consoles using up capacity, making RDNA2 lean and mean is the way to go.
If the RDNA2 die sizes do end up larger then AMD is going to make the bare minimum it can get away with, because Zen3 and Cezanne will be so much more profitable. A 500mm^2 RDNA2 GPU that is a bit faster than the 3080 is going to have a ceiling price of $700. If AMD have to use 500mm^2 of 7nm capacity, would they rather make an N21 die or 7 Zen3 dies? 7 Zen3 dies is almost a full 64c EPYC, which will sell for >$4000. When you factor in the board costs for the GPU as well, manufacturing one 64c EPYC part probably works out to about the same cost as one N21 GPU. Would you rather sell the $4,000+ EPYC or the $700 RDNA2 part for the same cost of goods sold?
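The opportunity cost can be put into rough numbers. This sketch uses the figures from the post plus one assumption of mine: a Zen3 CCD at ~71mm^2, so that 7 of them fit in 500mm^2 as stated (the EPYC IO die is ignored since it is not made on 7nm, and yield/packaging differences are ignored too).

```python
# Revenue per mm^2 of 7nm capacity, using the post's price points.
# CCD_MM2 is an assumed chiplet area, not an official figure.

N21_MM2 = 500           # the leaked N21 die size being questioned
CCD_MM2 = 71            # assumed Zen3 chiplet area (so ~7 fit in 500mm^2)
GPU_PRICE = 700         # argued ceiling price for the RDNA2 card
EPYC_PRICE = 4000       # rough 64c EPYC selling price from the post

gpu_per_mm2 = GPU_PRICE / N21_MM2
epyc_per_mm2 = EPYC_PRICE / (8 * CCD_MM2)   # 8 CCDs in a 64-core EPYC

print(f"N21:  ${gpu_per_mm2:.2f} per mm^2 of 7nm silicon")
print(f"EPYC: ${epyc_per_mm2:.2f} per mm^2 of 7nm silicon")
print(f"EPYC earns {epyc_per_mm2 / gpu_per_mm2:.1f}x more per mm^2")
```

Under these assumptions the EPYC earns roughly 5x more per mm^2 of 7nm wafer, which is the whole point of the paragraph above.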
Making stuff up. I suggest you go back to your source. I've seen this lie inflated to a 2080 Ti.
A 5700 XT is 2560 shaders with a default real-world clock in the 1800 to 1900MHz range.
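Those figures translate into peak FP32 throughput with the standard shaders x 2 ops per clock (FMA) x clock formula; the 1.85GHz used here is just the midpoint of the quoted range, not a measured value.

```python
# Peak FP32 throughput from shader count and clock.
def tflops(shaders, ghz):
    # 2 ops per shader per clock (fused multiply-add)
    return shaders * 2 * ghz / 1000

# 5700 XT: 2560 shaders at ~1.85GHz (midpoint of the quoted range)
print(f"5700 XT: {tflops(2560, 1.85):.1f} TFLOPS peak FP32")
```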
That was a 4 week port to show off ray tracing. It would not have had any optimisation work done by the devs, and I doubt the driver teams on the MS/AMD side had done any work on it at that point either. This is a lower bound on performance, and the 2080S is not even 10% faster than the 2080 anyway. 2080Ti is a step too far, but in titles that are optimised equally I expect Series X to be around 2080S performance.