You should look at the release schedule for Intel’s 22nm process where they led with quad cores for both desktop and mobile and followed up with dual cores later:
https://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)#Desktop_processors
Then look at Nvidia at 16/14nm and see the order in which they released the 10 series cards:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_10_series
And what will the first AMD GPU on 7nm be?
A massive chip for AI/HPC.
Seriously, first of all you need to realise that AMD aren't first to 7nm and Nvidia weren't first to TSMC's 16nm, not by a mile. Both process nodes were based on the 20nm metal layers that had already been in use for a couple of years, so 16nm was a pretty mature process by the time dedicated GPUs were being made on it. The first chips made on 16/14/12/10/7nm were all sub-100mm^2 ARM-based chips. On top of that, neither the 1080 nor 7nm Vega is a giant die.
Then you need to actually understand what Intel did with 22nm. They launched a pretty small 160mm^2 quad-core die first, and plenty of salvaged dual-core chips were released at the same time. Intel only launched a dedicated dual-core die once yields got higher. Why? Because by then there were fewer salvaged parts coming off the quad-core die, so every salvaged dual core was losing money (most of those dies could have been sold as quad cores), and a dedicated dual-core die on a high-yield process has almost no failures, so there's no real need for salvaged products below it. If you start on a lower-yield process with a dual-core die where maybe 30-50% of dies only work as single cores, you have a rubbish product stack.

On 14nm and 10nm, starting with a 160mm^2 quad-core die would simply have had too low a yield: too many salvaged dual cores and not enough quad cores to fill demand. That's why, on both nodes, both of which had delays and major issues, the first chips were small dual cores. On 14nm Intel felt the need to drop to a dual core and did okay on yields; on 10nm they stayed with a dual core (anything less simply has no value any more) and the yields were so bad they delayed the process further by a minimum of 1.5 years.
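To put rough numbers on that yield argument, here's a minimal sketch using the classic Poisson die-yield model, Y = exp(-D*A). The defect densities and the dual-core die size are illustrative assumptions, not Intel's actual figures, but they show why a ~160mm^2 quad-core die is a much riskier lead product on an immature node than a small dual-core die:

```python
import math

# Poisson die-yield model: fraction of dies with zero killer defects.
# Defect densities and die areas are illustrative guesses, not foundry data.
def die_yield(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    return math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)

QUAD_CORE_MM2 = 160.0  # roughly the 22nm quad-core die discussed above
DUAL_CORE_MM2 = 80.0   # assumed size of a dedicated dual-core die

for d in (0.1, 0.3, 0.6):  # mature -> immature node, defects per cm^2
    print(f"D={d:.1f}/cm^2: quad-core {die_yield(d, QUAD_CORE_MM2):.0%} fully good, "
          f"dual-core {die_yield(d, DUAL_CORE_MM2):.0%} fully good")
```

At the immature end of that range well over half the quad-core dies have at least one defect, which is exactly the flood of salvage parts (or outright scrap) described above.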
Increasingly complex processes are making production harder, and this is a trend across every company and every foundry. The inclusion of Nvidia and AMD is ridiculous in the first place, because neither produces GPUs on its own node, and on the nodes they do use, their GPUs aren't anywhere near the first chips on the process... but let's tear that point apart as well.
On 110nm Nvidia had a 300mm^2 die; their first 90nm part was 200mm^2. The 8800 GTX was their second 90nm part, six or so months later, at 484mm^2. Their first 65nm parts were 324mm^2, then around a year later they did maybe their first 'huge' die, 576mm^2, with the GTX 280; it had yield issues, though not terrible ones.
Their first 40nm chip was not Fermi, but a 310 in November '09 at a massive... 57mm^2. The second chip was a 340 (plus 3 salvaged parts) in February 2010 at 144mm^2, and then with Fermi in March they finally launched a 529mm^2 chip. Of course Fermi was meant to be first, and it had been 'launched' twice before without success due to horrendous yields and power issues. When it did launch it was hot, late, expensive and barely faster than a competing AMD part that was already easy to produce, had been out for six months, and was only 334mm^2. See the difference? Way smaller die, great yields, easy to produce, versus Nvidia's massive die and a complete disaster.
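The same back-of-the-envelope yield model makes the 334mm^2 vs 529mm^2 contrast concrete. The defect density below is a made-up number for an immature node, not TSMC's actual 40nm figure, so treat the output as a sketch of the shape of the problem rather than real production data:

```python
import math

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2  # 300mm wafer, edge losses ignored

# Good (defect-free) dies per wafer under the Poisson yield model.
def good_dies_per_wafer(defect_density_per_cm2: float, die_area_mm2: float) -> float:
    y = math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)
    return (WAFER_AREA_MM2 / die_area_mm2) * y

D = 0.5  # assumed defects per cm^2 on an immature node (illustrative only)
for area in (334, 529):
    print(f"{area}mm^2 die: ~{good_dies_per_wafer(D, area):.0f} fully working dies per wafer")
```

Under those assumptions the 529mm^2 die gives you roughly a quarter as many fully working chips per wafer, before you even get to the power and clocking problems Fermi had on top.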
After the large-die disaster, the first two dies at 28nm were 118 and 78mm^2, and their main part was the 294mm^2 GTX 680. A full year later Nvidia launched the 561mm^2 die, and it launched without all shaders/TMUs enabled; it was only with a new stepping nine months later that fully working dies shipped.
So as processes got more complex, Nvidia couldn't make their big dies early on a new process, could they, despite your insistence. The 1080 follows that pattern: it's closer to half the size of their biggest 28nm part, and the new Titan is ~20% smaller than the old Titan and again only arrived in a more consumer-friendly version long after the 1080 did. That's on top of the fact that the 1080 and Polaris came after just about the longest gap we've ever seen between a node going into production and GPUs being made on it.
So yeah, the industry trend, an increasing inability to make bigger chips early on a new node as node complexity rises, is easily provable; it's visible in things that have actually happened. And let's reiterate: Intel already tried, and failed, to launch a dual-core chip first on 10nm.
Intel have already clearly stated that their 10nm server chips are coming well beyond the timescale they've given for their first 10nm chips, so your whole 70 vs 700mm^2 argument is completely bogus: nobody has suggested they are doing that.
Great point... except that it's a point I never made. Go read my first post: it wasn't 'arguing' with or rebutting anyone, I was giving my impression of when Intel would first be able to make a die large enough to compete with a 64-core EPYC, nothing more or less. You replied calling it wild speculation. You can't just decide to pretend I was arguing some point with someone and so declare my opinion bogus because you want to; that isn't how, well, anything works.
I posted my opinion, you attacked it, and I gave logical reasons to back it up.