You're missing the point. People are speculating about this not because they're in denial about Vega, but because Vega doesn't make any logical sense. Doubling the transistor count for a marginal increase in speed is unheard of.
It doesn't add up at all, and the reason people are referencing Ryzen is that AMD did something similar by pretending the IPC was lower than it really was. Intel were caught off guard by Ryzen, so the point is that it's entirely possible AMD are trying to catch nVidia off guard too, as that's really one of the only ways they're going to start clawing back market share.
But as I said, the success of Ryzen isn't indicative of Vega; it's just a suggestion that AMD might be up to something, given how the numbers don't add up.
I don't think there are twice the transistors for starters, maybe something like 70% more. I do agree that that is perhaps the most difficult part to explain away, but we don't know what else is in the core. A lot of that transistor budget may be related to HPC and compute, e.g. how much did the HBCC take up?
The fact of the matter is, AMD hasn't increased the stream core count from Fiji at all, and is relying on clock speed improvements. If they failed to hit the desired clock speeds due to process issues or other variables, then it simply won't live up to the theoretical performance.
Vega at its typical clock speed has less compute and less bandwidth than the 1080 Ti. AMD cards have traditionally needed significantly more of both to be competitive, so on face value we really would expect Vega to be only marginally above the 1080. AMD should have achieved some efficiency improvements, but remember Vega is just the 5th iteration of GCN; it is not comparable with a ground-up design like Ryzen. Some of the fundamental problems that experts believed limit Fiji are still present to some extent: there are still only 4 shader engines, each trying to feed 1024 GCN cores. Achieving a balanced load and decent geometry throughput may still be a challenge.
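To put rough numbers on that compute comparison, here's a back-of-the-envelope sketch in Python. The shader counts are the published ones for Vega 64 and the 1080 Ti; the sustained clocks are my own illustrative guesses, not measured figures:

```python
# Theoretical FP32 throughput: shaders * 2 ops per cycle (one FMA) * clock.
# Shader counts are published specs; clocks below are illustrative guesses
# at sustained speeds, not measured values.
def tflops(shaders, clock_ghz):
    """Theoretical single-precision TFLOPS (2 ops per FMA per cycle)."""
    return shaders * 2 * clock_ghz / 1000.0

vega_typical = tflops(4096, 1.4)   # Vega 64 at an assumed ~1.4 GHz sustained
gtx_1080ti   = tflops(3584, 1.7)   # 1080 Ti at an assumed ~1.7 GHz boost

print(f"Vega 64 ~{vega_typical:.1f} TFLOPS vs 1080 Ti ~{gtx_1080ti:.1f} TFLOPS")
# With these assumed clocks the 1080 Ti comes out ahead despite fewer cores.
```

The point of the sketch is just that clock speed dominates: with 512 fewer cores, a modest clock advantage is enough to put Nvidia on top in raw theoretical compute.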
Vega is supposed to have a tile-based rendering system for hidden surface removal. But this is very complex and requires very advanced drivers. AMD may have made some design mistakes, or done an Intel Itanium and designed hardware functionality that is incredibly difficult to program for. There could be very complex interactions between the TBR functionality and AMD's task scheduling. Nvidia moved some of the scheduling hardware into the drivers, both to save a lot of hardware complexity and to improve flexibility with different workloads, which can increase performance. It may be that fixed hardware scheduling just doesn't lead to the efficiency gains seen from a partially software-based optimization of the workload.
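For anyone unfamiliar with what tile-based binning actually buys you, here's a toy sketch of the first stage: triangles get sorted into screen-space tiles so that shading and hidden-surface rejection can happen locally per tile. The names and the "triangle as a bounding box" simplification are mine for illustration, not AMD's actual rasterizer design:

```python
# Toy sketch of tile-based binning: triangles are first sorted into
# screen-space tiles, then each tile is processed locally so hidden
# surfaces can be rejected before expensive per-pixel work.
# This is a simplification for illustration, not AMD's real hardware.

TILE = 32  # tile edge in pixels (arbitrary choice for the sketch)

def bin_triangles(tris, screen_w, screen_h):
    """Map each triangle (simplified to an axis-aligned bounding box
    (x0, y0, x1, y1) in pixels) to every tile it overlaps."""
    bins = {}
    for tri in tris:
        x0, y0, x1, y1 = tri
        for ty in range(max(0, y0 // TILE), min(screen_h // TILE, y1 // TILE + 1)):
            for tx in range(max(0, x0 // TILE), min(screen_w // TILE, x1 // TILE + 1)):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

# Two triangles on a 64x64 screen: one inside the top-left tile,
# one straddling the boundary into the neighbouring tile.
bins = bin_triangles([(0, 0, 10, 10), (20, 0, 40, 10)], 64, 64)
print(sorted(bins))  # only the tiles that actually need shading work
```

The hard part, and the reason drivers matter so much here, is everything this sketch leaves out: deciding when binning pays off, keeping tile state on-chip, and interleaving it with the rest of the scheduling pipeline.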
In a similar vein, the GCN architecture has never had strong geometry performance, and in fact things have got relatively worse, since AMD have kept the 4 shader engines and simply deepened the stack of GCN cores, making the GPU less balanced. With Vega they haven't increased the shader engines to 6 or 8 with a geometry engine in each; they kept to the 4. AMD have talked about a new geometry pipeline, but details are vague, and most people believe this will require explicit programming within the game engine, or at best very careful driver work.
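You can see the imbalance with simple arithmetic. Assuming roughly one primitive per shader engine per clock (the commonly cited GCN figure, so treat it as an assumption), deepening the shader stack while keeping 4 engines raises the ratio of compute to geometry throughput:

```python
# Rough balance check: FP32 ops available per primitive the front end
# can feed, per clock. Assumes ~1 primitive per shader engine per clock,
# a commonly cited GCN figure, used here as an assumption.
def flops_per_triangle(shaders, engines, prims_per_clock=1):
    """FP32 ops (2 per core per cycle) per front-end primitive per cycle."""
    return shaders * 2 / (engines * prims_per_clock)

print(flops_per_triangle(2048, 4))  # a smaller GCN part  -> 1024.0
print(flops_per_triangle(4096, 4))  # Fiji/Vega-sized part -> 2048.0
```

Doubling the cores without adding shader engines doubles that ratio, which is exactly the "deeper stack, same front end" problem: in geometry-heavy scenes the 4 engines can become the bottleneck while the cores sit idle.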
AMD have shown repeatedly in recent years that they prefer hardware-based solutions and brute performance with a lighter driver stack. Nvidia have flip-flopped between these, but since Fermi have been very much along the line of simplifying the hardware and having very complex drivers. The simpler hardware is less likely to be faulty, can have additional cores or be made smaller, and simply runs faster; Nvidia GPUs have recently had big clock speed advantages. NV have always prided themselves on their software, it is what they excel at. That is why CUDA is the de facto industry standard for compute, and Nvidia hardware with less theoretical compute can often beat (or at least catch up to) AMD's brute-force approach.