Why was its Achilles heel lower resolutions then? Also the AIO wasn't "quiet"; plenty of posts on here would beg to differ...
Boohoo, some people got loud/broken samples; that doesn't change the basic facts. Look up any review: the Fury X was significantly quieter than every other reference card available, simple as that. If it had required an AIO just to be cooled adequately, it would have been running at 85-90C even with the AIO, which would imply it would run hotter still on air. Instead it runs around 50C under load with the AIO, and the non-X Fury runs at 65-70C on air, which doesn't denote any need for watercooling/AIO. The AIO was an option AMD chose to put on the Fury X, nothing more, nothing less.
https://www.techpowerup.com/reviews/AMD/R9_Fury_X/30.html
32 dBA versus 42 dBA for the 980 Ti. A 10 dBA difference equates to just over twice the noise, IIRC: ten times the sound power, but a bit over two times the perceived loudness.
http://www.anandtech.com/show/9390/the-amd-radeon-r9-fury-x-review/25
43 dBA vs 51 dBA for the 980 Ti (keep in mind the different noise testing equipment, different rooms, and different distances from the card between sites; the key is consistency between tests on the same site and the difference between the cards in any given testing situation). Saying the card is loud is simply ignoring all the evidence. Some people got dodgy pumps due to a production issue with the MSI (IIRC) pumps, but AFAIK that was fixed and replacements were available for those who wanted them. The card itself is significantly quieter while running 20-30C cooler than the 980 Ti, depending on the review.
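To make the decibel arithmetic concrete, here's a minimal sketch (assuming the usual 10*log10 power relation and the rough "loudness doubles per 10 dB" rule of thumb mentioned above), plugging in the review figures from both links:

```python
# Rough decibel arithmetic for the review figures above.
# Power ratio for a difference of d dB: 10^(d/10).
# Perceived loudness (rule of thumb): roughly doubles per 10 dB, i.e. 2^(d/10).

def noise_ratios(quiet_dba, loud_dba):
    d = loud_dba - quiet_dba
    power_ratio = 10 ** (d / 10)      # ratio of sound power
    loudness_ratio = 2 ** (d / 10)    # ratio of perceived loudness
    return d, power_ratio, loudness_ratio

# TechPowerUp: Fury X 32 dBA vs 980 Ti 42 dBA
print(noise_ratios(32, 42))   # 10 dB -> ~10x the power, ~2x the perceived loudness
# AnandTech: Fury X 43 dBA vs 980 Ti 51 dBA
print(noise_ratios(43, 51))   # 8 dB -> ~6.3x the power, ~1.7x the perceived loudness
```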
As for what its Achilles heel was... it's almost as if I already pointed it out in the post you replied to.
What makes you think memory gets slower if you use less of it? It's a nonsensical, illogical idea that 512GB/s somehow becomes slower than a lower-bandwidth setup just because you move less data per frame.
If you're accessing 8GB each time the GPU draws a frame (the GPU accesses memory repeatedly for each frame) at 4K, then at 512GB/s you can move that data 64 times a second; if you're accessing 4GB per frame, you can do it 128 times a second. But some parts of the GPU that can keep up at 64 frames per second can't at 128. Like, you know... always. There is ALWAYS a limit somewhere in the GPU, every generation. Too little bandwidth can be a limit; too much really can't be a "limit", it's just inefficient to have more than you need. So if the front end can only feed the back end 64 times a second, but the back end could do its work 128 times a second, the front end is the limit. If it's the other way around, the front end can feed the back end but the back end can't keep up, and that is the limit.
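Here's a minimal sketch of that arithmetic (the 8GB/4GB-per-frame figures are just the illustrative numbers from the paragraph above, not measured values):

```python
# Bandwidth ceiling: if the GPU has to move N GB of memory traffic per frame,
# the memory system alone caps the frame rate at bandwidth / N.
def bandwidth_fps_cap(bandwidth_gbs, gb_moved_per_frame):
    return bandwidth_gbs / gb_moved_per_frame

print(bandwidth_fps_cap(512, 8))   # 64.0  -> ceiling if 8GB is moved per frame (the 4K example)
print(bandwidth_fps_cap(512, 4))   # 128.0 -> ceiling if only 4GB is moved per frame

# The actual frame rate is set by whichever stage is slowest, not by memory alone.
def frame_rate(front_end_fps, back_end_fps, memory_fps):
    return min(front_end_fps, back_end_fps, memory_fps)

# Front end can only feed 64 frames/s, back end could manage 128, memory could manage 128:
# the front end is the limit, not the 512GB/s of bandwidth.
print(frame_rate(64, 128, bandwidth_fps_cap(512, 4)))   # 64
```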
The Fury X is shader heavy, and it seems the front end doesn't have enough throughput to keep those shaders fed at lower resolutions; it's not filling the shaders up. As you raise the resolution, the front end does what it did before and the back end keeps up fine.
Only a couple of people keep blaming the HBM, when in every other generation it becomes fairly obvious what the limitation is.
For instance, in Pascal Titan X reviews I see it might only be 10-15% ahead at 1080p but typically 25% ahead at 4K, in games where, at lower settings, the CPU can easily push out higher frame rates at 1080p. So the Titan X shows itself to be fairly significantly limited at 1080p compared to 4K... which is obviously down to the HBM.
I've seen one person on here who took it upon himself to slate HBM everywhere he could, for everything under the sun, yet ignore that there is a limitation within every past Nvidia and AMD card I can think of. Every single time, without fail, that limitation is too little of something: too little bandwidth, too little memory, too few ROPs, too few shaders, too few TMUs. Never once too much of anything, particularly when providing "too much" bandwidth (more than the Fury X could use) saved power and die size and gave AMD valuable experience in producing interposer/HBM chips before they NEEDED HBM in future chips.