It’s not idiotic at all, it’s all written by TSMC themselves. They clearly say N4 is part of the 5nm family.
Because it is.... what it's not is N5.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
No, it's not N5, but I never said it was; I said it was made on TSMC 5nm, and it is.
The clue is in the name, it's really simple.
Again, it's an enhanced 5nm process, not true 4nm, the same as Intel optimizing 14nm over a number of years.
In any case, the only reason Nvidia are able to cut the bus width down is the quite substantial increase in memory IC speed over the last two years; these are performance increases Nvidia are not passing on to us, instead they use them to increase their margins.
The reason Nvidia is able to cut the bus is that they used a large cache to compensate, even though it doesn't fully compensate. They got away with it to an extent, though, because AMD's performance sucked this generation.
AMD still offer a 256-bit or higher bus at $500+, as they did in previous generations.
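The bandwidth arithmetic behind this exchange is simple: peak memory bandwidth is bus width times per-pin data rate. Below is a minimal Python sketch of it, using illustrative bus widths and GDDR data rates rather than figures taken from these posts or from any specific card.

```python
# Rough memory-bandwidth arithmetic behind the bus-width argument.
# Bandwidth (GB/s) = (bus width in bits / 8) * effective data rate in Gbps.
# The bus widths and data rates below are illustrative examples only.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

configs = {
    "256-bit @ 16 Gbps (older GDDR6)": (256, 16.0),
    "192-bit @ 21 Gbps (newer GDDR6X)": (192, 21.0),
    "128-bit @ 21 Gbps (newer GDDR6X)": (128, 21.0),
}

for name, (width, rate) in configs.items():
    print(f"{name}: {bandwidth_gbs(width, rate):.0f} GB/s")
```

On those example numbers, faster memory ICs let a 192-bit bus roughly match an older 256-bit design, but a 128-bit bus still falls well short, which is the gap the large cache is meant to cover.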
It's an enhanced version of TSMC 5nm called N4, but it's not 4nm, kind of like TSMC 7nm having N7, N7P, N7+ etc.
Yeah, but it still has benefits, as for similar performance AMD use more power. Had AMD paid for this node they may have looked similar to Nvidia on power consumption. Googled it and it's an 11% performance boost at a 22% power saving. Probs too steep for AMD in the graphics world, but it shows Nvidia had a hand up by going in this direction.
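For what the quoted node figures are worth, here is a small sketch of the perf-per-watt arithmetic, treating the 11% and 22% numbers from the post above as two separate operating points (iso-power and iso-performance), which is how foundry comparisons are normally framed; the variable names and structure are illustrative.

```python
# Back-of-envelope perf-per-watt from the figures quoted above, treating
# "+11% performance" and "-22% power" as two separate operating points
# rather than one simultaneous gain.

baseline_perf, baseline_power = 1.00, 1.00  # N5 reference point

operating_points = {
    "N4 at iso-power (+11% perf)": (1.11, 1.00),
    "N4 at iso-performance (-22% power)": (1.00, 0.78),
}

baseline_ppw = baseline_perf / baseline_power
for label, (perf, power) in operating_points.items():
    gain = (perf / power) / baseline_ppw - 1
    print(f"{label}: ~{gain:.0%} better perf/W than N5")
```

On these assumptions the node alone buys somewhere between roughly 11% and 28% better perf/W, depending on where the product is tuned.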
I don't think it does.. AMD left about 15% on the table with Navi 32; despite this they are still faster than the 4060Ti and 4070, and once you unlock that tabled performance the 7700XT is as fast as the 4070 and the 7800XT as fast as the 4070Ti. They are also very much cheaper, and the 256-bit bus helps the 7800XT at higher resolutions, as you would expect.
AMD didn't leave 15% on the table; they obviously had QC issues and had to scale back performance for stability. Some cards will OC, some will not. If they could have got 15% more on each card then they would have done so and charged more for it.
Damn right they would. AMD are not your friend! Their only friend is money.
Rumour is Nvidia paid a lot of money to have N4 all to themselves.
Most likely they did. I thought Apple were always first to the new nodes though. Either way Nvidia would have been paying a lot, but they can afford to with their margins being so high.
Apple reserved 90% of 3nm already in 2020.... TSMC had some production delays on N3, but yeah, Apple thought they would be knocking out 3nm chips in 2021 or 2022.
What I find funny is that it must be cheaper to cut the bus width and add extra cache, otherwise Nvidia wouldn't do it. Yet we've been told that cache is expensive and takes up a lot of die space.
Die size is at a premium, so cache is expensive in terms of die cost. It's simple to design, though, compared to just about anything else.
The problem with cache is that it's only useful until it fills up; then the game relies on VRAM with its slow bandwidth. So a wide bus with high-bandwidth VRAM will be better than cache overall, until such time as we can have a very large amount of cache on the GPU (like 1GB). Some of the early RDNA3 rumours said the 7900XTX would have 512MB to 1024MB of cache, but it ended up as 96MB.
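A rough way to see the "useful until it fills up" point is to model the cache as absorbing a fraction of memory traffic, so VRAM only has to serve the misses. A minimal Python sketch follows; the hit rates and bandwidth figures are made-up illustrative values, not measurements of any particular GPU.

```python
# Toy model of "a big cache standing in for bus width": the cache absorbs a
# fraction of memory traffic (the hit rate), so VRAM only serves the misses.
# Deliverable bandwidth is then roughly vram_bw / (1 - hit_rate), capped by
# the cache's own bandwidth. All figures are illustrative only.

def amplified_bandwidth(hit_rate: float, cache_bw: float, vram_bw: float) -> float:
    """Approximate total bandwidth the memory system can feed the GPU (GB/s)."""
    if hit_rate >= 1.0:
        return cache_bw
    return min(cache_bw, vram_bw / (1.0 - hit_rate))

CACHE_BW = 2000.0        # on-die cache bandwidth, GB/s (illustrative)
NARROW_BUS_VRAM = 336.0  # e.g. 128-bit bus with fast GDDR (illustrative)
WIDE_BUS_VRAM = 512.0    # e.g. 256-bit bus (illustrative)

# Hit rate tends to fall as resolution / working set grows past the cache size.
for hit_rate in (0.6, 0.4, 0.2):
    narrow = amplified_bandwidth(hit_rate, CACHE_BW, NARROW_BUS_VRAM)
    wide = amplified_bandwidth(hit_rate, CACHE_BW, WIDE_BUS_VRAM)
    print(f"hit rate {hit_rate:.0%}: narrow bus ~{narrow:.0f} GB/s, wide bus ~{wide:.0f} GB/s")
```

On those made-up numbers the narrow-bus card looks fine while the hit rate is high, but as the working set spills past the cache the hit rate drops and effective bandwidth converges back toward the raw 336 GB/s of the narrow bus, while the wider bus keeps its cushion.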
I would have thought AMD would have had first refusal, being TSMC's second largest customer.