Associate · Joined: 27 Apr 2007 · Posts: 966
So seemingly Samsung have no recent history of producing high wattage and high performance silicon for CPUs or GPUs and the GTX 1050 doesn't count in this context.
Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.
So seemingly Samsung have no recent history of producing high wattage and high performance silicon for CPUs or GPUs
I don't believe for one minute that Nvidia could just "drop" a 7nm chip at a moment's notice; if they could, they surely would have already.
If they weren't actually produced by Samsung then that's a no then.
High performance might be pushing it, but Vega 56/64 are produced on what is originally Samsung's 14LPP node - I wouldn't write off their capabilities at producing complex, high performance GPUs.
So seemingly Samsung have no recent history of producing high wattage and high performance silicon for CPUs or GPUs and the GTX 1050 doesn't count in this context.
If they weren't actually produced by Samsung then that's a no then.
I'm not writing them off, but I'm curious whether they have actually fabricated a high-performance, high-wattage part yet.
I'm out of the loop.
I correctly ignored what you've said, since you don't even mention that Nvidia will probably use Samsung's 7nm process and not TSMC's.
Meaning you ignorantly ignored it. AFAIK Samsung 7nm EUV (LPP) uses fewer EUV layers than TSMC 7nm+: the latter uses EUV on 4 layers, while the former applies it to under 20% of a total layer count below 20, i.e. at most 3. Samsung's node also has larger pitches and is generally considered a lower-power rather than higher-performance solution compared with TSMC's 7nm+. Everything else being equal, Samsung 7nm LPP (EUV) will not perform as well as TSMC 7nm+ (EUV). But of course either could botch it, and Samsung will have more wafers sooner; albeit, as with TSMC, those wafers will remain unavailable to anyone but the mobile chip makers and their colossal order books for a long time after launch.
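To sanity-check the layer arithmetic above (these are the poster's rough figures, not official foundry data), under 20% EUV coverage of a total layer count below 20 does indeed come out below TSMC's 4 EUV layers:

```python
# Sanity check on the EUV layer claim; inputs are the poster's
# rough figures, not official Samsung/TSMC process data.
import math

total_layers_max = 19     # "total number of which is below 20"
euv_fraction_max = 0.20   # "<20% of layers"
tsmc_7nm_plus_euv = 4     # "the latter uses 4"

samsung_euv_max = math.floor(total_layers_max * euv_fraction_max)
print(samsung_euv_max)    # at most 3 EUV layers
assert samsung_euv_max < tsmc_7nm_plus_euv
```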
Also, unless AMD have sewn up so much of the scraps left over from the mobile industry that TSMC are not tenable for NVIDIA's next major round of releases, I highly doubt they'll go Samsung for the majority. Not only would it potentially be seen as a risk by investors, and NVIDIA's share price is now under significant pressure, but Huang is incredibly close to the TSMC brass. I think they would have to be pushed, rather than jumping.
Furthermore, you're now moving the goalposts from DUV to EUV.
Also, unless AMD have sewn up so much of the scraps left over from the mobile industry that TSMC are not tenable for NVIDIA's next major round of releases, I highly doubt they'll go Samsung for the majority. Not only would it potentially be seen as a risk by investors, and NVIDIA's share price is now under significant pressure, but Huang is incredibly close to the TSMC brass. I think they would have to be pushed, rather than jumping.
Ice Lake will have PCI Express 4.0 support to compete with Ryzen 3000 series
But at least all AMD has to do is change the I/O die
AMD and Intel should have skipped PCI Express 4.0 and gone straight to PCI Express 5.0 instead.
But at least all AMD has to do is change the I/O die
Aren't you forgetting the PCIe controller on the CPU which is used for a GPU and a primary SSD?
But at least all AMD has to do is change the I/O die
Aren't you forgetting the PCIe controller on the CPU which is used for a GPU and a primary SSD?
No, it's on the I/O die in the CPU package.
Aren't you forgetting the PCIe controller on the CPU which is used for a GPU and a primary SSD?
lolwhut? You are loving the hyperbole these days, aren't you.
And then 20 more years without any change. No, thanks. Better to stay with PCIe 4 for 5 years, and then move to PCIe 5, if devices that support it ever arrive at all.
lolwhut? You are loving the hyperbole these days, aren't you.
Of course!
No, it's on the I/O die in the CPU package.
Of course!
So they still need to significantly crank up the bandwidth between the core chiplet(s) and the I/O die.
Wonder what headroom they have with the current design?