https://wccftech.com/meet-cerebras-...ore-than-56-times-the-size-of-an-nvidia-v100/
Nvidia's CEO said TSMC's 300mm wafers limit chip size to around 850mm², and that the 815mm² Volta V100 on TSMC's 12nm process was therefore the world's largest GPU die and couldn't be surpassed.
But Jensen Huang got it wrong: Volta has now lost the world's-largest-chip crown to Cerebras Systems' Wafer Scale Engine (WSE), a 46,225mm² AI chip with 1.2 trillion transistors, 400,000 cores, and 18GB of on-chip SRAM delivering 9 PB/s memory bandwidth (yes, 9 petabytes per second) and 100 Pb/s fabric bandwidth (yes, 100 petabits per second). The chip is manufactured on TSMC's 16nm process and consumes 15 kW.
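A quick back-of-the-envelope check of the headline "more than 56 times" figure, taking the die areas quoted above at face value:

```python
# Rough area comparison using the die sizes reported above
# (assumed accurate as quoted in the article).
wse_area_mm2 = 46225   # Cerebras Wafer Scale Engine
v100_area_mm2 = 815    # Nvidia Volta V100

ratio = wse_area_mm2 / v100_area_mm2
print(f"WSE is ~{ratio:.1f}x the area of a V100")  # ~56.7x
```

So the WSE really is roughly 56.7 times the area of a V100, which matches the article's headline.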
I read about Cerebras about a year ago, when their CEO seemed very confident, claiming their first chip would win the AI, machine learning and data center war and beat Nvidia. I dismissed it as utter nonsense at the time, but after reading this news I've changed my view: the Cerebras CEO is clearly serious about this chip. I'd guess AMD, Intel and Nvidia are in huge trouble; the big three can't compete with a 1.2 trillion transistor chip right now. The WSE's unimaginable 100 petabit fabric bandwidth and 9 petabyte memory bandwidth make NVLink and even future HBM3 and HBM4 look obsolete.
Hopefully Jensen Huang changed his view a while ago after hearing about the Cerebras WSE, realised Nvidia can build GPUs far beyond 850mm², and will develop a Volta successor: a 1.2 trillion transistor GPU for the data center that can do full-scene ray tracing while consuming a lot less than 15 kW, in 2020.