meanwhile in China...

SMIC's forecasts have been cut due to its ongoing struggles. SMIC is currently able to put out around 7,000 wafers per month; from these it can get 78 candidate 7nm GPU dies per wafer for Huawei, of which roughly 70% fail, giving 7nm GPU production of around 163k units per month, or about 1.9 million GPUs per year.
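As a rough sanity check on those numbers, here's a back-of-the-envelope sketch in Python (the inputs are simply the figures claimed above, not verified data):

# Back-of-the-envelope check of the SMIC figures quoted above.
# Inputs are the claimed numbers, not verified data.
wafers_per_month = 7_000      # claimed 7nm wafer output for Huawei
dies_per_wafer = 78           # candidate GPU dies per wafer
good_yield = 0.30             # ~70% of dies reported to fail

good_gpus_per_month = wafers_per_month * dies_per_wafer * good_yield
print(f"{good_gpus_per_month:,.0f} good GPUs per month")      # ~163,800
print(f"{good_gpus_per_month * 12:,.0f} good GPUs per year")  # ~1.97 million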

Whereas, based on JPR data, Nvidia is currently selling around 4 million GPUs... per month.

This poor volume and yield from the Chinese fabs, combined with the Chinese government's demand that local companies use Chinese GPUs, means demand will far exceed supply for so long that we'll probably never see a decent gaming GPU exported from China any time soon.

How does Morgan Stanley know the yield rates for Chinese fabs? Moreover, why would China willingly give up such information?

We had the same people saying YMTC had severe production issues, but YMTC NAND is popping up in a lot of SSDs.
 
Like most SMIC leaks, it comes from speaking to Huawei employees, so it's not first-hand, but it is from employees of SMIC's customers.
 
I assume it's unnamed employees all the time, like with YMTC being doomed, or China never being able to do 7nm or 5nm because of no EUV. Then when they do it, it moves to "but they can't make a lot", etc. But there are lots of SSDs available in the UK with YMTC NAND, and Huawei tablets sold in the UK are using SMIC 5nm/7nm.

If they were having all these production issues they wouldn't be exporting these products!
 
I don't get the geopolitical football fan attitude on this topic.
We're all getting shafted because advanced nodes are basically under a monopoly and our hobby is getting harder and harder to afford.
Chinese platforms might lack refinement, especially on the software side, but they are fairly reliable as far as consumer electronics go, and if by grit and political will they somehow manage to match Intel/Samsung/TSMC, it will result in more capacity, which should lower prices for the rest of us.

Some of you might be too young to remember, but life was definitely better when politicians in the West had a modicum of commie scare; if China could somehow achieve something similar with our current techno-feudal corporate overlords, we can only benefit, if only through lower prices.
 
While more fabs producing basically anything are very welcome, the current "monopoly" is mostly economics.

That is, even among those not under American sanctions - TSMC, Intel, and Samsung - all of whom do have access to ASML EUV equipment, only TSMC is able to keep up.

At the end of the day, silicon fabs are extremely expensive. Almost nation-state-budget expensive. And then some of the most important factors become volume, volume, and volume.

Intel's problems are almost laughable in a way, because throughout the 80s, 90s, and even the early 00s, what did Intel have?

The x86 CPU, which was actually mostly pretty awful compared to its CISC rivals like the Motorola 68k: no clean 32-bit address space, awkward registers, and extremely hard-to-manage 64KB segments. Truly awful to program. And the first RISC designs weren't much later, and many brought workstation reliability features.

Yet Intel became all-powerful thanks to the high volumes that IBM's decision to use the i8086 brought. All the other players - no matter how technologically superior - eventually folded. The low-volume, high-margin workstation and server vendors couldn't compete with Intel's high-volume, lower-margin model.

I put that down to volume, volume, volume. Intel, however, seems to have put it down to how great x86 was, and hence turned down any lower-margin business no matter how high its volume might become (Apple's iPhone chip; Intel Atom relegated - like Intel's new chipsets every generation - to fill older fabs).

As for SMIC and YMTC, I'm sure they'll continue to make progress. Not that throwing resources at problems necessarily works, but if the world's second-largest economy wants to be less reliant on tech controlled by the Americans, it will find a way. For EUV it's probably too late, so they are focusing on what comes next. Given how Huawei ended up dominating 5G tech and patents, I would not bet against them. In the meantime SMIC will have to rely on multi-patterning DUV, which is far slower, so they will need more fabs.
 
They did find a way to do EUV, but it basically relies on building particle accelerators, so I'd wager the volume on that is not exactly significant.
Still, this is not the 1990s, and it's not like the latest-gen stuff is THAT much faster.

Let's say Chinese chips can reach approximately RTX 2000-generation performance (12nm) in good volume. Using the TPU database and consumer cards as a reference, that means that in the worst case they can make cards roughly 3x slower than the cutting edge, which for a massively parallel problem like AI training just means throwing more cards (or data centres) at the problem, and that's before considering that you can trade GPU performance for memory in many tasks.
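To put a number on the "just add more cards" argument, a trivial illustrative sketch (all figures here are made up for the example, not real hardware specs):

# Rough illustration of "3x slower per card just means ~3x the cards".
# All numbers are made up for the example, not real hardware specs.
cutting_edge_tflops = 90.0                   # hypothetical per-card throughput
domestic_tflops = cutting_edge_tflops / 3.0  # the "~3x slower" case above
target_cluster_tflops = 900_000.0            # whatever total compute is needed

print(target_cluster_tflops / cutting_edge_tflops)  # 10000.0 cards
print(target_cluster_tflops / domestic_tflops)      # 30000.0 cards
# For an embarrassingly parallel workload like AI training, slower cards are a
# cost/power problem (roughly 3x the cards), not a hard capability ceiling.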
Energy is more of an issue, but they are working on a new dam three times the size of the largest currently in operation (which is also Chinese), so I doubt it's much of a problem, especially as they can get Russian gas at a steep discount as fast as pipelines can be built.

I know we focus on gaming, but on the AI side (which is what uses most GPU volume) China is currently ahead, imho. DeepSeek is the most famous example, but it's already old news; try Kimi with researcher mode and give it a moderately complex task. In my case, for marketing and sales use cases, it's a massive force multiplier, and my jaw hit the floor several times the first time I used it.
 
Isn't most AI training better done with smart memory rather than GPUs anyhow?

Just because Nvidia can only think of selling GPUs does not mean that general purpose GPUs are the best for this.

Obviously, because of US-led sanctions, China will be exploring this heavily, but most cloud providers must be looking at bypassing Nvidia too, as its margins are insane and AI training is probably not as moated as Nvidia would like. Yes, the mindset is GPU-first, but companies investing billions should be able to find people willing to think outside that box.

China also has by far the biggest installed base - and the most rapid growth - of renewables. While the mantra is still "growth above all else", they do seem to realise that they need to curb pollution, helped somewhat by most of the leadership having an engineering background, or at least far fewer lawyers and the like than in the West.
 
I'm not an expert on LLM training, but I believe both Western and Chinese players rely on GPGPUs for training; from my experiments in LM Studio, CPU is definitely not as fast...
In any case, the sooner anything lowers demand on TSMC, the better the likelihood of consumers benefitting from lower prices.
 

That's cute, lol talking about LLM parameters in the billions.

It's not too bad though, it could be worse: they're about 5-6 years behind Nvidia, assuming all their claims are accurate and their cards are stable and perform well (unlike their first two attempts).

Not sure about the CUDA claim, since that would be a breach of IP and would pretty much mean you'll never see this card outside China, or they'd get sued into oblivion. But it doesn't really matter - as I've mentioned before, you won't see any indigenous Chinese GPUs sold outside China for a very long time, at least 10 years.
 
The GPT-5 launch proved that we're firmly in diminishing-returns territory when it comes to increased model size giving improved performance.
Mixture of experts along with a routing model is where the cutting edge is right now, and if you're smart with your architecture, a lot of tasks are easily achievable with a 32b model or even smaller.
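For anyone wondering what "mixture of experts with a routing model" actually means, here's a minimal illustrative sketch in Python/numpy (the names and sizes are made up): a small router scores the experts for each token and only the top-k experts run, which is how huge total parameter counts can stay relatively cheap to execute per token.

import numpy as np

def moe_forward(x, gate_w, expert_ws, k=2):
    # x: (tokens, d) activations; gate_w: (d, n_experts) router weights;
    # expert_ws: list of (d, d) toy expert weight matrices.
    logits = x @ gate_w                        # router scores per token
    top = np.argsort(-logits, axis=1)[:, :k]   # pick the top-k experts per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        w = np.exp(sel - sel.max()); w /= w.sum()     # softmax over chosen experts only
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ expert_ws[e])  # only k of n_experts ever run
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 3
y = moe_forward(rng.normal(size=(tokens, d)),
                rng.normal(size=(d, n_experts)),
                [rng.normal(size=(d, d)) for _ in range(n_experts)])
print(y.shape)  # (3, 8): dense-layer-shaped output at a fraction of the compute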

IMHO the cutting edge for LLMs right now is in China. Try Kimi researcher mode and set it a moderately complex task (meaning something like market research rather than PhD work). The success rate won't be 100%, but you can kiss goodbye to many junior analyst roles; that thing is even able to teach itself how to solve a problem, and it actually calls Python to solve maths instead of trying to do it inside the LLM itself.
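On the "calls Python to solve maths" point, the pattern is roughly: the model emits a tool call instead of an answer, the host runs it, and the result is fed back into the context. A toy, hypothetical illustration (not any specific vendor's API):

import ast
import operator as op

# Tiny safe evaluator for arithmetic the "model" asks the host to run.
SAFE_OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
            ast.Div: op.truediv, ast.Pow: op.pow}

def eval_expr(node):
    if isinstance(node, ast.Expression):
        return eval_expr(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in SAFE_OPS:
        return SAFE_OPS[type(node.op)](eval_expr(node.left), eval_expr(node.right))
    raise ValueError("unsupported expression")

# Pretend the model, rather than guessing at the arithmetic, emitted this call:
model_tool_call = {"tool": "python", "expression": "12345 * 6789 + 42"}
result = eval_expr(ast.parse(model_tool_call["expression"], mode="eval"))
print(result)  # 83810247: the host does the maths and hands the number back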
 