
AMD on the road to recovery.

Well @Grim5 reckons that 7nm TSMC wafers might be closer to $15k:
https://www.overclockers.co.uk/forums/posts/34608712/
Which changes things a bit. Also, it seems that my linked Ethercalc is down, so here is an updated version:
https://ethercalc.net/php0rgyrm7ka
[Screenshot of the updated Ethercalc spreadsheet]
Guess an online spreadsheet you don't have to sign up to is too good to be true :(
Lots of guesswork about margins, selling prices, defects.

If those margins are about right they have made $270 million profit on the consoles up to now, probably a bit more at this point as those figures are from Q4 last year.

$225 million profit on Zen 3.

$15 million profit on Navi 21.

If they had used the 67,000 wafers they used on consoles for the 5800X they would have made $9.7 billion profit, not that they could have sold 43 million 5800Xs :D but it does go to show how much more profit there is in Zen 3 CPUs vs consoles and GPUs.
 
They cost $30, maybe $40 or $50, to get them sales ready, and they are not sold to board partners; a $450 5800X is probably sold to suppliers for $300 each minimum, so that's about $250 profit on each one.

This is how Intel makes billions: selling millions of CPUs and pocketing $250 on each one sold.
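As a rough sanity check on that, here is a minimal sketch of the per-chip maths, assuming the ~$15k wafer price mentioned above, the $30-$50 "sales ready" cost, the $300 supplier price, and a guessed count of good chiplets per wafer; none of these are AMD's real figures:

```python
# Rough per-chip margin sketch. All inputs are guesses taken from the
# thread, not AMD's real numbers.
wafer_cost = 15_000          # assumed TSMC 7nm wafer price ($)
good_dies_per_wafer = 670    # guessed good Zen 3 chiplets per wafer
packaging_cost = 50          # assumed cost to get a die "sales ready" ($)
asp_to_supplier = 300        # assumed price AMD gets per 5800X ($)

silicon_cost = wafer_cost / good_dies_per_wafer   # ~$22 per 8-core chiplet
unit_cost = silicon_cost + packaging_cost         # ~$72 all-in
profit_per_unit = asp_to_supplier - unit_cost     # ~$228 per 5800X

print(f"Silicon cost per chiplet: ${silicon_cost:.0f}")
print(f"Profit per 5800X at a ${asp_to_supplier} supplier price: ${profit_per_unit:.0f}")
```

Same ballpark as the ~$250 per chip figure above.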
 
Don't know how accurate this is, but ComputerBase has a report on TSMC revenue by customer:
[Chart: TSMC revenue by customer]
https://seekingalpha.com/article/44...rtake-taiwan-semi-despite-massive-capex-spend
via
https://www.computerbase.de/2021-03/tsmc-umsatz-apple-amd-huawei/
Huge growth in AMD's share of revenue, although what percentage is console is the question. The GF 14nm/12nm stuff would have dropped off a cliff once Zen 2 launched, despite the IO chip.
Apple is huge, though if they pay for risk production their wafer share vs revenue share will be different from most.
HiSilicon were huge until Trump killed them. Don't think China plc will forget that quickly.

Interesting. Although Apple account for more than twice as much of TSMC's revenue as AMD do, AMD are still the second largest customer, and that share will probably grow.
 
More ARM server coverage on AT:
https://www.anandtech.com/show/16640/arm-announces-neoverse-v1-n2-platforms-cpus-cmn700-mesh
Thought these graphs from Amazon belong here:
[Graphs: Amazon server deployments by CPU vendor over time]
No surprise that Intel are selling a lot less to Amazon, but going down from 90% in Q1 2019 to under 70% now is quite rapid.
For 2020, for AMD to have 16% is impressive, but of course half of Amazon's new deploys being their own hardware must worry both x86 vendors.
ARM claim a +40% ST uplift for the current N2 compared to the N1 in Amazon's Graviton2:
[Chart: ARM's claimed Neoverse N2 vs N1 single-thread uplift]
So between these two threads, it is no wonder that Intel's last financials had their server margins decreasing a fair bit. Guess no big customer pays list prices.


AWS, MS, Google... these are the whales of the industry, the sort of people Intel, and to a much lesser extent AMD, depend on for their revenue. What if they all start making their own in-house designs to replace x86? Especially for Intel, given they (as they always love to remind everyone) have 92% market share in this space. I don't think that's a good thing.
 
I guess it depends on who we are talking about:
Cloud providers: very much a good thing.
Intel: not a good thing.
AMD: probably not too much harm, as they can currently grow even with Intel losing market share, but longer term? Not in their interest for x86 to lose market share.
DIY'ers: longer term probably not a good thing as we get the cast-offs. Plus - by accident not purpose - DIY PC is still relatively open. ARM gives vendors far more scope to lock things down as they all fawn over Apple's margins.
Even if Nvidia buy ARM and alienate all the other vendors, now that Amazon and others have seen what can be done (I also expect Apple to eventually design a high-end chip for their dwindling 'Pro' market and use that chip for their internal servers), the major cloud vendors would just go down the RISC-V route.

Agreed.

Nvidia's ARM acquisition is looking increasingly unlikely; the UK government looks like it may put a block on it, and others are starting to get involved.

PS: AMD's financials are due later today.
 
AMD was a favourite for shorts. When AMD started climbing out of the poo and the stock price started to rocket, people thought it wouldn't last; not only did it last, AMD's share price went crazy and has not come back down again. When you think that a few years ago AMD were trading around $2, they are now trading around $80 to $90 and some are predicting $100 before the end of this year.

A lot of people lost a lot of money shorting AMD, there are some very angry people out there and all because they tried to short a stock whose industry they knew nothing about.

AMD's financials are due later today; predictions are over $3.1 billion for the quarter, which is up 70% YoY.
 
I should explain shorting.

You borrow the shares from a broker at the point where you think the price has peaked, sell those shares to someone else, wait for the value to plummet and buy them back at that point, then return the shares to the broker and keep the change.

This is gambling: you're gambling that after selling high with someone else's shares the price will fall, so you can keep the difference once you return the borrowed shares. The sting in the tail is that if the share price is higher when the time comes to return them, you have to pay the difference. If you borrow 100,000 $20 shares and sell at $20 a share, but by the time the shares are due to be returned to the broker they are worth $60, you have to buy them back at $60; now you've lost millions of $.
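To put that example in code, here is a toy short-sale P&L using the numbers from the post above (ignoring borrow fees, margin calls and taxes):

```python
# Toy short-sale P&L, ignoring borrow fees, margin calls and taxes.
def short_pnl(shares: int, sell_price: float, buyback_price: float) -> float:
    """Profit (positive) or loss (negative) on a closed short position."""
    return shares * (sell_price - buyback_price)

# Happy case: the price falls after you sell the borrowed shares.
print(short_pnl(100_000, 20, 5))    # +1,500,000

# The post's worst case: sold at $20, forced to buy back at $60.
print(short_pnl(100_000, 20, 60))   # -4,000,000
```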
 
I mean, how can they miss that the Ampere-based CPUs have large monolithic dies? AFAIK they don't appear to be MCMs, and the package is the biggest ever made for a server CPU.

AMD are using small chiplets, and have the I/O made on lagging nodes, which massively reduces the amount of risk. This is why Intel has so many problems, and we have seen that going to chiplets means you need to invest a lot in keeping I/O power down, which monolithic designs have less of an issue with. But then making bigger and bigger chips is a risk too.

Plus existing customers can probably use drop-in upgrades to existing infrastructure (AMD is famous for doing this - IIRC even the BD-based HPC CPUs could drop into the same platforms as the Phenom II-based ones). Intel might have problems now, but they have a ton of experience in packaging too, i.e. like AMD they are probably going to use different nodes for different parts on the same CPU (they are after all doing their own chiplets).

There is no indication that the Ampere CPUs, currently or in the immediate future, are going that way. So they are going to be huge dies on a cutting-edge process node. That is the issue with many of these ARM-based designs: they are very dependent on being on the best nodes.

Unlike the Wafer Scale Engine, which is designed to get around yields with huge redundancy, what are the yields on these kinds of CPUs? It's all fine and dandy showing the top models if they end up having yield or volume issues. I would say AMD have a better chance of making a top-tier Epyc, and are pricing stuff the way they do because they can.

Plus the issue is Ampere might even be paying more per 7nm or 5nm wafer than AMD. So it makes me wonder how much of this pricing is because they have lower margins and hope to gain share before prices go up. Is this level of pricing sustainable for them?

If there is an ARM-based chip which seems actually groundbreaking, it's the Fujitsu A64FX: a CPU designed for homogeneous scalability by having the functionality of a CPU and a GPU, and designed to scale. It's utterly ignored by the tech press.

I'll read that later :)
 
Data Centre is all about efficiency which I suspect investors are not always aware of. It's not just about the processing power, it's about how much space you need, how much cooling, how much electricity it uses, etc. That efficiency has a knock-on effect on the cost of all the infrastructure. AMD have hurt Intel in Data Centre not just by having powerful products in that space, but by lowering the total cost of ownership of a given amount of processing power compared to rivals.
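A minimal sketch of what "total cost of ownership per unit of performance" looks like; every number below is invented purely for illustration, it's the shape of the calculation that matters:

```python
# Toy TCO-per-performance comparison. Every number is invented purely
# for illustration; it's the shape of the calculation that matters.
def tco_per_perf(server_price, power_kw, perf_score,
                 years=4, pue=1.4, price_per_kwh=0.10):
    """Lifetime cost (hardware + electricity incl. cooling overhead via PUE)
    divided by a performance score."""
    energy_cost = power_kw * pue * 24 * 365 * years * price_per_kwh
    return (server_price + energy_cost) / perf_score

# Hypothetical efficient server vs a cheaper but hungrier one.
print(tco_per_perf(server_price=20_000, power_kw=0.8, perf_score=100))  # ~239
print(tco_per_perf(server_price=18_000, power_kw=1.2, perf_score=70))   # ~341
```

The cheaper-to-buy but less efficient box ends up costing more per unit of performance once electricity (and the cooling overhead folded into PUE) is counted, which is exactly the lever AMD have been pulling.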

If AMD was really smart they would be touting their "green credentials" in this climate-conscious environment; there are a lot of "extremists", or as I like to call them "useful idiots", willing to dedicate their lives to shaming these whales who use these CPUs into bringing their power requirements down. This sort of mob mentality works, look at all the other insane social justice crap that keeps gaining traction, twitter mobs do rule.
 
I mean, no one NOW actually believes the BLM founder is an actual Marxist who cares about anyone or anything but the four mansions she now owns, paid for by donations. But it's had a lot of useful idiots on their knees, including politicians, wow...
 

Yeah.

From another thread.

It looks like, for the next few years at least, Intel CPUs are still monolithic. It seems Intel are using big + little to make up the numbers, because if you can't match your competitor for raw core count, fudge it and hope no one thinks about it too much; in marketing terms that has been Intel's MO for several years.

AMD have a central foundation die that you literally plug cores into: want 16 cores? Plug another core cluster into it. Want 32 cores? Plug 4 core clusters into it. 64 cores? Well, plug 8 into it then... there are limits, but those limits are how many core clusters you can fit on the PCB, or how big you can make the PCB.

It's ingenious, and at this stage AMD have this technology so nailed down there are no drawbacks.
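As a toy illustration of that scaling, assuming 8 cores per CCD (the Zen 3 chiplet size); this is just the arithmetic, not AMD's actual packaging rules:

```python
# Toy illustration of chiplet scaling: core count grows with the number
# of 8-core CCDs hung off the central I/O die.
CORES_PER_CCD = 8  # Zen 3 chiplet

for ccds in (1, 2, 4, 8):
    print(f"{ccds} CCD(s) -> {ccds * CORES_PER_CCD} cores")
# 1 CCD -> 8 cores ... 8 CCDs -> 64 cores (a 64-core Epyc)
```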



Can you imagine how big a 64-core monolithic CPU would have to be, even on Intel's 10nm or TSMC's 7nm? About the size of Nvidia's A100, and those things are ridiculously huge. How many of those do you get out of a $15,000 wafer? 20? That's why they cost $12,000 a pop.

AMD can make those CPUs for a few hundred $ in wafer cost and sell them at $4,000; that's a lot of change they are getting for it. Intel couldn't even sell them for that, it's below cost.

Makes you wonder why Intel still haven't borrowed that AMD glue; maybe it's not that easy.

AMD have a critical tech advantage over Intel, and increasingly they are going to feel it.
 
Out of interest I ran a quick comparison of the A100 die and the Zen 3 CCD.

A100: 24 good dies.

Zen 3: 670 good dies.

$15,000 / 24 = $625 each A100 wafer cost

$15,000 / 670 = $22 each 8 core chiplet, x8 = $176 to make up a 64 core CPU.

Not as dramatic as I first thought, but...

Edit: the IO die is GloFo 14nm and costs peanuts.
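For anyone who wants to play with these numbers, here is a minimal sketch of the kind of calculation a die-yield calculator does, using the published die sizes (A100 ≈ 826 mm², Zen 3 CCD ≈ 81 mm²) and a guessed defect density. The defect density is very much an assumption, so it lands in the same ballpark as the figures above rather than matching them exactly:

```python
import math

# Rough dies-per-wafer and yield estimate using a Poisson yield model.
# The defect density is a guess; real foundry numbers are not public.
WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_MM2 = 0.1 / 100  # assumed 0.1 defects per cm^2

def good_dies_per_wafer(die_area_mm2: float) -> int:
    radius = WAFER_DIAMETER_MM / 2
    # Standard gross-die approximation: wafer area / die area, minus a
    # correction for partial dies lost around the wafer edge.
    gross = (math.pi * radius ** 2) / die_area_mm2 \
            - (math.pi * WAFER_DIAMETER_MM) / math.sqrt(2 * die_area_mm2)
    yield_rate = math.exp(-DEFECT_DENSITY_PER_MM2 * die_area_mm2)
    return int(gross * yield_rate)

for name, area_mm2 in (("A100 (~826 mm^2)", 826), ("Zen 3 CCD (~81 mm^2)", 81)):
    good = good_dies_per_wafer(area_mm2)
    print(f"{name}: ~{good} good dies, ~${15_000 / good:.0f} per die")
# With these assumptions this prints roughly 27 good A100s (~$550 each)
# and ~740 good CCDs (~$20 each).
```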

[Die yield calculator screenshots for the A100 die and the Zen 3 CCD]
 
Overall sales, worldwide, would see huge sales of the 11400 CPU from Intel, as it's such a good value CPU. AMD need something to counter this, else I suspect AMD will lose market share each week there's no answer.

More info on this excellent budget choice here > https://www.tomshardware.com/uk/news/intel-core-i5-11400-review

Would that big-river-in-South-America online shop count as a major worldwide store? The 11400 is never even in the top 20 best sellers, while the 5600X, 5800X and 5900X are now consistently in the top 5.

The 3600 still outsells the 11400 range by multiple factors; AMD don't have to do anything, they sell 8x more CPUs than Intel.

Do you know why the 3600 sells so well? It uses next to no power and generates next to no heat; you can use its tiny box cooler, stick it in a cigar-box case with a mini GTX 1060 and forget about it.

It's the perfect micro-box CPU that actually has some muscle.
 