AMD on the road to recovery.

The entire industry makes smaller chips first and brings in bigger chips as process maturity leads to fewer defects and the ability to make larger dies in a financially viable manner.
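
To put some very rough numbers on that, here is a back-of-the-envelope sketch using the classic Poisson yield approximation Y = exp(-D x A); the defect densities and die areas are made up for illustration, not any foundry's real figures:

```python
import math

def die_yield(area_mm2, defects_per_mm2):
    """Fraction of dies expected to be defect-free under a simple Poisson model."""
    return math.exp(-defects_per_mm2 * area_mm2)

# Defect density D falling as the process matures (illustrative values only)
for d in (0.005, 0.002, 0.0005):
    small = die_yield(100, d)
    large = die_yield(500, d)
    print(f"D={d}/mm^2: 100mm^2 die -> {small:.0%} yield, 500mm^2 die -> {large:.0%} yield")
```

At the immature end the big die is down in single-digit yields while the small one is already sellable, which is exactly why everyone leads with the small stuff and saves the monsters for later.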
You should look at the release schedule for Intel’s 22nm process where they led with quad cores for both desktop and mobile and followed up with dual cores later: https://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)#Desktop_processors

Then look at Nvidia at 16/14nm and see the order in which they released the 10 series cards:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_10_series

And what will the first AMD GPU on 7nm be?
A massive chip for AI/HPC.
 
I bet Intel will not be able to produce another competitive, secure CPU for the next 3 to 4 years, simply because they have to start from the ground up after having lied to the world for years.
Don’t forget that they have Jim Keller now. In three years he came up with both K12 and Zen and brings all that knowledge and experience to the drawing board.
 
Don’t forget that they have Jim Keller now. In three years he came up with both K12 and Zen and brings all that knowledge and experience to the drawing board.

Keller is one medium sized cog in a huge machine. He is a team leader, and obviously a good one, but he did not "come up with both K12 and Zen".

Michael Clark is equally (if not more) responsible for "coming up" with Zen, as are the several hundred engineers who did the grunt work...

Just saying.
 
You should look at the release schedule for Intel’s 22nm process where they led with quad cores for both desktop and mobile and followed up with dual cores later: https://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)#Desktop_processors

Then look at Nvidia at 16/14nm and see the order in which they released the 10 series cards:
https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_10_series

And what will the first AMD GPU on 7nm be?
A massive chip for AI/HPC.

Seriously, first of all you need to realise that AMD aren't first to 7nm and Nvidia weren't first to TSMC's 16nm, not by a mile. Both process nodes were based on the 20nm metal layers that had been used a couple of years before, so 16nm was a pretty mature process by the time dedicated GPUs were being made on it. The first chips made on 16/14/12/10/7nm were all <100mm^2 ARM-based chips. Then on top of that, the 1080 is not a giant die and neither is 7nm Vega a giant die.

Then you need to actually understand what Intel did with 22nm. First of all they launched a pretty small ~160mm^2 die, and there were plenty of salvaged dual core chips released at the same time. Intel only launched a dedicated dual core die as yields got higher. Why? Because with better yields there were fewer salvaged parts coming off the quad core die, so every die sold as a dual core was losing money when most could have been sold as quad cores, and a dedicated dual core die on a high-yield process has almost no failures, so there is no real need for salvaged products. If you start on a lower-yield process with a dual core die where maybe 30-50% only work as single cores, you have a rubbish product stack.

On 14nm and 10nm, starting with a ~160mm^2 die would simply have had too low yields: too many dual cores and not enough quad cores to fill demand. That is why, as we've already seen, the first chips on both of those nodes, which both had delays and major issues, were small dual cores. With 14nm they felt the need to drop to a dual core and did okay with yields; with 10nm they stayed with a dual core (anything less simply has no value any more) and the yields were so bad they further delayed the process by a minimum of 1.5 years.
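
To illustrate the salvage argument with some toy numbers (the core/uncore areas and defect densities below are hypothetical, not Intel's actual figures): on an immature process a quad core die throws off plenty of parts with one or two dead cores that can be sold as dual cores, while on a mature process almost everything works as a full quad, so a cheap dedicated dual core die starts to make sense.

```python
import math
from math import comb

CORE_AREA = 25.0    # mm^2 per core region (hypothetical)
UNCORE_AREA = 60.0  # mm^2 of shared logic that must always work (hypothetical)

def bin_split(defects_per_mm2):
    p_core = math.exp(-defects_per_mm2 * CORE_AREA)      # one core is defect-free
    p_uncore = math.exp(-defects_per_mm2 * UNCORE_AREA)  # shared logic is defect-free
    full_quad = p_uncore * p_core ** 4
    # exactly 2 or 3 good cores -> salvageable as a dual core part
    salvageable = sum(comb(4, k) * p_core**k * (1 - p_core)**(4 - k) for k in (2, 3))
    return full_quad, p_uncore * salvageable

for d in (0.01, 0.002):  # immature vs mature process (made-up defect densities)
    quad, dual = bin_split(d)
    print(f"D={d}/mm^2: {quad:.0%} sell as full quads, {dual:.0%} as salvaged dual cores")
```

On the toy immature process the salvaged dual cores outnumber the full quads; once the process matures the salvage bin shrinks to a trickle, which is when a dedicated small dual core die pays for itself.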

Increasingly complex processes are making production harder, and this is a trend across every company and every foundry. The inclusion of Nvidia and AMD is a bit ridiculous, because neither produces GPUs on their own nodes and the GPUs they make aren't anywhere near the first chips on the processes they use... but let's tear that point apart as well.

On 110nm Nvidia had a 300mm^2 die; their first 90nm part was 200mm^2. The 8800 GTX was their second 90nm part, 6 or so months later, at 484mm^2. The first 65nm parts were 324mm^2, then around a year later they did maybe their first 'huge' die at 576mm^2 with the GTX 280; it had yield issues, though not terrible.

Their first 40nm chip was not Fermi but a 310 in November '09, and it was a massive.... 57mm^2. The second chip was a 340 (and 3 salvaged parts) in February 2010 at 144mm^2, and then with Fermi in March they finally launched a 529mm^2 chip. Of course it was meant to be first, and it was 'launched' twice before without success due to horrendous yields and power issues. When it did launch it was hot, late, expensive and barely faster than a competing chip that was easily produced, had already been out for 6 months, and was only 334mm^2. See the difference? Way smaller die, great yields, easy to produce; Nvidia's massive die, complete disaster.
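
For a feel of how lopsided that matchup was, here is a rough dies-per-wafer comparison between a 529mm^2 die and a 334mm^2 die on a 300mm wafer, using a crude gross-die estimate and made-up defect densities (purely illustrative, not the real 40nm numbers):

```python
import math

WAFER_DIAMETER_MM = 300.0

def gross_dies(area_mm2):
    """Very crude estimate of square die candidates that fit on the wafer."""
    side = math.sqrt(area_mm2)
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / (math.sqrt(2) * side)
    return int(wafer_area / area_mm2 - edge_loss)

def good_dies(area_mm2, defects_per_mm2):
    # Gross dies scaled by a simple Poisson yield term
    return gross_dies(area_mm2) * math.exp(-defects_per_mm2 * area_mm2)

for d in (0.004, 0.001):  # made-up defect densities
    print(f"D={d}/mm^2: ~{good_dies(529, d):.0f} good 529mm^2 dies vs "
          f"~{good_dies(334, d):.0f} good 334mm^2 dies per wafer")
```

In this toy model the smaller die gets you roughly two to four times as many sellable chips per wafer at the same defect density, before clocks, power or salvage bins even enter the picture.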

After the large-die disaster, the first two dies at 28nm were 118 and 78mm^2 and their main part was a 294mm^2 GTX 680. A full year later Nvidia launched the 561mm^2 die, and it launched without all shaders/TMUs working. It was only with a new stepping 9 months later that fully working dies were launched.

So as processes got more complex, Nvidia couldn't make their big dies early on a process, could they, despite your insistence. The 1080 follows that: it's closer to half the size of their biggest 28nm part, and the new Titan is ~20% smaller than the old Titan and again was launched in a more consumer-friendly version long after the 1080 was. This is on top of the fact that the 1080 and Polaris came after just about the longest gap between the release of a node and GPUs being made on it that we've ever seen.


So yeah, the industry trend of being increasingly unable to make bigger chips early on a new node as node complexity increases is easily provable; it's easily seen in things that have actually happened. And let's reiterate: Intel already tried and failed to launch a dual core chip first on 10nm.


Intel have already clearly stated that they are releasing 10nm server chips well beyond the time scale they have given for their first 10nm chips, so your whole 70 vs 700 mm^2 argument is completely bogus as nobody has suggested they are doing that. :rolleyes:^10

Great point.... except that is a point I never made. Go read my first post: it wasn't 'arguing' with or rebutting anyone, I was giving my impression of when Intel would first be able to make a die large enough to compete with a 64-core EPYC, nothing more or less. You replied calling it wild speculation. You can't just decide to pretend I was arguing some point with someone, and that this makes my opinion bogus, just because you want to; that isn't how, well, anything works.

I posted my opinion, you attacked that opinion and I gave logical reasons to back up my opinion.
 
The Ryzen CPUs are selling incredibly well, but so they should be; it actually feels like we're giving them away with 8/16 CPUs from £169.99. It seems too cheap, but if AMD can do it and still make money, then great; I just hope they are making money while being so aggressive.
Could be a stroke of brilliance in the long term when combined with the long-term support of the AM4 platform. They're moving their old stock, and hopefully the people who buy in now will upgrade at least once on the same platform in a couple of years.
 
Then you need to actually understand what Intel did with 22nm…..
Yep, they released the high-end quad-core mainstream parts first, as I already showed; end of story. That is the discussion, and you are trying to move the goalposts now because I proved you wrong.

So as processes got more complex, Nvidia couldn't make their big dies early on a process, could they, despite your insistence. The 1080 follows that: it's closer to half the size of their biggest 28nm part, and the new Titan is ~20% smaller than the old Titan and again was launched in a more consumer-friendly version long after the 1080 was.
You are trying to do the same with Nvidia by talking about big dies, which was never the discussion. To get you back on track, what I actually said was that it's unknown whether Intel will release the smaller laptop chips or the medium-sized mainstream desktop parts first, with no mention of the larger HEDT/server chips, which Intel have already indicated will come later.
The 1080 is a medium-sized chip by GPU standards, and of course it's much smaller than the 28nm parts as the jump to 16nm is unusually large; that's how these things work, you know!
They released it before the 1060/1050/1030, which all use increasingly smaller die sizes. Even the Titan X, which has a larger die again, was released within weeks of the 1060.
Now jog on and sober up and stop bothering me with your drunken nonsense.
 
 
Keller is one medium sized cog in a huge machine. He is a team leader, and obviously a good one, but he did not "come up with both K12 and Zen".

Michael Clark is equally (if not more) responsible for "coming up" with Zen, as are the several hundred engineers who did the grunt work...

Just saying.
That is true, I didn't mean to imply he did it single-handedly. But he was lead architect on both projects and is generally regarded as a very important person in the world of CPUs. Intel clearly sought him out to work some of his magic on their problems.
 
We can all just hope that AMD will do even better with the next generation...
First-gen Ryzen made Intel finally add more cores to mainstream processors....
AMD and Intel finally competing with each other is a good thing for us.

I would still like to see a good GPU from AMD in the mid/high-end.
 
That is true, I didn't mean to imply he did it single-handedly. But he was lead architect on both projects and is generally regarded as a very important person in the world of CPUs. Intel clearly sought him out to work some of his magic on their problems.

Fair enough dude. Sorry I came across way too strongly there.

I design ASICs and electronics for a living, and it is a bit annoying when a manager takes the reward for your hard work and ingenuity ;).
 
Since my first PC (a 486 DX2 66MHz) in 1995 when I was 16, right up until now, I've always been an Intel man. Never had AMD.

I have had AMD-powered servers at work, but for personal computing, always Intel. That changed last month, when I got a 1950X TR that was on sale.

In terms of sheer compute power and value for money, it will be a long time before I buy another Intel CPU. AMD have embarrassed Intel.

I won't get rid of my 7700K; it's a good CPU for what it is, overpriced in hindsight, but hey, lesson learned.
 
Don’t forget that they have Jim Keller now. In three years he came up with both K12 and Zen and brings all that knowledge and experience to the drawing board.

3 years is a lifetime in the CPU industry. And that is best case scenario for 'Keller's Intel CPU'. In that time, Intel's server market share could go from 95% to 50%. This would mean AMD stealing billions in profits from Intel from that market sector.
 
Populus surveys had, for the first time in as long as I can recall, a processor survey today, which seemed to be getting directly at the Intel versus AMD question, asking about gens of x and y, whether you would recommend them, etc.

Clearly Intel now want some basic market research to see if they are losing the popular thinking war.
 
That is true, I didn't mean to imply he did it single-handedly. But he was lead architect on both projects and is generally regarded as a very important person in the world of CPUs. Intel clearly sought him out to work some of his magic on their problems.

Apparently they need him to help them glue bits together. :D

Populus surveys had, for the first time in as long as I can recall, a processor survey today, which seemed to be getting directly at the Intel versus AMD question, asking about gens of x and y, whether you would recommend them, etc.

Clearly Intel now want some basic market research to see if they are losing the popular thinking war.

Have you a link please?
 
3 years is a lifetime in the CPU industry. And that is best case scenario for 'Keller's Intel CPU'. In that time, Intel's server market share could go from 95% to 50%. This would mean AMD stealing billions in profits from Intel from that market sector.
A 50/50 market share would be great for everyone. My point was that it took him 3 years to architect two new microarchitectures for AMD. Perhaps it won't take so long to build on that success, as much of the research work has already been done. He just needs to apply the glue technology to Intel's silicon.
 
A 50/50 market share would be great for everyone. My point was that it took him 3 years to architect two new microarchitectures for AMD. Perhaps it won't take so long to build on that success, as much of the research work has already been done. He just needs to apply the glue technology to Intel's silicon.

Well I think they got the fella in the Intel canteen serving sausage and chips after his turn in making their great rivals hugely competitive again. Hopefully a long stint with Debs the veteran kitchen assistant will dampen his senses enough so that when he's released back in the wild by Intel he can't do them harm ever again. He's a one-man army is Keller.
 