
*** AMD "Zen 4" thread (inc AM5/APU discussion) ***

It's possibly the difference between "...yes, it'll drive a desktop OK..." and "it's OK for light/medium gaming at 1080p".

Work machines vs. moderate home use, is my thinking.

I don't see how this is correct.

Serious work machines require serious discrete graphics cards, while most offices use no more than something like a Ryzen 5 2500U.
Moderate home use is also served by 7-watt, or at most 15-watt, configurations.
 
AMD's original plan with Fusion was to use the graphics shaders instead of the classic floating-point units in order to accelerate apps and make it a true heterogeneous computing system.
Today, I have no idea why they would waste such an important performance advantage.
 
After Warhol, AMD looks like it will be in a spot of bother, methinks.

I also think that AMD will have serious problems with Ryzen 7000 and will start losing its competitive advantage by shifting die area to functions that are completely irrelevant.

First, AMD has already got APUs like Cezanne which could serve the OEMs just fine, if there were true and real interest from these OEMs.
Second, the OEMs don't care about AMD's products; they will buy Intel no matter what.

These are just fake excuses to drive AMD in the wrong direction.

OEMs won't agree with you, and if AMD wants a decent market share, the OEMs are its masters.

This is just wrong.

AMD cannot convince the OEMs to use superb 16-core CPUs instead of slow, hot and expensive 8-core Intel CPUs.
 
And then another year to actually be able to have your orders filled.

This is untrue, isn't it? The current CPU supply is way higher than what the market needs; hence all the retailers offer as many Ryzens as you could wish.

It remains a mystery why AMD is so unserious about, and completely fails at, GPU supply, though.

For the wattage, is it possible that the AM5 and Zen 4 implementation rumours are combining somehow?
Designing high-end AM5 to handle 170W, or even making a 24-core monster etc. and expecting certain boards to handle that, doesn't necessarily mean that 8-core or even 12-core parts will be anywhere close to that.

170 watts is perhaps Threadripper-class, or it's fake information.
 
Looks like the CCD remains 8 cores, but the max EPYC SP5 version will use 12 of them instead of 8, so 96 cores, up from 64 before.
The CCD is meant to be around 72mm² at 5nm, so almost the same size, but obviously 5nm is denser. (If those nm figures meant anything, 7² is 49 and 5² is 25, so nearly twice as dense. IF those nm figures mean anything.)
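That back-of-envelope scaling can be sketched in a few lines. Note the 72mm² figure, the core counts, and treating "nm" names as literal feature sizes are all rumour/simplification, not confirmed specs:

```python
# Rough sanity check of the rumoured figures above. The "nm" node names
# are treated here as if they were literal feature sizes, which (as
# noted) they really aren't on modern processes.

def ideal_area_scale(old_nm, new_nm):
    """Ideal density gain if features shrank linearly in both axes."""
    return (old_nm / new_nm) ** 2

print(ideal_area_scale(7, 5))  # 49/25, i.e. almost 2x denser

# Rumoured EPYC SP5 core count: 12 chiplets x 8 cores each
print(12 * 8)                  # 96 cores, up from 8 x 8 = 64
```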

...
With up to DDR5-5200.


AMD SP5 Platform, EPYC Genoa CPUs & Zen 4 Core Detailed In Gigabyte's Leaked Documents (wccftech.com)

And some new Threadrippers, maybe for another thread.
AMD Ryzen Threadripper 5000 CPUs Leaked: Up To 64 Zen 3 Cores, 280W TDP, TRX40 & WRX80 SKUs Detailed (wccftech.com)


It could have been better with more cores, but it looks like AMD will focus on margins and stagnation.
 
The Infinity Fabric interconnect operates at the same frequency as the DDR memory clock, so obviously it will get quicker.
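A minimal sketch of that relation, assuming the fabric clock keeps running 1:1 with the memory clock as on earlier Zen parts (whether Zen 4 keeps a strict 1:1 ratio is an assumption here):

```python
# DDR transfers data twice per clock, so a DDR5-5200 module runs a
# 2600 MHz memory clock (MCLK). If the fabric clock (FCLK) tracks it
# 1:1, faster DDR directly means a faster Infinity Fabric.

def fclk_mhz(ddr_rating_mts, ratio=1.0):
    mclk = ddr_rating_mts / 2   # the "5200" rating is MT/s, not MHz
    return ratio * mclk

print(fclk_mhz(3200))  # DDR4-3200 at 1:1 -> 1600.0 MHz fabric clock
print(fclk_mhz(5200))  # DDR5-5200 at 1:1 -> 2600.0 MHz fabric clock
```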

The thing is, it will take a while until AMD gets that memory subsystem working as intended; historically, AMD has always had problems implementing new memory standards correctly.
 
The wait is killing me already. Yes, I am mad and am going to try to wait for Zen 4 before changing from my i5-8600K.

Err, get the Ryzen 9 5900X today and upgrade in the next 5 years.
You don't need to wait till the end of 2022 to get the first iteration of the new AM5 platform.
 
This is a significant change, AMD have never ever had a CPU with an iGPU by design.

Llano, Cezanne, and Renoir say hello.

I think AMD will begin to lose sales because Alder Lake will be more competitive, while AMD itself will start losing its performance advantage by retargeting die area and wasting transistors on functions which are not needed.

The APUs are always slower.
 
The APUs so far have been monolithic dies, which leads to things like less cache; yes, they're often slower, although the margins are pretty small. The 5600X vs. 5600G is pretty close, really. The main point, though, is that Zen 4 will never be monolithic.

More transistors, sure, but almost certainly in the I/O die, which has no real effect on CPU speed and is due a die shrink anyway, so it's not really going to change much.

The sky's not falling down just because they add a few GPU cores. Overreaction much?

Instead of making one large IO die, they could have offered chiplets with more cores.
The integrated graphics part WON'T be small.

Actually, if you look at the die shots, you will see that it takes 40-50% of the die area of the monolithic chip!
 
Not sure it makes much (business) sense to have 10/12/16-core chiplets; they already cover 6 to 64 cores with an 8-core chiplet, and rumours are they'll bump the maximum to 96 cores just by adding extra chiplets. Increasing the size of the chiplet makes it unfeasible to sell the lower-end chips: if you've got a 12-core chiplet, you're not going to want to fuse off half of it to sell a 6-core chip... And whilst, as a consumer, it'd be great if the low end bumped up to 8 or 10 cores, there's no 'need' for it, and it just increases costs / decreases the number of CPUs they can make...
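A toy yield model makes those economics concrete. This is a hedged sketch using a simple Poisson defect model; the defect density and the 110mm² "bigger chiplet" area are purely illustrative assumptions (only the ~72mm² CCD figure comes from the rumours above):

```python
import math

# Poisson yield model: fraction of defect-free dies = e^(-area * D),
# where D is the defect density. Bigger dies lose disproportionately
# more candidates to defects, which favours small chiplets.

def poisson_yield(area_mm2, defects_per_mm2):
    return math.exp(-area_mm2 * defects_per_mm2)

D = 0.001  # assumed defects per mm^2, purely illustrative

small = poisson_yield(72, D)   # rumoured ~72 mm^2 8-core CCD
big = poisson_yield(110, D)    # hypothetical larger 12-core chiplet

print(f"small chiplet yield: {small:.3f}")  # higher
print(f"big chiplet yield:   {big:.3f}")    # lower
```

On top of raw yield, a small chiplet also lets partially defective dies be sold as 6-core parts instead of scrapping (or heavily fusing off) a bigger die, which is the point made above.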

Hell, you're even comparing them to Intel and competitiveness with Alder Lake, and Intel isn't due to have more than 8 'P-cores' for several generations.

As for die shots, that's not what this one shows: https://wccftech.com/amd-ryzen-5000g-cezanne-apu-first-high-res-die-shots-10-7-billion-transistors/ ; that's not 40% of the die area...

Use the Intel die shots; the one above doesn't provide much information on the parts because there are grey zones with unknown functions.

AMD has to offer chiplets with more cores; that is how die shrinks have always worked. Look at graphics cards: each generation gets double the shader count.

Going from N7 to N5 is exactly this. Make the chiplet with more cores and give the low end more cores; we need to move up from the primitive 4-core configurations.
 
Why? For 90% of the PC market (i.e. excluding gamers and "professionals" with specific software/hardware needs), even 4 cores is enough (e.g. for web browsing / Facebook / YouTube, etc.).

Better to further improve IPC and other aspects of the chip iteratively, and then gradually increase core counts over generations. It wasn't long ago that "make moar corez" was meme-worthy.

We have ever-growing CPU power demands from 4K and soon 8K. These 4-core chips are already struggling, and if we limit ourselves to only 4 cores, we will never see significant technological leaps to better user experiences.

Casual gamers are also a large part of the market.
That is the market segment that mostly plays outdated titles...
But they won't stay playing only those forever.


This is my load running a 4K YouTube video:

[screenshot of CPU load removed]

I don't think it is normal.
 
So a 4-year-old, 'middle of the road' mobile processor running Zen 2 on 14nm is comparable to Zen 4 how?

Most people don't even have that.
And you are claiming that progress is not needed, or that slow progress with no more cores on offer is best.
 
There is a large enough group of people who DON'T need the crappy integrated graphics. Period.

If AMD were any good, it could have just shrunk the already 8-core Cezanne to the N5 process and offered it to the masses, while leaving the original chiplet offerings to the high-end market segment that doesn't need the crappy integrated graphics!

Offer more cores to the same group of people who obviously need more cores.
 