
New Zen3D Parts Rumour

Cries in Zen 2 that doesn't even hit its boost clock of 4.2

Zen 4 could be the definitive chip if it really hits 5.5GHz, especially with onboard graphics. I'd take a £200 7600 and £100 mobo with upgradability over a £300 5600X3D. But rumours of a 6GHz Intel abound. Especially since I'm leaving AM4 and DDR4 anyway, I could go with either team next gen.

It's OK, mine is the same :( I literally can't keep my 3960X cool enough with my AIO to allow it to hit 4.5; instead it runs a 4.2 all-core base and boosts up to something like 4.4 on a few cores.
 
Yes, quite likely, at least per Moore's Law is Dead on YouTube: https://www.youtube.com/watch?v=lmKAHol7yV0&t=744s
I enjoy MLID's content and, contrary to some opinions around these parts, he's usually fairly (if not very) accurate on a lot of his leaks. But in this instance I just don't see AMD releasing X3D SKUs based on 6 core chiplets, if what Dg834man and I discussed about TSMC slapping the cache on after the chiplet has been verified holds true.

Unless there is a massive overstock of 6 core chiplets beyond what the 5600, 5600X, 5900X and mid-range EPYC parts can absorb, AMD will divert chiplets away from cache slapping.

The only way I see a 5600X3D happening is if AMD want to take the 6 core gaming crown away from 13th gen Intel.
 
But in this instance I just don't see AMD releasing X3D SKUs based on 6 core chiplets, if what Dg834man and I discussed about TSMC slapping the cache on after the chiplet has been verified holds true

You have missed a critical point: they could have completed packages with some faulty cache attached to them. After all, there will still be defects in the cache silicon as well.

If they wanted to do a 6 core part with an extra 48MB of L3 cache that might be possible, and it would be putting otherwise wasted silicon to use. They could offer it at £249-269 and make the 12600F/KF redundant, at least to gamers.
 
Cries in Zen 2 that doesn't even hit its boost clock of 4.2

Zen 4 could be the definitive chip if it really hits 5.5GHz, especially with onboard graphics. I'd take a £200 7600 and £100 mobo with upgradability over a £300 5600X3D. But rumours of a 6GHz Intel abound. Especially since I'm leaving AM4 and DDR4 anyway, I could go with either team next gen.


I do not give a crap about onboard graphics. I would rather have them ditch that useless crap for a bit better overclocking, or even push up the release date and the clock speed and performance potential. People who buy these chips use a discrete video card anyway, and all you need is a cheap low end spare PCIe card to troubleshoot.
 
I do not give a crap about onboard graphics. I would rather have them ditch that useless crap for a bit better overclocking, or even push up the release date and the clock speed and performance potential. People who buy these chips use a discrete video card anyway, and all you need is a cheap low end spare PCIe card to troubleshoot.
I dunno, it would've come in handy during the GPU drought, between selling my old card and buying a new one. And it's on the IO die, so the CPU logic isn't affected by it.
 
A 5600X3D is definitely tempting, especially since I've been mulling over replacing a 3600 with a 5600X. HWUnboxed showed that the difference wasn't much. But at £300, it's no longer a value proposition. Honestly, I might just skip a couple of gens, especially if Zen 4 is just a die-shrunk Zen 3 like rumours say.

People should definitely look at more than one reviewer. Steve Walton appears to have his own agenda: he complained bitterly about the price of the 5600X vs the 3600, which by itself is fine.

But two things:
One, he completely ignored the 10700K at $80 more expensive, slower in games and with similar productivity performance, as if AMD are just competing with themselves. And Intel? Well, best not upset Intel.
Two, his benchmark to make that point was completely contrived: he made sure to use a lower end GPU to completely bottleneck the 5600X down to 3600 performance.

Below are his slide and Steve Burke's slide, same game, same benchmark. Notice on Steve Burke's slide the Ryzen 3600 scores 220 FPS, while on Steve Walton's slide the 3600 scored 212, within 5%, effectively the same. Steve Walton also has the 5600X at 228 FPS, about 10% better. Now look at Steve Burke's slide: 320 FPS. To save you doing the maths, the 5600X was 45% faster. It's at least as fast as a 10900K FFS.
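For anyone who wants to check the maths themselves, here's a quick sketch in Python using only the FPS figures quoted above (read off the slides, so treat them as approximate):

```python
# Quick sanity check of the percentages quoted above.
gn_3600, hwu_3600 = 220, 212      # Ryzen 3600: Gamers Nexus vs Hardware Unboxed
hwu_5600x, gn_5600x = 228, 320    # Ryzen 5600X: Hardware Unboxed vs Gamers Nexus

print(f"3600 gap between reviews:  {(gn_3600 - hwu_3600) / hwu_3600:.1%}")   # ~3.8%, within 5%
print(f"5600X gain on HWU's slide: {(hwu_5600x - hwu_3600) / hwu_3600:.1%}") # ~7.5%, roughly 10%
print(f"5600X gain on GN's slide:  {(gn_5600x - gn_3600) / gn_3600:.1%}")    # ~45%
```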

Don't trust anything Steve Walton puts on his channel.

[Attached image: bKycQcc.png]

[Attached image: YbkeDDn.png]
 
I dunno, it would've come in handy during the GPU drought, between selling my old card and buying a new one. And it's on the IO die, so the CPU logic isn't affected by it.

I think it is fine without an integrated GPU even during a shortage. Integrated GPUs are crap and worse than low tier video cards. Even during the GPU shortage you could still buy a basic low end card at least as powerful, if not more powerful, than integrated GPUs.

And it being on the IO die is potentially a nuisance, when they could have used more of the die to spread the CPU logic out for better thermals and cooling, and potentially better performance elsewhere. I hate iGPUs as such.

Better to put iGPUs on a separate CPU line.
 
A simple iGPU is useful when diagnosing hardware issues; it also means you can still use your PC if the GPU is away for RMA or something.
 
People should definitely look at more than one reviewer. Steve Walton appears to have his own agenda: he complained bitterly about the price of the 5600X vs the 3600, which by itself is fine.

But two things:
One, he completely ignored the 10700K at $80 more expensive, slower in games and with similar productivity performance, as if AMD are just competing with themselves. And Intel? Well, best not upset Intel.
Two, his benchmark to make that point was completely contrived: he made sure to use a lower end GPU to completely bottleneck the 5600X down to 3600 performance.

Below are his slide and Steve Burke's slide, same game, same benchmark. Notice on Steve Burke's slide the Ryzen 3600 scores 220 FPS, while on Steve Walton's slide the 3600 scored 212, within 5%, effectively the same. Steve Walton also has the 5600X at 228 FPS, about 10% better. Now look at Steve Burke's slide: 320 FPS. To save you doing the maths, the 5600X was 45% faster. It's at least as fast as a 10900K FFS.

Don't trust anything Steve Walton puts on his channel.

[Attached image: bKycQcc.png]

[Attached image: YbkeDDn.png]
One is ultra, the other is high. Also, what GPU and memory were used by HWUnboxed?
 
One is ultra, the other is high. Also, what GPU and memory were used by HWUnboxed?
I was thinking that when I first saw it, but it doesn't explain the discrepancy between the 5600X scores. The 3600 ones are close, but the 5600X is 100 FPS ahead from GN. Don't know who to trust anymore.
 
I think it is fine without an integrated GPU even during a shortage
Business OEMs would strongly disagree

Integrated GPUs are crap and worse than low tier video cards
Literally not the point

And it being on the IO die is potentially a nuisance, when they could have used more of the die to spread the CPU logic out for better thermals and cooling, and potentially better performance elsewhere.
Yeah, that's not how it works.

I hate iGPUs as such.
Clearly.

Let's clarify this, shall we? The GPU portion of the standard Ryzen 7000 CPU is a nothing burger. It's a handful of tiny CUs dropped onto the IO die. They won't get hot, they won't draw much power, they won't affect the CPU's CPU performance in any way, because all that is done by the chiplets.

You may be happy to get any old low-end dGPU to drop in your system if needed, but that approach is significantly more expensive than Intel for business OEMs. Finally, AMD will have a CPU line that can be dropped into those business crap boxes that sell by the bazillions, and that Intel have had a stranglehold on for a thousand years.

Spit vitriol all you want about integrated graphics, frankly you're just portraying yourself as a bit of an oddball, but for those of us who want the computational power of Zen and just need something basic to drive a couple of monitors, and for the millions and millions of basic office machines, Ryzen 7000 getting basic video out is great and a long damn time coming.
 
I was thinking that when I first saw it, but it doesn't explain the discrepancy between the 5600X scores. The 3600 ones are close, but the 5600X is 100 FPS ahead from GN. Don't know who to trust anymore.
I'm sure it's accurate; it's just that the 3600 is at its FPS limit, while the 5600X can push more when settings are lowered or a stronger card is used.

It's good to show comparisons like this, else people who are using low to mid range cards, or those gaming on high end cards above 1080p, assume that spending £300+ on a new CPU is going to give them 100 more FPS.
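To picture why both sets of numbers can be right at the same time, here's a tiny toy model in Python; the frame rate caps below are made up purely for illustration:

```python
# Toy model: the frame rate you see is roughly capped by whichever of the CPU
# or GPU runs out of headroom first. All numbers here are hypothetical.
def effective_fps(cpu_cap: float, gpu_cap: float) -> float:
    return min(cpu_cap, gpu_cap)

weak_gpu, strong_gpu = 215, 400   # hypothetical GPU-limited frame rates
r5_3600, r5_5600x = 220, 320      # hypothetical CPU-limited frame rates

# With a weaker card both CPUs sit at the GPU cap, so they look identical;
# with a stronger card the 5600X's extra CPU headroom actually shows up.
print(effective_fps(r5_3600, weak_gpu), effective_fps(r5_5600x, weak_gpu))      # 215 215
print(effective_fps(r5_3600, strong_gpu), effective_fps(r5_5600x, strong_gpu))  # 220 320
```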
 
I was thinking that when I first saw it, but it doesn't explain the discrepancy between the 5600X scores. The 3600 ones are close, but the 5600X is 100 FPS ahead from GN. Don't know who to trust anymore.

Honestly I think Steve Burke (Gamers Nexus) is far more trustworthy.

Steve Walton is someone who calls himself an "Influencer" and he's taking that term quite literally.
I don't think he deliberately ignored the 10700K, at $379 at the time; it just didn't enter his head that Intel was overcharging so much that it gave AMD a huge gap to slot themselves into and still, in reality, look like the far better choice. And sure enough, AMD sold the 5600X in huge numbers at $299, far, far more than Intel sold of not just anything, but everything: at times more 5600Xs were sold than Intel sold CPUs.

In his mind he's acting for consumers, but like most activist types he's nothing like as clever as he thinks he is. He does quite a lot of this sort of crap because he thinks he has influence; he doesn't, and it's all quite transparent if you look, actually look. Many people do and call him out on his crap, to which he responds "oh, AMD fanboys".

The problem is he thinks AMD's job is to be the budget brand, to be the Skoda to Intel's Audi, making slightly naff CPUs that are fine because they are cheap, like in the good ol' days of the Ryzen 1600. That's not who AMD want to be and it makes him angry, so much so that he harassed AMD's CEO on her own Twitter feed over something he had deliberately taken out of context so that he could have a go at her.
 
Business OEMs would strongly disagree


Literally not the point


Yeah, that's not how it works.


Clearly.

Let's clarify this, shall we? The GPU portion of the standard Ryzen 7000 CPU is a nothing burger. It's a handful of tiny CUs dropped onto the IO die. They won't get hot, they won't draw much power, they won't affect the CPU's CPU performance in any way, because all that is done by the chiplets.

You may be happy to get any old low-end dGPU to drop in your system if needed, but that approach is significantly more expensive than Intel for business OEMs. Finally, AMD will have a CPU line that can be dropped into those business crap boxes that sell by the bazillions, and that Intel have had a stranglehold on for a thousand years.

Spit vitriol all you want about integrated graphics, frankly you're just portraying yourself as a bit of an oddball, but for those of us who want the computational power of Zen and just need something basic to drive a couple of monitors, and for the millions and millions of basic office machines, Ryzen 7000 getting basic video out is great and a long damn time coming.


You are probably right. Will this help AMD get into more OEMs than before? Hopefully, and it lessens Intel's grip on all the OEMs, since AMD is now the better option: they have options for more than 8 strong cores in their CPUs, their cores only have around a 12% IPC deficit at the same clock speed compared to Intel's P cores, and they consume much less energy, which is important for OEM PCs. And with the Ryzen 7000 series compared to Raptor Lake, that will not change.

And you say it is a nothing burger and will not get hot nor affect CPU performance in any way? Is that less so than Intel? Because even with the iGPU shut off on Intel, I see the CPU still gets much hotter under load just from it being there, per HWiNFO64. Is the one that will be in the Ryzen 7000 series going to be less of an issue than Intel's? There is a reason why Intel KF series CPUs seem to overclock slightly better, per Silicon Lottery.

Edit: just researched, and the Intel UHD 770 that comes in 12th gen CPUs is even ~2% weaker than a 2007-2008 9800 GTX. Intel Iris Xe appears to be their stronger integrated graphics, and even that is bested pretty badly by a GTX 1050 and even by a GTX 580, though the Iris Xe does beat a GTX 280 and lower. There was a massive leap in high end cards from NVIDIA's GTX 280 generation to the 400 series, and another big leap to the 500 series; the highest end 500 series card still smashes any iGPU, and the 500 series will be 12 years old this Fall, having been released in Fall 2010.
 
And you say it is a nothing burger and will not get hot nor affect CPU performance in any way? Is that less so than Intel? Because even with the iGPU shut off on Intel, I see the CPU still gets much hotter under load just from it being there, per HWiNFO64.
I was referring specifically to the graphics CUs in the IO die. They won't (shouldn't) add any meaningful heat or power draw to the CPU package, because they're not designed, nor intended, to be proper, performant GPU cores. That's what the G series APUs are for.
 
All this hate for iGPUs, plus a total obsession with clock speeds... I frankly couldn't care less if a CPU hits 8GHz (like the original P4 hype); what is important is how it performs. The obvious example is Apple's ARM chips vs all the other ARM SoCs: low(ish) clocks but far better performance. And a "free" iGPU is not only useful for diagnosing things, it is also often a better place to plug a secondary (non-gaming) monitor into, to stop the main dGPU from running its memory clocks too high, etc.

As for binning, are the wiki's entries for Milan-X accurate?
[Attached image: 1xwmo2s.png]

Since we suspect that the yields on the main CCD are really high (certainly above 90%), all but the top Milan-X parts are using plenty of good silicon where AMD have disabled cores. For the 7373X they are using only 2 out of the 8 cores per CCD, which seems wasteful. No Milan-X parts are using 6 cores per CCX yet.
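For the back-of-the-envelope version (the core counts below are the commonly quoted Milan-X figures and the 8-CCD layout is assumed, so take them as assumptions rather than gospel):

```python
# Rough cores-per-CCD arithmetic for the Milan-X line-up.
# SKU core counts are the commonly quoted figures; all parts assumed to carry 8 CCDs.
milan_x = {"7773X": 64, "7573X": 32, "7473X": 24, "7373X": 16}
ccds = 8

for sku, cores in milan_x.items():
    per_ccd = cores / ccds
    print(f"{sku}: {cores} cores / {ccds} CCDs = {per_ccd:g} active cores per CCD")
# 7373X: 16 / 8 = 2 active cores per CCD, i.e. 6 of 8 cores fused off on each chiplet.
```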

Not exactly sure what this means for potential further consumer 3D parts though, and we don't really know what stacking or the cache dies do for overall yields.
 