
Poll: Ryzen 7950X3D, 7900X3D, 7800X3D

Will you be purchasing the 7800X3D on the 6th?


  • Total voters: 191 (poll closed)

Ok, good video.

7700X: 4800MT/s vs 6000MT/s +20% performance.
13900K: 4800MT/s vs 6000MT/s +9% performance.

Fair enough. 9% vs 20% scaling at the same memory speed isn't anything like extreme, I don't think. There is a difference, sure, but I still think it's been made into a bigger issue than it actually is.
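Those scaling figures are just relative uplifts from the memory bump. As a sketch, with made-up FPS numbers (the video's raw figures aren't quoted here), the percentages fall out like this:

```python
def uplift(slow_fps: float, fast_fps: float) -> float:
    """Relative performance gain from faster memory, in percent."""
    return (fast_fps / slow_fps - 1) * 100

# Hypothetical averages, chosen only to reproduce the quoted percentages.
ryzen_4800, ryzen_6000 = 100.0, 120.0   # 7700X at 4800MT/s vs 6000MT/s
intel_4800, intel_6000 = 120.0, 130.8   # 13900K at 4800MT/s vs 6000MT/s

print(f"7700X scaling:  {uplift(ryzen_4800, ryzen_6000):.0f}%")   # 20%
print(f"13900K scaling: {uplift(intel_4800, intel_6000):.0f}%")   # 9%
```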
 
This very much explains some of the really bad 7700X results, for example that German site with Miles Morales, where the 7700X scores massively worse than Intel: it is using 5200MT/s memory, probably CL40. The AMD would still lose in that test, but probably nowhere near as much when using proper memory.
 

I would have to see them again; from what I remember of the slides that have been posted here, those sites claim the 7950X to be 40 to 50% down vs the 13900K, compared to 10% down on other sites.

What Steve's video shows is nothing like that extreme.
 
Yes, I also saw a YouTube video of the 5800X3D vs the 13700K in Miles Morales, and while the 5800X3D was worse, it was nowhere near as bad as that German review. Hopefully the 7800X3D can pull those results up to the point where they are not a problem, and also beat the 13700K in other results where the 7700X is already as good.
 
Just for reference, with both CPUs at 4800MT/s CL40 the 13900K is 21% faster in Spider-Man, not 40/50%.
At 6000MT/s CL30 the 13900K is 9% faster, a 12 percentage point difference in scaling.

Overall in those 7 games, again at 4800MT/s, the 13900K is 17% faster; at 6000MT/s CL30 it is 5% faster.
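Those overall numbers hang together with the earlier 20% vs 9% scaling figures. A quick sanity check (my arithmetic, not Steve's; averaging over games introduces some rounding, so landing near the quoted 5% is the point):

```python
# Combine the 4800MT/s lead with each side's memory scaling.
lead_4800 = 0.17      # 13900K lead over 7700X at 4800MT/s CL40
intel_gain = 0.09     # 13900K uplift from 4800 -> 6000MT/s CL30
amd_gain = 0.20       # 7700X uplift from 4800 -> 6000MT/s CL30

# Implied lead at 6000MT/s: both sides scaled up, then re-compared.
lead_6000 = (1 + lead_4800) * (1 + intel_gain) / (1 + amd_gain) - 1
print(f"Implied 13900K lead at 6000MT/s: {lead_6000:.1%}")  # ~6.3%
```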
 
Out of curiosity I've had a look to see how "incredibly bad" Bulldozer / Vishera actually was; I don't remember it being quite as bad as the picture these two German sites seem to be going for.

Turns out it was not as bad as that. Good ol' days of one game per page... there are several more games on several more pages...

Back then we used to say "Don't buy AMD, just buy Intel" even when Intel were 2X more expensive. How much do Intel want those days back?
 

So there is variance in performance between games, with Intel favouring Cyberpunk and AMD favouring Horizon Zero Dawn, and also significant variance due to memory. No wonder there is a lot of conflicting information!
 
Absolutely, it would be nice if things were that simple. When the X3D chips come out, just watch out for the endless cries of "But you can overclock the 13900K and its memory to over 9000 Mega Ma' Hurz!!!!" :D
 
Don't know if this is correct, but because of the extra L3 cache the X3D will have, it should mitigate the performance variations when using slower RAM.

So I suspect the 20% variations HUB showed on the X Ryzen parts will be much smaller on the X3D parts.
 
Not sure about that; I reckon the X3D will still be obviously better with 6000 CL30 compared to something like 5200 CL40.

I would be comparing AMD at 6000 CL30 to Intel at 7200, which gives a good idea of what they are like with optimal memory.

These reviews that are using 5200 CL40 are completely stupid, tbh. Anyone who is getting a good CPU like the 7700X or the X3D is going to get 6000 CL30-CL32, although some people still seem to think "it doesn't make any difference".
 

From what I have seen on Intel, anything past 7200 is diminishing returns; it could be argued that anything past about 6400 is not a big gain either. If I went for Intel I would get 7200, but I am hoping the X3D will be better than Intel, because I would rather get AMD due to the better AM5 platform and potential Zen 5 / 6 on the same socket, whereas Intel is a dead-end socket.
 
This is exactly what we are hearing about AMD CPUs though, because they are getting manhandled by Intel. Have you paid attention to the last 3 pages? :D
 
Why in this Ryzen 7950X3D / 7900X3D / 7800X3D thread have we got to be talking about the 13900K when we have no comparison figures! :confused: You're just as bad as Bencher; both of you should probably be thread banned!
 
From a reviewing perspective I agree, but it probably doesn't make a difference for those of us on lesser GPUs still driving higher fidelity, since we'll be accepting that 60 fps is about the best we'll see. It is easy to forget that these benchmarks are built as edge cases so that only the CPU/RAM combo is the limiting factor; most real setups will be GPU limited. Sure, if you're rocking the latest and greatest every generation then it makes little sense to skimp on RAM, especially considering it is small fry compared to the cost of a £1000+ GPU, but balance of performance is a thing for those building or upgrading on value.
 
I'd really like to stick with Intel as my Sandy Bridge has been such a champ over the years, and I actually had a problem with an AMD Athlon back in the day. But PDX games really like this V-Cache stuff on the 5800X3D, so I'm quite tempted by the 7950X3D.
 
7900X3D AOTS bench leak. Take with a pinch of salt.

The CPU scored an average of 9000 in the Crazy 1080p preset, an 87.5% gaming performance uplift compared to the non-3D V-Cache version of the Ryzen 9 7900X.

How much faster is the 5800X3D vs the 5800X in this?
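Taking the leak at face value, the quoted uplift pins down what the plain 7900X would have scored; a quick check, assuming the 87.5% figure is computed the usual way (relative to the non-3D part):

```python
x3d_score = 9000    # leaked 7900X3D average, Crazy 1080p preset
uplift = 0.875      # claimed gain over the plain 7900X

# Implied baseline: the score the non-3D 7900X must have posted.
implied_7900x = x3d_score / (1 + uplift)
print(f"Implied 7900X score: {implied_7900x:.0f}")  # 4800
```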
 
I'm thinking there are going to be some even more wild and odd numbers in benchmark leaks than usual until official reviews arrive, as there would have to be a Windows update to ensure the right CCD(s) and cores get used in games, due to the hybrid nature of the 7900X3D and 7950X3D.

The 7800X3D should not have this issue though.
 
OK, I remembered this video. +90% in Ashes is not necessarily fake or unusual; the cache on these things benefits (nerd) games that have huge amounts of small FP ops, to quote Windle, "massively".
For example, in Factorio the 5800X3D completed 350 actions, his best Alder Lake system did 245, and the non-3D Zen 3 chip a little less than that...
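Those Factorio numbers are easy to put in the same percentage terms as the rest of the thread (units as quoted in the video, whatever "actions" maps to):

```python
x3d_result = 350    # 5800X3D result from the video
adl_result = 245    # best Alder Lake system in the same test

# Relative speedup of the X3D over Alder Lake, in percent.
speedup = (x3d_result / adl_result - 1) * 100
print(f"5800X3D vs Alder Lake: +{speedup:.0f}%")  # +43%
```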

 