The age of highly threaded games has arrived

I'd seen part of that earlier and from my understanding Intel are limiting this on all boards below Z490. So if you have a Z490 you'll still be able to run memory at much higher speeds, though it is a bit of a bummer for those with lower-spec boards who have faster memory.

The 2nd chart in the first post has nothing to do with this and is either just shoddy benchmarking or wilful deceit.

Thanks, I did wonder.
 
I don't pretend to understand CPU architecture to anywhere near the same level as some people here, but I watched an LTT video (I know some don't like him, but he explains things in a way I understand) which explained that Intel are intentionally limiting RAM speeds at a BIOS level - I wonder if this is the reason why the benchmarks are the way they are? I find it fascinating and can't help but wonder whether they have to limit memory speeds in some way in order to get the CPU clock speeds they achieve. Isn't the memory controller on the CPU? Could that fact be limiting the RAM speeds possible?
Yes, the memory controller is integrated into the CPU, and stressing it with higher memory clocks produces more heat, which also affects the surrounding chip.
It works the other way too, with heat produced by the CPU cores affecting the memory controller.

Hence why Ryzen memory overclocks should also be tested with a CPU load.
Though in the case of Zen2 Ryzen it's as much about the Infinity Fabric producing heat that stresses the memory controller as it is about the CPU cores on their separate dies.
Samsung B-die in particular is very temperature sensitive; after reaching a certain temperature (around 54-55°C?) it just doesn't stay stable no matter what.
That's also why GPU load from a game can shoot down memory stability.
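To make the "test memory overclocks with a CPU load" idea concrete, here is a minimal Python sketch that runs busy-loop workers on most cores while a simple write/read-back pattern check hammers a large buffer. It is only an illustration of the principle under stated assumptions (buffer size, pattern, worker count are made up for the example); dedicated memory testers are far more thorough and much faster.

```python
# Toy illustration: load the CPU cores while verifying a large in-memory pattern,
# so heat from the cores/IF is present during the memory check. Not a substitute
# for a real stability tester; sizes and pass counts are arbitrary examples.
import multiprocessing as mp
import os
import random
from array import array

def cpu_burner(stop_event):
    """Keep one core busy with pointless maths until told to stop."""
    x = 1.0001
    while not stop_event.is_set():
        x = x * 1.0000001 % 1e6

def memory_check(size_mb=256, passes=2):
    """Fill a large buffer with a seeded pseudo-random pattern and verify it."""
    n = size_mb * 1024 * 1024 // 8          # number of 64-bit words
    seed = random.randrange(2**32)
    rng = random.Random(seed)
    buf = array("Q", (rng.getrandbits(64) for _ in range(n)))
    errors = 0
    for _ in range(passes):
        rng = random.Random(seed)            # regenerate the same sequence
        errors += sum(1 for v in buf if v != rng.getrandbits(64))
    return errors

if __name__ == "__main__":
    stop = mp.Event()
    burners = [mp.Process(target=cpu_burner, args=(stop,))
               for _ in range(max((os.cpu_count() or 2) - 1, 1))]
    for p in burners:
        p.start()
    try:
        print("memory errors:", memory_check())   # any non-zero count is a fail
    finally:
        stop.set()
        for p in burners:
            p.join()
```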
 
I agree, most games benefit from high clock speed and single-thread performance. Still.
That's because Intel stagnated for years on "four cores is high end" and the current antique consoles had wimpy cores.
Hence, instead of looking for new things to do with extra cores, console game developers were battling with the "how the heck can this code be made to run on these crap cores?" problem.

With the new consoles bringing a strong 8-core/16-thread CPU as the baseline, game developers have a lot more incentive to think about ways to utilise lots of cores.
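As a purely illustrative aside on what "utilising lots of cores" can look like, here is a minimal Python sketch that spreads per-frame work (an invented set of "AI agents") across a process pool. The agent model, counts and chunking are assumptions for the example, not anything from the post or a real engine, which would use a much finer-grained job system with shared memory.

```python
# Sketch: update many simple "agents" in parallel by splitting them into one
# chunk per worker. Everything here (agent tuple layout, counts) is invented.
from concurrent.futures import ProcessPoolExecutor
import os

def update_agent(agent):
    """Advance one agent: (x, y, vx, vy) -> position after one tick."""
    x, y, vx, vy = agent
    return (x + vx, y + vy, vx, vy)

def update_chunk(chunk):
    return [update_agent(a) for a in chunk]

def chunked(seq, n_chunks):
    size = (len(seq) + n_chunks - 1) // n_chunks
    return [seq[i:i + size] for i in range(0, len(seq), size)]

if __name__ == "__main__":
    agents = [(float(i), 0.0, 1.0, 0.5) for i in range(100_000)]
    workers = os.cpu_count() or 4
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(update_chunk, chunked(agents, workers))
        agents = [a for chunk in results for a in chunk]
    print(len(agents), agents[0])
```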
 
I'd seen part of that earlier and from my understanding Intel are limiting this on all boards below Z490. So if you have a Z490 you'll still be able to run memory at much higher speeds, though it is a bit of a bummer for those with lower-spec boards who have faster memory.

The 2nd chart in the first post has nothing to do with this and is either just shoddy benchmarking or wilful deceit.

I showed benchmarks with the RAM set to 2666MHz on both platforms, and Zen2 tends to lose more from slower RAM than Intel CPUs do. The Core i9 10900K is barely faster than a Ryzen 9 3900X (the 2666MHz RAM probably isn't helping here) and the Ryzen 7 CPUs are ahead of the Core i7.

The Decima engine was until now a console exclusive, i.e., it's made primarily to work on AMD CPUs. So I expect it's better optimised for them, especially since the PS5 uses Zen2 cores.

https://youtu.be/FP4A7jG_6Bw?t=51

The Ryzen 3 3300X is a bit faster than a Core i7 7700K (3200MHz DDR4). The Core i7 7700K is based on Skylake cores, like the current generation of Intel CPUs.

Also, if you look at the OP, i.e., charts 1 and 3, at a similar thread count (24T) the AMD CPU is faster.

Edit!!

Another CPU test, but from Computerbase.de:

https://i.imgur.com/CykWguX.png

The Ryzen 5 3600 is close to a Core i9 9900K in this test, which is similar to what GameGPU also sees, as their Ryzen 5 3600X matches their Core i9 9900K.
 
AMD's current CPUs in games have been shown, and proven, to be the same as an 8700 non-K. AMD actually showed this themselves at the launch of the new AMD CPUs currently out.

As for how many cores you need in games: across the board, not just in cherry-picked games, it's six. 6 cores only, 8 cores next. That's why the new consoles will have 8. Any more than that aren't needed for probably 3-5 years. Some games may benefit, but often that's because those cherry-picked games do cater for more cores, or the reality is they're just badly coded.

This is why these kinds of cherry-picked debates over two games out of a million mean nothing, unless it's the one game you actually play all the time, in which case it is a benefit.

The reality is that if you look across loads of gaming benchmarks, more than 8 cores often yields less performance, sometimes less than 6, because most games aren't made for that many cores.

So you go for the sweet spot of what you actually need for gaming at the time you're getting your PC, or allow for how long you're keeping it. So right now, and for the next 3 years or so... an 8-core CPU. Anything more for gaming only is a waste.
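To put rough numbers on that diminishing-returns argument, here is a toy Amdahl's-law calculation. The 70% parallel fraction and the small per-core synchronisation overhead are illustrative assumptions, not measurements from any game; they just show how gains flatten and can even reverse past 6-8 cores when the code isn't written for more.

```python
# Toy model: Amdahl's law plus a small per-core coordination overhead.
# Both parameters are made-up illustrations, not measured game data.
def speedup(cores, parallel_fraction=0.7, per_core_overhead=0.01):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores
                  + per_core_overhead * (cores - 1))

for cores in (4, 6, 8, 12, 16):
    print(f"{cores:2d} cores -> {speedup(cores):.2f}x")
# Prints roughly 1.98x, 2.14x, 2.19x, 2.14x, 2.02x: a peak around 8 cores.
```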
 
Which is why I laugh at people recommending it for gaming over a 3600 just because it runs games already out with similar performance; it's going to fall off rapidly as games adopt heavier thread use, and become a bottleneck much faster.

I'd rather spend less and then upgrade faster.

If all I need now is a 3300 then I would get that. I personally run a 3600x.

My plan is to move to a 4600 or thereabouts when I think it's worthwhile doing so.
 
[Attached image: 50128184693_e11b6a878a_b.jpg]


Did anybody else notice this? Oh, for some level-playing-field benchmarks. Intel pulls this crap all too often by tipping the scales in their favour - actually they full-on rig the scales - so I shouldn't be too surprised to see this (from the Ryzen folks).

I've done a fair bit of memory overclocking, but you have to be more than golden to get 3800 C14 at 1T on Ryzen. Getting that 1T C14 is not just binning luck, it is some top overclocking. The 4.5GHz all-core clock on a 3900X is almost as good.

There is some major effort gone into that 3900X overclock, so why not do the same for the Intel CPUs? Or also show the 3900X at stock with the same memory speed, or maybe standard 3200MHz. Any memory that can do 3800MHz C14 1T on Ryzen can easily do 4000MHz+ C15 on a Z490/Z390, and probably nearer to 4200MHz C15. That alone can account for the 4% difference. Then also drop an all-core clock on the Intel CPUs and boom, there goes the 2nd graph showing the 3900X as faster.
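For what it's worth, a quick back-of-the-envelope first-word latency comparison (CAS latency divided by the memory clock) for the speeds mentioned above. This is only one component of effective memory latency, so treat it as a rough sketch rather than a benchmark.

```python
# First-word CAS latency in nanoseconds: CL / (data rate / 2), times 1000.
def cas_ns(data_rate_mt_s, cl):
    clock_mhz = data_rate_mt_s / 2
    return cl / clock_mhz * 1000

for rate, cl in ((3800, 14), (4000, 15), (4200, 15)):
    print(f"DDR4-{rate} C{cl}: {cas_ns(rate, cl):.2f} ns")
# -> 7.37 ns, 7.50 ns, 7.14 ns: similar latency, with more bandwidth at 4000+.
```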

I'm fairly confident it's only a matter of time before Ryzen CPUs are faster in gaming than Intel across the board, but this is not that, not even close.

Damn you Martin. I was waiting to see if one of the resident cheerleaders would spot the obvious issues (many!) in the 2nd graph, but you came and ruined it.

There was a guy on here who had his 3800X pretty tuned, with hand-tuned B-die and FCLK at 1900MHz. It wasn't even close to my 9900K.

I can pretty much guarantee that 4.5 all-core is well into voltage-degradation territory. I'd love for that reviewer to run a stability test like large AVX2 with that setup.

Zen 3 is AMD's best chance to pull out a notable lead. Until then, don't degrade your chips for scores.
 
It's not that hand-tuned - the X570 Tomahawk provides a set of pre-defined memory options for "testing", one of which is 3733 14-14-14-34. All I did was up the voltage slightly and change it to 3800 with FCLK set to 1900 - and it just worked.

It's also worth saying that the 3800X and the 32GB of RAM (2 lots of https://www.overclockers.co.uk/patr...dual-channel-kit-pvs416g440c9k-my-103-pa.html) probably cost less than your 9900K by itself. Value-for-money wise, Ryzen is the better choice.
 
I've had my chip since launch day, so any premium I paid let me have the best-performing 8C/16T CPU for 2 years running (and it still is). For me, that's completely worth it. The 3800X came out last year and still falls behind. The 2700X, which was the competition at the time, was junk.
 
I got my 1800X 18 months before that. The 9900K is barely 18 months old. Looks like you bought at the worst possible time.
 
Irrespective of the value of the R5 3600, I rather suspect that once the new consoles are out it won't age much better than the Intel 4C/8T parts have aged over the last few years.

8C/16T really should be the minimum even for gaming. And for people who like to do other things while gaming, 10 or 12 cores won't be OTT any more.

A well-threaded graphics engine is very good, but I hope the Zen2 consoles mean a lot more attention is paid to AI in games as well. I can just imagine the next open-world Bethesda CRPG using a more modern engine, but leaving the AI code on one or two central threads which don't scale.
 
4 cores only started to struggle around 2 years ago.

I gamed fine on a 7600K until then, and on a 2500K before that.

To say 6 cores is already dead is ignorant.

My 3600X hits 25% CPU usage whilst playing CS:GO.

Other, more demanding games, then sure, but it still doesn't break much of a sweat.

Same goes for RAM: 16GB is currently overkill too. I regularly use only 5GB, and the highest I've ever seen is 11-12GB usage. That's 25-50% of it sitting there doing nothing most of the time.

A 6-core/12-thread, 16GB RAM gaming rig will last 10 years easy. There will be some badly programmed games that will struggle, but the vast majority will run fine.
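For anyone who wants to sanity-check those sorts of figures on their own machine, here is a minimal sketch that logs overall CPU percentage and RAM in use while a game runs in the foreground. It assumes the third-party psutil package is installed (pip install psutil); the sample count and interval are arbitrary.

```python
# Log CPU % (averaged across all cores) and RAM usage every 2 seconds.
import psutil

for _ in range(30):                        # ~1 minute of samples
    cpu = psutil.cpu_percent(interval=2)   # blocks for 2 s, then reports
    mem = psutil.virtual_memory()
    print(f"CPU {cpu:5.1f}%   RAM {mem.used / 2**30:5.1f} GiB "
          f"of {mem.total / 2**30:5.1f} GiB ({mem.percent:.0f}%)")
```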
 
I did say "once the new consoles are out". Twice the memory, and what in terms of CPU? Jaguar vs Zen is over 10x in multi-core and 5x in single-core. So that's why I say 6C/12T isn't going to last that long.

Also, I've been playing heavily modded TES games (Wabbajack finally makes running 100s of mods without crashing easy) and 16GB is not enough. My first play with Living Skyrim was painful until I realised that my only virtual memory (pagefile) was on the HDD. Moving it to the NVMe drive sorted it, but I've been considering getting another 8GB of DDR3 to make my old i5 last a bit longer.
 
6C/12T will last many years still, as mainstream games continue to be GPU bottlenecked, not CPU bottlenecked.

I wouldn't buy a CPU with less than 8C/16T today, but any existing 6-core Intel CPU (aside from the X58 Xeons - so 8700K or better) will be perfect for many, many years.
 
They're certainly not the full fat 3700X desktop parts, that's for sure...either in terms of clock speeds or Cache amounts.
Clock speeds are very high compared to previous consoles.
And the Xbox's CPU matches the non-boost clocks of desktop Ryzens.
(fixed clocks are needed so game developers can best optimise their code)

As for cache, desktop Matisse needs lots of it to compensate for the memory-latency penalty of the chiplet design with its separate IO die.
Renoir is closer to what the consoles have, and as a monolithic chip with an integrated memory controller (that's why AMD calls it a unified memory controller in Matisse) it has lower memory latency and isn't as reliant on cache hit rate.
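To show the shape of that cache-versus-latency trade-off, here is a simplified average-memory-access-time calculation. Every number in it (hit rates, L3 latency, DRAM latency) is an illustrative assumption rather than a measured figure for Matisse or Renoir, and it ignores the L1/L2 levels entirely.

```python
# Simplified AMAT: hit_rate * L3 latency + miss_rate * DRAM latency.
# All inputs below are made-up illustrations, not measured silicon numbers.
def amat_ns(l3_hit_rate, l3_latency_ns, dram_latency_ns):
    return l3_hit_rate * l3_latency_ns + (1 - l3_hit_rate) * dram_latency_ns

matisse_like = amat_ns(l3_hit_rate=0.90, l3_latency_ns=10, dram_latency_ns=75)
renoir_like = amat_ns(l3_hit_rate=0.80, l3_latency_ns=10, dram_latency_ns=65)
print(f"Big L3, slower DRAM path:  {matisse_like:.1f} ns")
print(f"Small L3, faster DRAM path: {renoir_like:.1f} ns")
# A high enough hit rate can hide a slower path to DRAM, which is the point
# being made about Matisse's large L3 versus Renoir's lower memory latency.
```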
 
They're certainly not the full fat 3700X desktop parts, that's for sure...either in terms of clock speeds or Cache amounts.

If I had to take a bet right now and stick with it, I would say the CPU in the consoles is not a 3700X; they are basically downclocked 4800H parts, but with Navi 2 graphics instead of Vega. Which means it has 12MB of L3 cache.
 