
The age of highly threaded games has arrived

Irrespective of the value of the R5 3600, I rather suspect that once the new consoles are out it won't age much better than Intel's 4C/8T parts have aged over the last few years.

8C/16T really should be the minimum even for gaming. And for people who like to do other things while gaming, 10 or 12 cores won't be OTT any more.

A well-threaded graphics engine is very good, but I hope the Zen 2 consoles mean a lot more attention is paid to AI in games as well. I could just imagine the next open-world Bethesda CRPG using a more modern engine but leaving the AI code on one or two central threads which don't scale.

I like how some are deliberately ignoring the results I posted in posts 37 and 45. The Ryzen 3 3300X is faster than a Core i7 7700K, and the Ryzen 5 3600/3600X are close to a Core i9 9900K.

Damn you Martin. I was waiting to see if one of the resident cheerleaders would spot the obvious issues (many!) in the second graph, but you came along and ruined it.

There was a guy on here who had his 3800X pretty well tuned, with hand-tuned B-die and the FCLK at 1900MHz. It wasn't even close to my 9900K.

I can pretty much guarantee that 4.5GHz all-core is well into voltage degradation territory. I'd love for that reviewer to run a stability test like a heavy AVX2 load with that setup.

Zen 3 is AMD's best chance to pull out a notable lead. Until then, don't degrade your chips for scores.

You do realise that he just quoted one result and ignored the others which have popped up online.

In this game, a Ryzen 5 3600/3600X is almost the same as a Core i9 9900K. A Ryzen 3 3300X is faster than a Core i7 7700K:
https://www.youtube.com/watch?v=FP4A7jG_6Bw&feature=youtu.be&t=51

The Ryzen 5 3600/3600X looks impressive even against the higher-end Ryzen CPUs too.

Just to remind you:
[Benchmark chart CykWguX.png: 3200MHz DDR4]

[Benchmark chart awd2y0H.png: 2666MHz DDR4]
 
You must realise that you posted the German and Russian charts AFTER my post, so it is impossible for me to have ignored them: I didn't see them until you posted them.
I just haven't had time to comment on them yet, but will do so now.
You stated that in the German review the memory speed was 3200MHz, but having read through it, the only place I can find it referring to 3200MHz is where it lists the GPU benchmark specs, which were done with the 3900XT:
All benchmarks were run on an AMD Ryzen 9 3900XT, which was operated at its standard settings. The Asus ROG Strix B550-E Gaming (BIOS 0802) with the B550 chipset was used as the mainboard, so graphics cards can be run with PCIe 4.0. The CPU was cooled by a Noctua NH-D15S with a centrally installed 140mm fan. 32GB of memory (4 × 8GB, single-rank, DDR4-3200, 14-14-14-32-1T) was available to the processor. A freshly installed Windows 10 2004 with all updates was installed on an NVMe M.2 SSD with PCIe 3.0, as were AMD's current chipset drivers.

When it comes to the CPU benchmarks, I can see no mention of what the specs were for any of the other CPUs, so it is just conjecture and assumption that it was 3200MHz for all the others. It is also worth noting that they concluded:
Unfortunately, further CPU tests could not be carried out due to the activation limit (copy protection) of the game. However, the tests show that in practice Death Stranding apparently cannot do much with more than 4 CPU cores with SMT activated. The results are the same for both the AMD and Intel CPUs with different numbers of cores. 6 cores bring a little boost, but that's about it.

As I understand it, they seem to be saying there isn't much point in having more than 4 cores/8 threads for Death Stranding, which would totally contradict the title of this thread.

Which brings me to the other point: before I put any credence in any review, I will wait until it is in a language I fully understand, as even with the help of Google Translate the proper meaning can be lost.

As to the Russian review, it just seems very poorly done. They didn't even use an X570 for the 3900X (using an X470), and running the memory at 2666MHz, especially for Ryzen, is just amateurish and inexperienced. I wouldn't use their review to draw any conclusions, whether it favoured Intel or Ryzen, as it is just so poorly done.

I will wait until some 'proper' in-depth reviews arise in a language I can fully comprehend, with full test methodology and specifications. They may well show Ryzen in an even better light, but until then I will point out how absurd charts like the second chart in the opening post are. I'm sure we can agree on that.
 

Sorry, but both results confirm each other, and so does the YouTube result. In both reviews, a Ryzen 5 3600/3600X is close to a Core i9 9900K. In all the reviews, Zen 2 seems to have very good minimums in the game. The Core i7 7700K is worse than a Ryzen 3 3300X. The Core i9 10900K is basically equal to a Ryzen 9 3900X with the same speed RAM (and using a worse motherboard), and when the Ryzen 9 3900X has some slight tweaks it's slightly ahead.

You tried to say one result was not valid due to AMD having a RAM advantage. So I pointed out a review where AMD is at a RAM disadvantage. I did this on purpose, as it puts AMD in a much worse situation. The Ryzen 9 3900X is barely behind a Core i9 10900K, and suddenly you can't accept that either.

So despite you making a big deal of the RAM advantage, it apparently doesn't hugely influence the result. Now you are trying more of the RAM-speed argument with the other test too, and both will be run at the same speed, as they did with other tests in the past. So which is it: does RAM speed make a big difference for Ryzen in this game or not?

The fact is the reviews are showing Ryzen doing relatively well in the game. It has nothing to do with all this RAM-speed stuff you are pushing here.

Decima is a console-exclusive engine; this is the first time it's appeared in a PC game. Horizon Zero Dawn and Killzone are among the other games. Horizon Forbidden West is the latest game being made on the engine, for the PS5, which uses Zen 2 cores. Do you honestly think that with less than a year to go for that game they are using a different fork of the engine? It's going to be one of the biggest titles for the PS5 in its first year.
 
The fact is all the reviews are proper, and they are all showing Ryzen doing well in this game. That is 4 pieces of information you are now ignoring.
I am not ignoring anything. I'm sure Ryzen will do very well, but the reviews are not proper; even the reviewers say so themselves in the German review: "Unfortunately, further CPU tests could not be carried out due to the activation limit".

You must also know that running a Ryzen CPU at 2666MHz is definitely NOT a proper review. I really do hope Ryzen does brilliantly, as I absolutely adore my 3900X, but none of the reviews I've seen would I class as proper, and running an extremely overclocked CPU against another one at stock is just wrong, no matter which 'side' does it.

I look forward to some better-done reviews, and as I said in my last post, they may well show Ryzen in an even better light.

What do you think the German reviewers mean by this?
Unfortunately, further CPU tests could not be carried out due to the activation limit (copy protection) of the game. However, the tests show that in practice Death Stranding apparently cannot do much with more than 4 CPU cores with SMT activated. The results are the same for both the AMD and Intel CPUs with different numbers of cores. 6 cores bring a little boost, but that's about it.
 

Sorry, I re-edited my answer, so the wording is different now! :p

I was addressing your post about Intel losing in the game. You started saying Intel was only losing because the reviewer overclocked the AMD CPU with fast RAM, while the Intel CPU was at stock and deliberately gimped. So I decided to see if this was the case, and showed a result where the AMD CPU was running slow RAM on an X470, which meant it was even more gimped than the Intel CPU. The problem is that even the result where AMD was "winning" was within the margin of error, though it was winning more in the minimums (more on that later). Then look at the result with a gimped Ryzen 9 3900X: it's a few percent, i.e. margin of error, slower. They are hitting GPU limits in the tested scenes.
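To put "within margin of error" in numbers: a couple of percent between averages is smaller than typical run-to-run variance in game benchmarks. A trivial sketch; the FPS figures below are invented for illustration, not taken from any of the reviews:

```python
def relative_diff_pct(a: float, b: float) -> float:
    """Percentage by which result a leads result b."""
    return (a - b) / b * 100.0

# Hypothetical average-FPS numbers, purely for illustration.
fps_3900x = 172.0
fps_10900k = 175.0

print(f"10900K ahead by {relative_diff_pct(fps_10900k, fps_3900x):.1f}%")
# prints "10900K ahead by 1.7%", i.e. inside a typical 2-3% noise band
```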

Then the other result, with 3200MHz RAM, had the Ryzen 9 3900X being quicker than a Core i9 9900K, and both had the same speed RAM. Yet if you look at the other reviews, whether it was fast RAM, slow RAM, etc., the Ryzen 9 3900X was always faster than the Core i9 9900K, and the minimums were better. Now look at the thread-scaling results in the other two graphs in the OP: comparing the 6C/12T results for the Intel Core i9 10920X and AMD Ryzen 9 3950X, the AMD 6C/12T results are close to the Intel 8C/16T results. Look at the two reviews which tested the Ryzen 5 3600 and Ryzen 5 3600X: they are both close to the Core i9 9900K. The same goes for the simulated 4C/8T results on the Core i9 10920X and Ryzen 9 3950X; the Zen 2 CPU is quicker. And the Ryzen 3 3300X is quicker than the Core i7 7700K. If you want to test a game where RAM speed makes a difference, test Fallout 4, especially in settlements. That is probably one of the games which shows big differences.

Then look at the minimums: the Zen 2 CPUs seem to do quite well compared to Intel at similar core counts. Look at some of the Zen results (two of the reviews list them), and they are relatively much worse at similar core counts. This tells me the game is well optimised for Zen 2 CPUs, and I think it might be the branch of the engine being developed for Horizon Forbidden West, which is a PS5 title; the PS5 uses Zen 2 cores. It's quite possible Intel tries to improve performance with updates to this game, but it seems to do unusually well on Zen 2 (compared to Intel and earlier AMD CPUs).

We can agree to disagree about this, but that is how I am looking at the given information so far.
 
You guys are more prone to debating other people's work than doing your own. The single-thread DxO thread was a great example of people not willing to test but willing to argue, and to try to misdirect a single-core/thread-focused discussion into multi-core.

The last guy on here who was actually willing to test https://forums.overclockers.co.uk/threads/3800x-vs-9900k.18866559/page-10#post-33145998 "zx128k" sadly hasn't been on for some time. It was actually fun to compare notes, see what works on which platform and measure out the pros and cons on how to tune each platform. Imagine actually overclocking on a forum called "overclockers."

Outside of rare people like him, it seems most of the AMD community on here is more prone to defending other people's work with limited knowledge than doing it themselves and perhaps learning a few things along the way.
 
Maybe some others are prone to not understanding context either? ;)

MP and I were discussing DxO PhotoLab with regard to Ryzen performance, as when I used it I ran large jobs. Even on this forum not many people actually talk about DxO, so that is how it all started. They were basically saying they had bugs when they tried batch processing, so they switched to single images before export. So they wanted to see how it worked for THEIR workload. I told them I didn't see such bugs, but they wanted some results, so many of us provided them. Why not? I wanted to know how it performed with various CPUs in both cases too.

Yet that DxO thread was also a great example of some people having no appreciation of different workloads. People do understand that photographers tend to have different workflows, right? I have used DxO for years, processing hundreds of pictures in some jobs. PRIME NR is class-leading. The batch test can run up to 8 photos in parallel IIRC, which pushes up the number of cores used and the core utilisation. It's called throughput. This is why review sites test it that way; only reviewers who are actually photographers would know of it, as it's a very specialist piece of software.
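The throughput point is easy to demonstrate in miniature. This is a generic sketch of batch vs one-at-a-time processing, not DxO's actual API; process_photo is a made-up stand-in, and the sleep simulates per-photo work (a real CPU-bound export would want a ProcessPoolExecutor rather than threads):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def process_photo(name: str) -> str:
    """Hypothetical stand-in for a per-photo export step."""
    time.sleep(0.01)  # simulate the work for one photo
    return f"{name}.jpg"

photos = [f"IMG_{i:04d}" for i in range(32)]

# One photo at a time, like single-image export.
start = time.perf_counter()
seq = [process_photo(p) for p in photos]
t_seq = time.perf_counter() - start

# Up to 8 photos in flight, like a batch queue.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    batched = list(pool.map(process_photo, photos))
t_batch = time.perf_counter() - start

print(f"sequential {t_seq:.2f}s, batched {t_batch:.2f}s")
```

Same output either way; the batch run simply keeps more workers busy at once, which is exactly why batch benchmarks light up more cores.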

Then it was pointed out that people made it sound like most people overclock. Many don't, especially with the move to SFF systems and modern boost algorithms. AMD tends to do much better out of the box, and Intel seems to need more tweaking, which again suits different kinds of enthusiasts. So I'm not sure why stock results are something bad now, when so many photographers use laptops, as they are usable on the move. There is room for both.

It reminds me of what this place was back in the Athlon/Athlon 64 days; you always had some who thought the benchmarks were biased against Intel or something, which is an ironic reversal of fortunes really, considering how long the shoe has been on the other foot since those days. Intel could overclock more than AMD too, as long as you wanted to cool a furnace! Intel had HT! ;)

The worst thing is that this is literally just one, rather AMD-biased game too, so it's rather weird that it takes only one game for this response. I even tried to throw some of you a bone with Fallout 4, which is one of the most Intel-biased games out there. Can't please some here! :p
 
:D

The thing is, gaming at 4K one will be GPU-bottlenecked way before the CPU, even with a 6-core Ryzen 3600. So it makes no difference to me personally. By the time I need more cores I will be on a 4900X anyway, in a couple of years' time. I'll pick one up as people sell theirs to move on to a 5900X or some Intel equivalent :D

That's because, obviously, we're playing games that are made with very weak CPUs in mind. No one will make 2 different games. :)

But even using current older titles, you can still see improvements at normal resolutions and settings (1080p ultra): https://www.youtube.com/watch?v=Rutk9ErhKG4
It shows pretty well how adding more AI can put pressure on the CPU, and how even older FX CPUs can be good IF you give them work. Add advanced physics on top, and higher-core-count CPUs will be required.

How many games do we currently have with advanced AI, big crowds like Hitman and AC:U, or Red Faction-type physics? None.
 
Next-gen games are going to have more CPU physics, more fluid and weather simulation, NPC AI etc., which will definitely put more load on the CPU.
 
Let's hope so. By the time they do, I will have a 12-core 4900X anyway ;)
 
Some games can be bottlenecked even at 1080p due to using old engines that were built for a different era. I mention Fallout 4 as it only uses 6 cores, but loads the first two a huge amount (the engine is not very well optimised). What happens is that, due to the build system in the game, a huge number of draw calls are loaded onto one thread, and NPC AI also gets loaded onto another. I saw the same minimums at 1080p and 1440p in player-created settlements and dense parts of the map with a Xeon E3/Core i7 and a Ryzen 5, using a GTX 960, an RX 470 and a GTX 1080. Even with modern AMD and Intel CPUs the same dips are seen, and the engine seems to be optimised only up to Skylake-uarch consumer CPUs. It is not optimised for Zen, nor for any of the Intel HEDT CPUs which use a mesh bus (which have lower performance at stock than Zen!).

However, as some indicated before me, games will have to thread better; you can't just expect games to load a few threads, as there are diminishing returns in single-core performance. The reason we are not seeing things such as in Red Faction etc. is that the graphics load is probably much higher in modern titles, so when everything is loaded onto only a few threads it's too much. This is why we seem to be stuck with the same AI models for over a decade, and have very static worlds in terms of destruction, environmental physics, etc.
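The diminishing returns from piling work onto a few threads are basically Amdahl's law: if some fraction of each frame is stuck on one thread (draw-call submission, the AI tick), extra cores barely help. A quick illustration with an assumed serial fraction, picked only to make the shape of the curve visible:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Upper bound on speedup when serial_fraction of the work cannot be parallelised."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Suppose 40% of the frame is serial (e.g. one overloaded main thread):
for cores in (4, 8, 16):
    print(cores, "cores ->", round(amdahl_speedup(0.4, cores), 2), "x")
# The curve flattens fast: going from 8 to 16 cores gains under 10%,
# which is why engines that don't split that serial work see little from big CPUs.
```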

The consoles having weak cores doesn't help either, but at the same time it has meant companies have been forced to thread much better to extract as much performance as possible.

Splitting work across more threads is harder to do, but it has to be done. With the consoles having 8 modern desktop-class CPU cores, people can't cling to expecting workloads to be prioritised onto a few threads for some weird reason, as this mentality is holding back PC gamers. In the end, enthusiasts should be happy for games to be more multi-threaded, as it actually gives us an excuse to buy faster CPUs. If not, you could have just bought a Core i7 7700K years ago and not upgraded for 10 years. This is realistically what happened if you bought a Core i7 2600K and overclocked it to 5GHz; you really didn't need to upgrade it for years.
 

I'd say there are 2 main reasons:

1) they can get away with it
2) weak CPUs from the consoles.

You can multithread the code for AI and physics to such an extent that you can do both on the GPU, and either way, CPU or GPU, you can have it running alongside your main 3D thread. But if they can get away with giving gamers less, why bother? :)
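Structurally, "AI alongside the main 3D thread" is just a fork/join per frame: kick the AI tick off to a worker, keep doing render work, collect the result before the frame ends. A toy sketch of that shape; everything here (the NPC data, ai_worker) is invented for illustration, not real engine code:

```python
import queue
import threading

results: "queue.Queue[list]" = queue.Queue()

def ai_worker(npc_positions: list, out: queue.Queue) -> None:
    """Simulated AI tick running off the main thread."""
    decisions = [("move", x + 1) for x in npc_positions]
    out.put(decisions)

# Main "3D thread": start the AI tick for this frame...
t = threading.Thread(target=ai_worker, args=([10, 20, 30], results))
t.start()
# ...the main thread would submit draw calls here, in parallel...
t.join()                # ...then join before using this frame's AI output
frame_ai = results.get()
print(frame_ai)  # prints [('move', 11), ('move', 21), ('move', 31)]
```

A real engine would use a persistent job system rather than spawning a thread per frame, but the synchronisation pattern is the same.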
 

Well, I agree with this; I didn't explain well enough that they are just sandpapering over old engines which were made for a few cores. The problem is they don't want to invest in proper engines, to save money, so too much is pushed onto these poor engines. If an engine is realistically made to handle only a few threads, you end up bottlenecking the whole game and needing to cut back on complexity, and when you also have better graphics fighting for those limited CPU resources, it's all too much. Using modern engines which can split the workloads up makes far more sense... but involves spending more money.

But ironically, the weak console CPUs have probably been the reason we started to see better multi-threading too: the console refreshes progressively increased GPU processing power while keeping the same weak CPU, so the only way forward was to use more threads to maximise resource utilisation.

Gamers do contribute to this too, especially PC gamers who now settle for early-access games with poor optimisation, then throw hardware at them. So there are a lot of PC exclusives with wonky optimisation as well. The developers save money, and the costs are transferred onto the gamer. This is why I try to avoid early-access games.
 