
AMD Zen 2 (Ryzen 3000) - *** NO COMPETITOR HINTING ***

I will consider AMD, and in fact I already am. Arma 3 is about as tough a single-threaded test as there is; I don't understand why AMD's CPU topology should be an issue.
Arma 3 is a really popular game, it came out six years ago (enough time to build a CPU that can get good performance out of it), and it exemplifies the difference between turgid console ports and PC gaming.

All I expect is parity, rather than a late-to-the-party approximation that's 10-15% slower. Racing sims, flight sims and Arma 3 are the types of game that demand the most out of CPUs, and if AMD can't cut it there then I'm not interested; I'm GPU limited in everything else at 4K.
I think there's been a patch since that video, but a difference of 35 fps versus 55 fps at 4K in Arma 3 means more to me than a CS:GO jump from 250 to 290 fps at 1080p.
To me it suggests there's a fundamental flaw in the architecture of AMD CPUs.

If the newest Ryzen 3000 CPUs still run 10-25% slower than Intel CPUs in Arma 3 or racing sims (against an Intel CPU architecture that hasn't progressed substantially in six years), or even 10% slower, then it begs the question: wtf have AMD been doing all this time?
If a big-budget game released in late 2013 was made to run basically completely single-threaded, it's the makers of that game who are flawed.
With quad cores having appeared five years before that, the game had zero justification for it.
For small-budget/indie games that would have been understandable, but not for a game at that level.

And if later expansions etc. haven't fixed that, it kinda suggests either laziness on the maker's part or problems in the code.
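To put rough numbers on that: here's a quick Amdahl's law sketch in Python showing why piling on cores barely helps a mostly serial engine. The 90% serial fraction is just an illustrative guess on my part, not a measured figure for Arma 3:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelisable fraction and n is the core count.
def speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Illustrative guess only: if 90% of frame time is serial (p = 0.1),
# even 16 cores get you almost nothing.
for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores: {speedup(0.1, cores):.2f}x speedup")
# ->  2 cores: 1.05x,  4: 1.08x,  8: 1.10x, 16: 1.10x
```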


And you should also be asking wtf Intel has been doing with all these Meltdown/Spectre, Spoiler, Fallout, RIDL and ZombieLoad vulnerabilities.
As Intel was in no hurry to give users more performance per money, they should have had the resources to fix all the shortcuts taken earlier in their CPU design.
I mean, the Zen microarchitecture managed to avoid that kind of leakiness with a design made years before those flaws were found.
 
I wasn't going to take them as gospel, just some numbers to mull over for the next couple of weeks. I really want the 12-core for the future-proofing aspect of it. Plus I do a lot of flight simming, which requires a lot of different programs to be open at once, eating up CPU resources.

In that case you will know that MS have a new and fully updated flight sim coming out next year. It's no coincidence that Lisa had the head of MS gaming on the stage at E3, and it's also no coincidence that MS have suddenly managed to sort out the Windows scheduler.
I'm laying full odds that MS Flight Simulator 2020 will need, and fully use, every core you have.
 
Stock issues for Intel are meant to be quite severe.
We were told they'd continue until mid-2020, IIRC, when I was at Stone.

If AMD get solid supply they'll be able to mop up.

The problem for Stone is that they are tiny in comparison to the HPs/Dells/IBMs of the world (direct sales will take priority over distributors), so even though the constraint is due to lessen in Q4, it won't help the smaller fish in the pond well into Q1 or Q2 '20. By then EPYC Rome servers will be establishing themselves as the obvious choice if you want stock, performance and low TCO, oh yeah, and security :p
 

Exactly.
It's really quite bad, and as a smaller player they're not going to be ordering in the quantities that the other OEMs will. But when an AMD system takes a quarter of the time to ship, the issues the shortage is causing speak for themselves.

Tell you what AMD need to do: get some 6-core APUs out.
 
I'm calling BS on the 3800X and 3600X scores. I just downloaded and ran this on my own 2700X and got the following result.
Is anyone seriously suggesting that a 3600X is only going to beat my 2700X by 54 points? Far too much unsubstantiated crap on here :mad:
http://oi65.tinypic.com/uoihj.jpg

When the benchmark tests more than just the CPU, sure. It's possible. I estimated a stock 2700X to be around 13% faster than the 3600X, going off what is known.

This is why I dislike SYSmark as well, and Intel's reliance on it over the past half a decade.


@Martini1991 wtf has any OEM system got to do with this thread? :mad: You know full well that nearly everyone in this thread, as well as in most other threads on this entire site, is an overclocker. If all you want to do is talk about "stock" systems, then wtf are you even bothering posting in here for?

Calm down, man. I don't see the problem. If it's using Zen 2, then surely it fits in with the scope of the thread.

Jesus! You got the decorators in or something?

LOL.
 
I don't disagree with anything you say; I'd just like more performance from AMD in Arma 3.
I'd also like Intel not to take the p**s with CPU prices, and Nvidia to stop doing the same with GPU prices. I imagine the ancient single-threaded code from Operation Flashpoint is an issue in Arma 3, but both AMD and Intel had CPUs way back then, and Intel's disproportionate superiority to this day is still annoying.
I agree Intel has been having a laugh at our expense (literally and figuratively) and I desperately want it to stop.
 


I did see that, it looks stunning and I think it will be a very good flight sim platform for the future :)
 
I think you're right, but I'm hoping you aren't, otherwise I might just wait for Zen 3, as I don't want to spend CPU money with Intel or GPU money with Nvidia (haven't had AMD for 5 years).

All we can really do is wait for reviews; from what AMD has shown, they've made some drastic improvements. Although GTA V, another game they're usually poor in, only showed an 11% improvement for the 3800X compared to the 2700X.

5YyrN3S.jpg

Then again, according to their slides that was enough to match Coffee Lake performance, despite the lower clock speed.

0BYWuZi.jpg

Who knows how Arma 3 will handle Zen 2 until reviewers and users get their hands on it.

I'm running a 5820K, so I'm itching to upgrade myself; my CPU is definitely holding back my GPU in GTA V alone, seeing around 45-57% GPU usage at best while running 3440x1440 with all settings maxed, bar MSAA set to only 4x.
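In case anyone wants to sanity-check a bottleneck like this themselves, here's roughly how I'd log GPU usage while playing. It assumes an NVIDIA card with nvidia-smi on the PATH, and the one-second poll is just my choice:

```python
import subprocess
import time

# Poll GPU utilisation once per second via nvidia-smi. Sustained
# readings well below ~95% while in-game usually mean a CPU limit.
for _ in range(30):
    util = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(util)  # e.g. "53 %"
    time.sleep(1)
```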
 
Have you seen the comment section on that video? He's being mauled like never before because his whole case and argument are flawed.
Steve is using slow-paced in-game benchmark footage to try to make his case, not the fast-paced actual gameplay that AMD used, and that streaming is all about.

I think the whole thing is just silly.

I noticed there are 80 fps caps in the graph, meaning a lot of the testing was done at silly framerates; streaming above 60 fps is just stupid in my view. The encoding was done on the CPU when the GPU has accelerated encoding, and the vast majority of people will of course use the GPU for streaming, which makes the whole subject silly. Professional streamers, the ones with thousands of viewers who do it full time, tend to use dedicated streaming hardware rather than the same machine for both game and stream. Hobby streamers may only use one rig, but will probably just offload the encode to the GPU anyway. I streamed GTA 5 on my 8600K at 60 fps and 1600p with no dropped frames.

It's kind of like reviewers posting encoding and Cinebench results when the viewers are only interested in gaming performance.

His slow-vs-fast point, as I understand it, rests on the fact that no one uses the slow preset because it provides no visual improvement, so that point is valid.

I expect he made this video because people like me questioned his stance on the Principled Technologies content he made, and he wanted to make an effort not to appear to be an AMD shill. I think the whole thing is stupid, as in reality people won't be doing the workload that AMD showcased.
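For anyone unclear on the CPU-vs-GPU encoding split being argued about, here's a rough sketch of the two paths, driving ffmpeg from Python. It assumes an ffmpeg build with libx264 and NVENC support; the file names and bitrate are placeholders, not anything AMD or Steve actually used:

```python
import subprocess

SOURCE = "gameplay.mkv"  # placeholder capture file
BITRATE = "6000k"        # placeholder stream bitrate

# CPU path: x264 software encode. Quality scales with slower presets,
# but so does the CPU time stolen from the game itself.
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c:v", "libx264", "-preset", "medium",
    "-b:v", BITRATE, "-maxrate", BITRATE, "-bufsize", "12000k",
    "cpu_encode.mp4",
], check=True)

# GPU path: NVENC hardware encode, near-zero CPU cost, which is why
# most single-PC streamers just offload to the GPU.
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c:v", "h264_nvenc",
    "-b:v", BITRATE, "-maxrate", BITRATE, "-bufsize", "12000k",
    "gpu_encode.mp4",
], check=True)
```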
 
And yet again Steve's missing the point. The test wasn't misleading; he even states the results are repeatable, but they didn't try that because reasons. Was it a fair test of how most streamers stream today? No. But as someone who games at 1440p and streams on and off, having the extra power to stream at 1440p (or 4K for others) will probably be nice for producing better quality. But hey, no one will ever want to move on from 1080p, I guess. This video comes across as clickbait, manufacturing outrage out of nothing to try and fill the void until July.

It's very misleading.

You don't use a preset just because it's there; it provides no visual improvement.
Also, hardly any streamers stream in the way AMD showcased.

I agree it's clickbait though. So I'm of the opinion the video was made to get some content out, but also that the AMD showcase was stupid and unrealistic.
 
I hear you, but surely you can squeeze another 10% from that CPU? I'm at 99/98% on each 1080 Ti in SLI at 3840x2160 with 4x MSAA, so I'm surprised you're CPU limited in GTA 5. What's your memory speed, out of interest?
 
I somewhat agree there. I wouldn't be surprised if the same thing is happening with Sunny Cove.

I wouldn't pay too much attention to Geekbench scores. It's about as worthless as CPU-Z lol.

Geekbench is more relevant to gaming performance than Cinebench, just to point out.

Cinebench tests rendering performance.

This is what Geekbench does:

http://support.primatelabs.com/kb/geekbench/interpreting-geekbench-4-scores

Geekbench 4 CPU Workloads
Geekbench 4 uses a number of different tests, or workloads, to measure CPU performance. The workloads are divided into four different subsections:

  • Crypto: Crypto workloads measure the crypto instruction performance of your computer by performing cryptography tasks that make heavy use of crypto instructions. While not all software uses crypto instructions, the software that does can benefit enormously from it.

  • Integer: Integer workloads measure the integer instruction performance of your computer by performing processor-intensive tasks that make heavy use of integer instructions. All software makes heavy use of integer instructions, meaning a high integer score indicates good overall performance.

  • Floating Point: Floating point workloads measure floating point performance by performing a variety of processor-intensive tasks that make heavy use of floating-point operations. While almost all software makes use of floating point instructions, floating point performance is especially important in video games, digital content creation, and high-performance computing applications.

  • Memory: Memory workloads measure memory latency and bandwidth. Software working with large data structures (e.g., digital content creation) or with referential data structures (e.g., databases, web browsers) rely on good memory performance to keep the processor busy.

People assume Cinebench is some kind of god because all their favourite reviewers use it. They use it not because it's meaningful and accurate, but because it showcases new products well: it really favours logical threading and higher core counts, which flatters both new AMD and Intel flagship products.
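For illustration, here are toy Python stand-ins for three of those four Geekbench categories (crypto skipped). These are nothing like Geekbench's real kernels, just a sketch of the integer/float/memory split being described:

```python
import time

def bench(label, fn, n):
    t0 = time.perf_counter()
    fn(n)
    print(f"{label}: {time.perf_counter() - t0:.3f}s")

# Toy stand-ins for Geekbench's workload categories -- nothing like
# its real kernels, just to show the integer/float/memory split.
bench("integer", lambda n: sum(i * 3 // 7 for i in range(n)), 2_000_000)
bench("float",   lambda n: sum(i * 0.5 / 1.3 for i in range(n)), 2_000_000)
bench("memory",  lambda n: list(range(n))[::-1].count(0), 2_000_000)
```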
 
Cinebench is floating point, which is gaming performance.
Cinebench multi isn't the best way to test realistic gaming performance; you can extrapolate using it, but the best way is to run Cinebench single-threaded and extrapolate from that (sketched below).

Name a CPU where applying that rule doesn't work.

I'll never just take a multithreaded Cinebench result to mean anything without context, as context is important.

But Cinebench is FPU heavy, which is what gaming is.
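Written out, that extrapolation rule is just a ratio. All the numbers below are made-up placeholders, and real games rarely scale this cleanly:

```python
# Naive single-thread extrapolation: scale a known FPS figure by the
# ratio of single-threaded scores. All numbers are made-up
# placeholders; real games rarely scale this cleanly.
def estimated_fps(baseline_fps, baseline_1t_score, candidate_1t_score):
    return baseline_fps * candidate_1t_score / baseline_1t_score

# A game doing 60 fps on a 180-point chip, estimated on a 200-point chip:
print(round(estimated_fps(60.0, 180, 200), 1))  # -> 66.7
```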
 
If Arma 3 is the ONLY game that AMD is weak in, then surely that suggests there's a "fundamental flaw" in Arma 3.

It's not; reviewers tend to only test high-profile games, "famous games" so to put it.

There are many, many games where Intel will run better than AMD for obvious reasons, but because they're not well known and not on reviewers' radar, the subject isn't touched upon.

Load up Lightning Returns on an AMD rig and the result will be painful. Even on a 5 GHz last-gen Intel chip it can only hit 30-40 fps in Yusnaan consistently.

Of course, Lightning Returns is just a really badly coded mess, and Arma 3 is probably similar. But if you enjoy those games, you don't stop playing them; you buy more suitable hardware. Zen 2 at least closes the gap considerably, so a high-end Zen 2 chip should now be within perhaps 5-20% of high-end Intel chips in these types of games.
 

It wasn't misleading at all. This might come as a surprise to some people, but there are still a lot of ISPs that supply a crappy upload speed. The slow preset is great for that, as it gives better quality at a lower bitrate of around 2,500 kbps, where that same bitrate on the veryfast/faster/fast presets will look terrible.

And the point was what the viewer sees, which with the settings AMD used is a slideshow on the 9900K.

Those 4 extra cores/8 extra threads will make a huge difference to encoding quality.

If you watch Twitch much you will hear how most streamers get OCD about their stream quality; it really is a thing, including for those with dedicated streaming PC setups.
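If anyone wants to test the slow-vs-veryfast claim at a starved bitrate rather than argue about it, here's a sketch that encodes the same clip both ways and scores each encode against the source with ffmpeg's SSIM filter. The file names are placeholders:

```python
import subprocess

SOURCE = "capture.mkv"  # placeholder near-lossless reference clip

for preset in ("veryfast", "slow"):
    # Encode the same clip at the same starved 2500k bitrate...
    subprocess.run([
        "ffmpeg", "-i", SOURCE, "-c:v", "libx264",
        "-preset", preset, "-b:v", "2500k",
        f"{preset}.mp4",
    ], check=True)
    # ...then score it against the source. Higher SSIM = closer to the
    # original; slow should come out ahead at this bitrate.
    subprocess.run([
        "ffmpeg", "-i", f"{preset}.mp4", "-i", SOURCE,
        "-lavfi", "ssim", "-f", "null", "-",
    ], check=True)
```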
 

While I'll gladly agree that there are games that will run better on Intel than on AMD, because Intel has the core-for-core performance advantage, I must interject that any game running at 30-40 fps on a top-tier setup is itself at fault, not the hardware.
 