AMD vs Intel Single threading?

@MartinPrince Just wanted to say thanks for the effort you put into this thread.

I would suggest, though, that you're wasting your time arguing with the usual AMD die-hards. They are emotionally invested in AMD and will not agree under any circumstances that AMD chips are simply not the best at every task: single-threaded, multi-threaded, any workload - their answer is always that AMD is best.

You won't have any joy no matter how much data and facts you can present. The "truth" is whatever they want it to be.

It's sad because we all know AMD are making good chips these days - they don't need the campaign of disinformation from the AMD super-fans to sell themselves.

But they do still have some (limited) weaknesses against Intel chips, and the unbiased among us enjoy reading these threads and seeing them laid bare.

So thanks to you again :) Was a good read.
I suffer from this condition called "BS Aversion", so when I see somebody spouting BS it's difficult to sit idly by. :)

The thing is, I love my Ryzen 3900X; it is an absolutely brilliant CPU. I've had it pretty much from Day 1 of release. Pound for pound it's the best CPU money can buy right now, and the amount of processing power you get for the wattage used is groundbreaking. I can run mine nearly passively, so the fans are hardly spinning, and even under full load it will barely touch 80°C! But when, in a thread about single-threaded performance, somebody posts up an IPC chart and insinuates that "Ryzen is way ahead in single-threaded tasks", then I know that is distorted nonsense.
 
The reason I asked is because I'm considering jumping over to AMD and had concerns with Linux, as multithreading can still be flaky or non-existent with legacy apps. I've noticed GNOME Boxes can do some freaky things like suddenly using only one core.

Slightly different single threading between the two i can live with.

I don't think you would see much difference between the two, with maybe the exception of cache sizes.
 
TBF, AMD does have superior IPC, as AnandTech measured:

https://images.anandtech.com/graphs/graph14605/111165.png

Intel, however, has a higher peak clock speed, so single-threaded performance is higher. Even the single-picture DxO transcodes are indicative of this, which is a specific workload used as a precursor to importing into Lightroom or Photoshop. However, if you do what I do with DxO and tend to batch-process things, the AMD CPUs are pound for pound better due to their multi-threaded performance.
 
TBF, AMD does have superior IPC, as AnandTech measured:

https://images.anandtech.com/graphs/graph14605/111165.png

Intel, however, has a higher peak clock speed, so single-threaded performance is higher.

The difference is quite easy to see there.

Let's use the SPEC2017 result as an example. The 3900X has 7% higher IPC, but the 9900K has 15 to 20% higher clock speed. The extra 15-20% clock speed is enough to beat the 7% extra IPC.
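To put rough numbers on that, here's a quick sketch that assumes single-thread performance simply scales as IPC x clock (exactly the linearity debated below) and uses the thread's approximate figures rather than any measurement:

Code:
/* Back-of-envelope single-thread comparison, assuming perf ~ IPC x clock.
 * The figures are the rough ones quoted in this thread, not measurements. */
#include <stdio.h>

int main(void) {
    double ipc_adv = 1.07;                 /* 3900X: ~7% higher IPC */
    double clk_lo  = 1.15, clk_hi = 1.20;  /* 9900K: ~15-20% higher clock */

    printf("9900K ahead by %.1f%% to %.1f%% (naive linear model)\n",
           (clk_lo / ipc_adv - 1.0) * 100.0,
           (clk_hi / ipc_adv - 1.0) * 100.0);
    return 0;
}

Under that naive model the clock advantage nets Intel roughly 7-12% in single-threaded work; how much of that survives in practice is what the posts below argue about.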
 
The difference is quite easy to see there.

Let's use the SPEC2017 result as an example. The 3900X has 7% higher IPC, but the 9900K has 15 to 20% higher clock speed. The extra 15-20% clock speed is enough to beat the 7% extra IPC.

Clock speed scaling is not linear. 15-20% extra clock won't give 15-20% extra performance. It will likely still end up faster, but probably only by 1-3%.
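As a rough illustration of why it can fall well short of the headline clock gain: only the fraction of time the core actually spends executing (rather than waiting on memory or I/O) speeds up with the clock. The core-bound fractions below are made up purely for illustration:

Code:
/* Toy Amdahl-style model: the core-bound share of runtime scales with clock,
 * the memory/I-O-bound share does not. Fractions are illustrative only. */
#include <stdio.h>

int main(void) {
    double clock_gain   = 1.175;                      /* midpoint of +15-20% */
    double core_bound[] = {1.00, 0.80, 0.50, 0.20};   /* share of time that is core-limited */

    for (int i = 0; i < 4; i++) {
        double f        = core_bound[i];
        double new_time = f / clock_gain + (1.0 - f);
        printf("core-bound %.0f%% -> %.1f%% faster\n",
               f * 100.0, (1.0 / new_time - 1.0) * 100.0);
    }
    return 0;
}

A fully core-bound task gets the whole 17.5%; a heavily memory-bound one only gets a few percent.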
 
Clock speed scaling is not linear. 15-20% extra clock won't give 15-20% extra performance.
That's the first I'm hearing about it.

Why wouldn't it be linear, or near as dammit?

In any case, real workloads were shown where the advantage was ~xx%.

Not theoretical, but real, measured workloads.

e: finding the figures...
 
That's the first I'm hearing about it.

Why wouldn't it be linear, or near as dammit?

Because that isn't the way processors have worked for as long as I can remember. I'm not sure I understand the technical reasons for it, but if you take a 4GHz processor and overclock it to 5GHz, you won't get a direct 25% performance boost from it.

I remember an old video about the Intel NetBurst architecture where Intel explained the complexities of increasing clock. Apparently a 10% clock boost would provide a 6% real-world boost with a 30% power increase. That was for NetBurst, so probably not accurate across the board, but it shows what I mean.
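Taking those quoted NetBurst figures at face value (+10% clock, +6% performance, +30% power), the scaling works out like this:

Code:
/* Plugging in the NetBurst figures as quoted above: +10% clock, +6% perf, +30% power. */
#include <stdio.h>

int main(void) {
    double clock = 1.10, perf = 1.06, power = 1.30;
    printf("scaling efficiency: %.0f%% of the clock gain shows up as performance\n",
           (perf - 1.0) / (clock - 1.0) * 100.0);            /* 60% */
    printf("performance per watt: %.0f%% of the original\n",
           perf / power * 100.0);                            /* ~82% */
    return 0;
}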
 
Because that isn't the way processors have worked for as long as I can remember. I'm not sure I understand the technical reasons for it, but if you take a 4GHz processor and overclock it to 5GHz, you won't get a direct 25% performance boost from it.

I remember an old video about the Intel NetBurst architecture where Intel explained the complexities of increasing clock. Apparently a 10% clock boost would provide a 6% real-world boost with a 30% power increase. That was for NetBurst, so probably not accurate across the board, but it shows what I mean.
That will depend on the software being run.

You could easily create a small program using just CPU registers for data access where the perf increase was completely linear.
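Something like this rough sketch, for example: the working set is a couple of registers, every iteration depends on the previous one, and nothing touches memory in the hot loop. Pin the clock (in the BIOS or via cpufreq), run it at two frequencies, and iterations per second should track the clock almost 1:1. The constants are arbitrary; it's a sketch, not a rigorous benchmark:

Code:
/* Tight register-only loop: throughput should scale near-linearly with core clock. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    const uint64_t iters = 2000000000ULL;   /* ~2 billion dependent multiply-adds */
    uint64_t x = 1;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < iters; i++)
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;  /* register-only work */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f million iterations/sec (checksum %llu)\n",
           iters / secs / 1e6, (unsigned long long)x);
    return 0;
}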
 
Sure but nobody has ever said in any thread here before (I've been around) that +20% clock speed is not (close to) +20% performance.

So I'd love to see some evidence to back up this claim, "in the real world".

And obviously that isn't going to be game benchmarks, but a synthetic CPU test.

e: Just because I'd be genuinely interested to see what the claimed diminishing returns actually are.
 
Also, if there is not a linear gain from increasing clock speed, then there must also not be a linear gain from increasing IPC.

Because both translate to more CPU instructions per unit time.

In other words, the claim that AMD is closer to Intel than the combined difference in IPC and clock speed would suggest must be false.

I don't think that's anything other than simple logic tbh.
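Put as a formula, that simple logic is just: instructions per second = IPC x clock. A given percentage bump to either factor multiplies through identically (toy numbers below):

Code:
/* Perf = IPC x clock, so +7% to either factor predicts the same gain in this model. */
#include <stdio.h>

int main(void) {
    double ipc = 1.0, ghz = 4.0;                       /* arbitrary baseline */
    double base = ipc * ghz;
    printf("+7%% IPC:   x%.3f\n", (ipc * 1.07) * ghz / base);
    printf("+7%% clock: x%.3f\n", ipc * (ghz * 1.07) / base);
    return 0;
}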
 
Sure but nobody has ever said in any thread here before (I've been around) that +20% clock speed is not (close to) +20% performance.

So I'd love to see some evidence to back up this claim, "in the real world".

And obviously that isn't going to be game benchmarks, but a synthetic CPU test.

e: Just because I'd be genuinely interested to see what the claimed diminishing returns actually are.

It'll very likely be different depending on architecture but it has never been linear, with overclocking proving that for a long time. Graphics cards are the same in that way. 10% overclock doesn't mean 10% more frame rate.
 
It'll very likely be different depending on architecture but it has never been linear, with overclocking proving that for a long time. Graphics cards are the same in that way. 10% overclock doesn't mean 10% more frame rate.

You need to pick benchmarks that scale well. That's why Cinebench is good for CPUs, as it scales well with cores and frequency. The progression between 5.0/5.1/5.2GHz is very linear on my chip.
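For reference, this is what "very linear" looks like across 5.0/5.1/5.2GHz: each 100MHz step is about 2%. The baseline score below is just a made-up placeholder; only the percentage steps matter:

Code:
/* Linear single-core scaling across 5.0/5.1/5.2 GHz. Baseline score is a placeholder. */
#include <stdio.h>

int main(void) {
    double base_score = 500.0;               /* hypothetical score at 5.0 GHz */
    double clocks[]   = {5.0, 5.1, 5.2};
    for (int i = 0; i < 3; i++)
        printf("%.1f GHz -> %.0f  (%+.1f%% vs 5.0)\n",
               clocks[i], base_score * clocks[i] / 5.0,
               (clocks[i] / 5.0 - 1.0) * 100.0);
    return 0;
}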
 
It'll very likely be different depending on architecture but it has never been linear, with overclocking proving that for a long time. Graphics cards are the same in that way. 10% overclock doesn't mean 10% more frame rate.
But you can't really use games to benchmark a single component like that.

Nor could you, for example, use a system where there is a lot of (storage) I/O going on to benchmark the performance delta from a CPU overclock.

That might be "real world" but it does not in any way answer the question as to whether CPU performance scales linearly with clock frequency.
 
Clock speed scaling doesn't seem to be linear, although it's currently not doing too badly in that regard.
Things like memory access speeds and latency will hold it back when you just add more core clock.
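A rough sketch of that effect, and the opposite of the register-only loop earlier: every load depends on the previous one and misses the caches, so the loop is bound by memory latency and a higher core clock barely changes the nanoseconds per load. Array size and iteration counts are just illustrative:

Code:
/* Pointer-chasing loop bound by memory latency rather than core clock. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define N (1u << 25)   /* 32M entries x 4 bytes = 128 MB, well past any L3 */

static uint64_t rng = 88172645463325252ULL;
static uint64_t xorshift64(void) {          /* small PRNG for the shuffle */
    rng ^= rng << 13; rng ^= rng >> 7; rng ^= rng << 17;
    return rng;
}

int main(void) {
    uint32_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: one single cycle visiting every element in a
       pseudo-random order, so the hardware prefetcher can't help. */
    for (uint32_t i = 0; i < N; i++) next[i] = i;
    for (uint32_t i = N - 1; i > 0; i--) {
        uint32_t j   = (uint32_t)(xorshift64() % i);   /* j in [0, i-1] */
        uint32_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    const uint64_t loads = 100000000ULL;               /* 100M dependent loads */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    uint32_t p = 0;
    for (uint64_t i = 0; i < loads; i++)
        p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f ns per load (p=%u)\n", secs * 1e9 / (double)loads, p);
    free(next);
    return 0;
}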
 