So the rest of the system is only using 27 watts?
I suffer from this condition called "BS Aversion", so when I see somebody spouting BS it's difficult to sit idly by.
@MartinPrince Just wanted to say thanks for the effort you put into this thread.
I would suggest, though, that you're wasting your time arguing with the usual AMD die-hards. They are emotionally invested in AMD and will not agree under any circumstances that AMD chips are simply not the best at every task: single-threaded, multi-threaded, any workload - their answer is always that AMD is best.
You won't have any joy no matter how many facts and how much data you present. The "truth" is whatever they want it to be.
It's sad, because we all know AMD are making good chips these days - they don't need a disinformation campaign from the AMD super-fans to sell.
But they do still have some (limited) weaknesses against Intel chips, and the unbiased among us enjoy reading these threads and seeing them laid bare.
So thanks to you again. It was a good read.
The reason I asked is because I'm considering jumping over to AMD and had concerns with Linux, as multithreading can still be flaky or non-existent with legacy apps. I've noticed Gnome-Boxes can do some freaky things like suddenly using one core.
Slightly different single-threaded performance between the two I can live with.
TBF, AMD does have superior IPC, as Anandtech measured:
https://images.anandtech.com/graphs/graph14605/111165.png
Intel, however, has a higher peak clock speed, so its single-threaded performance is higher.
The difference is quite easy to see there.
Let's use the SPEC2017 result as an example. The 3900X has 7% higher IPC, but the 9900K has 15-20% higher clock speed. The extra 15-20% clock speed is enough to beat the 7% extra IPC.
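For what it's worth, the arithmetic behind that claim is easy to make explicit. This is only a back-of-the-envelope sketch in C: the percentages are the rough figures from the post above, and it assumes perfectly linear clock scaling, which is exactly what gets disputed in the posts below.

```c
#include <stdio.h>

int main(void) {
    /* Normalise Intel's IPC to 1.00, give the 3900X the quoted 7% IPC
     * advantage, and give the 9900K the midpoint of the quoted 15-20%
     * clock advantage.  Linear scaling is assumed throughout. */
    double amd_ipc     = 1.07;
    double intel_ipc   = 1.00;
    double amd_clock   = 1.00;
    double intel_clock = 1.175; /* midpoint of +15% .. +20% */

    double ratio = (intel_ipc * intel_clock) / (amd_ipc * amd_clock);
    printf("9900K vs 3900X under a linear model: %.2fx\n", ratio); /* ~1.10x */
    return 0;
}
```

Under those assumptions the 9900K comes out roughly 10% ahead; how much of that survives in practice depends on the linearity question argued next.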
Clock speed scaling is not linear. 15-20% extra clock won't give you 15-20% extra performance.
That's the first I'm hearing about it.
Why wouldn't it be linear, or near as dammit?
Because it isn't the way processors have worked for as long as I can remember. I'm not sure I understand the technical reasons for it, but if you take a 4GHz processor and overclock it to 5GHz, you won't get a direct 25% performance boost.
I remember an old video about the Intel NetBurst architecture where Intel explained the complexities of increasing the clock. Apparently a 10% clock boost would provide a 6% real-world boost with a 30% power increase. That was for NetBurst, so probably not accurate across the board, but it shows what I mean.
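One common reason for the shortfall: a lot of real code spends its time waiting on memory, and DRAM latency doesn't change when you raise the core clock. The pointer-chasing loop below is a standard toy microbenchmark (my own sketch, not anything from the thread) that shows the extreme case - nearly every iteration stalls on a cache miss, so extra core clock buys almost nothing.

```c
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 24) /* 16M entries * 8 bytes = 128 MB, far bigger than any cache */

int main(void) {
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: shuffle the identity array into one big random
     * cycle, so accesses are unpredictable and the prefetcher can't help. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    /* Pointer chase: each load depends on the previous one, so the core
     * sits stalled on DRAM latency for almost the entire loop.  Timing
     * this at different core clocks shows very little difference. */
    size_t p = 0;
    for (size_t hops = 0; hops < N; hops++) p = next[p];

    printf("%zu\n", p); /* use the result so the loop isn't optimised away */
    free(next);
    return 0;
}
```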
That will depend on the software being run.
You could easily write a small program using just CPU registers for data access, where the performance increase would be completely linear.
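That register-only case is simple to sketch. The loop below (my own illustration, not code from the thread) is a single dependent chain of register-to-register operations: with no memory traffic inside the loop, run time is just iterations × chain latency ÷ core clock, so +20% clock really is about +20% performance here.

```c
#include <stdio.h>

int main(void) {
    unsigned long long x = 88172645463325252ULL;

    /* One long dependency chain of multiply-adds that never leaves the
     * registers.  The LCG-style update keeps the compiler from collapsing
     * the loop into a closed form at -O2. */
    for (long i = 0; i < 1000000000L; i++) {
        x = x * 6364136223846793005ULL + 1442695040888963407ULL;
    }

    printf("%llu\n", x); /* print the result so the loop isn't deleted */
    return 0;
}
```

Compile with something like gcc -O2 and time it at two different clock speeds; the scaling for a loop like this is about as close to linear as you'll get.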
Sure, but nobody has ever said in any thread here before (and I've been around) that +20% clock speed is not (close to) +20% performance.
So I'd love to see some evidence to back up this claim, "in the real world".
And obviously that isn't going to be game benchmarks, but a synthetic CPU test.
e: Just because I'd be genuinely interested to see what the claimed diminishing returns actually are.
It'll very likely be different depending on the architecture, but it has never been linear, and overclocking has proven that for a long time. Graphics cards are the same in that way: a 10% overclock doesn't mean a 10% higher frame rate.
But you can't really use games to benchmark a single component like that.