So what's the verdict so far on the Ryzen Processors?

Intel currently doesn't have the fastest processor overall - that would be some EPYC or Threadripper.


https://www.extremetech.com/computi...f-intel-skylake-sp-xeon-massive-server-battle

It is worth going to the actual review itself, where just as often Intel has the faster solution, such as:

https://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade/18

And the closing thoughts:

anandtech.com said:
With the exception of database software and vectorizable HPC code, AMD's EPYC 7601 ($4200) offers slightly less or slightly better performance than Intel's Xeon 8176 ($8000+). However the real competitor is probably the Xeon 8160, which has 4 (-14%) fewer cores and slightly lower turbo clocks (-100 or -200 MHz). We expect that this CPU will likely offer 15% lower performance, and yet it still costs about $500 more ($4700) than the best EPYC. Of course, everything will depend on the final server system price, but it looks like AMD's new EPYC will put some serious performance-per-dollar pressure on the Intel line.

Who has the fastest CPU currently depends heavily on workload, with neither vendor netting significantly more wins across a broad range of tasks.
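
Putting the quoted prices into perspective, here's a quick back-of-the-envelope performance-per-dollar calculation using only the numbers from that closing paragraph (EPYC 7601 normalised to a score of 100; the Xeon 8160 figure uses AnandTech's "likely ~15% lower" estimate, so treat these as rough):

```python
# Rough perf-per-dollar from the AnandTech closing-thoughts numbers.
# Performance is normalised to EPYC 7601 = 100; the Xeon 8160 figure
# uses AnandTech's "likely ~15% lower" estimate, so these are rough.

cpus = {
    "EPYC 7601": {"price": 4200, "perf": 100},
    "Xeon 8176": {"price": 8000, "perf": 100},  # "slightly less or slightly better"
    "Xeon 8160": {"price": 4700, "perf": 85},   # estimated ~15% lower
}

for name, c in cpus.items():
    print(f"{name}: {c['perf'] / c['price'] * 1000:.1f} perf per $1000")
# EPYC 7601: 23.8, Xeon 8176: 12.5, Xeon 8160: 18.1
```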
 
Some interesting opinions in this thread. The bottom line really comes down to who you are as a consumer, and how accommodating you are when it comes to platform quirks. That's what separates the two at this point in time.
 
Yeah, Intel trades blows with AMD; there's not much in it. Except it took Intel nine tries to get there, while AMD are still on their 2nd gen. That's insanely fast progress.
 
Personally I don't consider the 9900K a good chip.

If you're like Buildzoid (who made it clear in a recent video that he doesn't really care about TDP and thermal efficiency) and all you care about is raw performance, then yes, the 9900K is the fastest mainstream chip, although it's not really priced at mainstream levels in my opinion.

But the 9900K breaches its own TDP spec even in its stock turbo config, and it also runs really hot, needing high-end cooling to avoid thermal throttling even at stock. I think with the 9 series Intel lost the plot in its desperation to compete on core counts.
 
This isn't actually true. As I posted in the Xeon W-3175 thread https://forums.overclockers.co.uk/posts/32531355, people are mixing up TDP and power draw again; Intel fully documents the behaviour in the relevant thermal and power datasheet.

One of the problems is that for a long time AMD have quoted figures closer to what people think TDP is, which better reflects real-world power draw, while Intel's figure is the actual TDP.

Do agree Intel seems to have lost the plot with the 9 series.
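
To illustrate what that documented behaviour looks like: Intel specifies TDP at base clocks, and turbo is governed by a short-term power limit (PL2) that applies for a time window (tau) before the chip settles back to the long-term limit (PL1, normally set equal to TDP). A minimal sketch of the idea, using commonly cited reference values for the 9900K (boards frequently override PL2 and tau, and the real limiter averages power over tau rather than using a hard cutoff):

```python
# Sketch of Intel's documented PL1/PL2/tau turbo power limiting.
# PL1 equals the rated TDP; the PL2 and tau values are commonly cited
# reference figures, not guarantees - boards often override them.

PL1 = 95.0   # long-term power limit, watts (equals rated TDP)
PL2 = 119.0  # short-term turbo power limit, watts (reference value)
TAU = 28.0   # turbo time window, seconds (reference value)

def allowed_package_power(t_since_load_start: float) -> float:
    """Power the CPU may draw t seconds into a sustained all-core load."""
    # Within the turbo window the package may exceed TDP up to PL2;
    # once tau expires it must settle back to PL1 (= TDP).
    # Real silicon uses a moving average over tau; this is simplified.
    return PL2 if t_since_load_start < TAU else PL1

for t in (0, 10, 27, 28, 60):
    print(f"t={t:>3}s  limit={allowed_package_power(t):.0f} W")
```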
 
They have to compete with what they have, or at least be seen to compete (with respect to the investors), and getting an 8c/16t CPU to 5 GHz needs that much power.

Intel does now have very good manufacturing at 14nm. The problem, after so long on 14nm and 14nm+++, is that any 10nm process will initially be a regression with respect to CPU performance, be it power/frequency or defects per mm². It will take a few years to get 10nm to the same metrics as their current 14nm process, and that's after they get 10nm out the door.
 
Intel is shifting focus to 7nm, which isn't dependent on them getting their 10nm working/fixed.

I haven't checked, but I suspect their 10nm will end up being used for stuff like their chipsets, which are currently on 22nm, and a more limited selection of CPU lines.
 
Intel makes faster processors, period. Always have...

If you want the FASTEST processor, you buy an Intel i9-9900K today, and it's the fastest all-around, hands down. However, come July... it may not be.
Intel has been challenged many times during the history of x86 CPUs and got into its current position only through plenty of at best very questionable and often outright criminal means, which should have resulted in upper management being thrown in jail a couple of times.
http://jolt.law.harvard.edu/digest/intel-and-the-x86-architecture-a-legal-perspective
In fair and honest competition, Intel would never have reached the position it has held for the last ten years.

The 9900K's eight-core processing power being matched at 50 W lower power consumption tells you Intel isn't going to enjoy its single-thread performance advantage for long.
With that kind of power consumption advantage AMD has good room for tweaking boost clocks, even if TSMC's 7nm node doesn't mature at all between that engineering sample and mass production.
And with the chiplet design, and room and a position for another chiplet confirmed, it's pretty much guaranteed that AMD will keep pushing core counts, which they can do easily while staying inside the 9900K's true power consumption.

Sure, the 9900K is now the fastest desktop-platform CPU.
But at its ludicrous pricing it will be far worse at holding value than all those £300 Intels of the earlier decade, which were guaranteed to stay high-end for many years.
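
Rough intuition for why a 50 W gap leaves room for boost tweaking: dynamic CPU power scales roughly with frequency times voltage squared (P ≈ C·f·V²), and higher clocks usually demand extra voltage on top. A back-of-the-envelope sketch, with all numbers purely illustrative assumptions:

```python
# Back-of-the-envelope dynamic power scaling: P ~ f * V^2.
# All figures below are illustrative assumptions, not measured values.

base_power = 95.0   # W, starting package power
base_freq  = 4.0    # GHz
base_volt  = 1.20   # V

def scaled_power(freq_ghz: float, volts: float) -> float:
    """Estimate power after a frequency/voltage bump, assuming P ~ f*V^2."""
    return base_power * (freq_ghz / base_freq) * (volts / base_volt) ** 2

# A ~10% clock bump that also needs ~8% more voltage costs ~29% more power:
print(f"{scaled_power(4.4, 1.30):.0f} W")  # ~123 W
```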
 
With that kind of power consumption advantage AMD has good room for tweaking boost clocks, even if TSMC's 7nm node doesn't mature at all between that engineering sample and mass production.

Their second revision with EUV seems to be shaping up nicely; IIRC risk production is already running in limited numbers, with good power savings possible.
 
This isn't actually true. As I posted in the Xeon W-3175 thread https://forums.overclockers.co.uk/posts/32531355, people are mixing up TDP and power draw again; Intel fully documents the behaviour in the relevant thermal and power datasheet.

One of the problems is that for a long time AMD have quoted figures closer to what people think TDP is, which better reflects real-world power draw, while Intel's figure is the actual TDP.
While TDP's definition can be argued over, it should be clear that Intel is using it to deliberately mislead people.

That bloating of real power consumption has happened multiple times in the past.
It happened with Intel's NetBurst/Pentium 4, when AMD had the completely superior architecture (which also drove Intel to use those illegal means to maintain market position).
And then when Core 2 took the performance crown, AMD went nuts with the power hogging of its highest models.


Intel is shifting focus to 7nm, which isn't dependent on them getting their 10nm working/fixed.
Intel certainly has the money to have successive nodes in development simultaneously.
But you can't climb to the top of a tree without first going through the lower branches.
So before Intel can solve the new challenges 7nm will certainly bring, they have to solve the problems of 10nm.

Also, TSMC certainly has the resources for development, unlike GlobalFoundries.
And Samsung as a whole dwarfs Intel, if its top leaders decide to push fab development.
So it can't be excluded from the high-performance node "race".
 
While TDP's definition can be argued over, it should be clear that Intel is using it to deliberately mislead people.

TDP itself has never been directly about power drawn from the wall, but obviously, physics being physics, the two are connected and have a relationship.

As per my linked post, Intel specifically describes what their TDP measurement is, but people, including some notable YouTubers, ignore it and slap their own interpretation on it.

EDIT: I actually prefer AMD's approach (though it's better when they use terms like "typical board power") because it fits better with what people instinctively think, but Intel isn't being as misleading as people imply.
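
For reference, AMD has described its desktop TDP as a thermal target rather than an electrical one: TDP = (tCaseMax - tAmbient) / θca, where θca is the cooler-to-ambient thermal resistance the spec assumes. A quick sketch using the figures reported for the 2700X (treat the exact values as secondhand/illustrative):

```python
# AMD has described desktop TDP as a thermal design target:
# TDP (W) = (tCaseMax - tAmbient) / theta_ca, where theta_ca is the
# cooler-to-ambient thermal resistance (C/W) the spec assumes.
# Figures below are the ones reported for the Ryzen 2700X; treat them
# as secondhand/illustrative.

t_case_max = 61.8   # C, max allowed temperature at the heatspreader
t_ambient  = 42.0   # C, assumed worst-case intake air temperature
theta_ca   = 0.189  # C/W, cooler thermal resistance the spec targets

tdp = (t_case_max - t_ambient) / theta_ca
print(f"TDP = {tdp:.1f} W")  # ~104.8 W -> marketed as 105 W
```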

Intel certainly has the money to have successive nodes in development simultaneously.
But you can't climb to the top of a tree without first going through the lower branches.
So before Intel can solve the new challenges 7nm will certainly bring, they have to solve the problems of 10nm.


Also, TSMC certainly has the resources for development, unlike GlobalFoundries.
And Samsung as a whole dwarfs Intel, if its top leaders decide to push fab development.
So it can't be excluded from the high-performance node "race".

It really doesn't work like that in this case. Intel was gearing up their 7nm in parallel with their 10nm (obviously there are some lessons that can be learned on both sides); in fact I think they "broke ground" on the 7nm side first. They absolutely can, and probably will, have a fully working 7nm without fixing the problems at 10nm, as a lot of it revolves around patterning/lithography issues (as much as mismanagement) which they already knew would need to be approached differently at 7nm.
 
I currently have a 2700X; I 'upgraded' from an i7 7700K. Truth be told, it's a downgrade in most everyday scenarios. The system feels less snappy. Lightroom and Photoshop are faster at batch processing with the AMD chip, but it's still a go-make-a-cup-of-tea job with eight cores just as it was with four, and both are now slower in routine usage, such as opening a file or even simple tasks like cropping. I'm not saying the AMD chip is slow, but that 'snap-to' immediacy of the i7 just isn't there.

As for gaming, well, I'm limited to 60 Hz, so the 2700X hasn't shown me up there.
 
"Something I haven't seen you mention that is very important is that many apps are compiled for Intel-specific optimizations and perform poorly on AMD for that reason. Have you thought of a different, more open-source benchmark that could be compiled for both sets of optimizations so we can get a true oranges--oranges comparison?"
https://www.pugetsystems.com/labs/a...erformance-AMD-Ryzen-2-vs-Intel-8th-Gen-1136/
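
One way to get that oranges-to-oranges comparison is to build the same open-source benchmark twice, once with generic x86-64 code generation and once with CPU-specific tuning, and time both on each chip. A minimal sketch, assuming gcc is on the PATH and a self-contained bench.c exists (both are assumptions for illustration):

```python
# Build the same benchmark with generic vs CPU-specific optimizations
# and time both, so neither vendor benefits from one-sided tuning.
# Assumes gcc is on the PATH and a self-contained bench.c exists.
import subprocess
import time

BUILDS = {
    "generic": ["gcc", "-O2", "-march=x86-64", "bench.c", "-o", "bench_generic"],
    "native":  ["gcc", "-O2", "-march=native", "bench.c", "-o", "bench_native"],
}

for name, cmd in BUILDS.items():
    subprocess.run(cmd, check=True)  # compile this variant
    start = time.perf_counter()
    subprocess.run([f"./bench_{name}"], check=True)  # run it
    print(f"{name}: {time.perf_counter() - start:.2f} s")
```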
 
"Something I haven't seen you mention that is very important is that many apps are compiled for Intel-specific optimizations and perform poorly on AMD for that reason. Have you thought of a different, more open-source benchmark that could be compiled for both sets of optimizations so we can get a true oranges--oranges comparison?"
https://www.pugetsystems.com/labs/a...erformance-AMD-Ryzen-2-vs-Intel-8th-Gen-1136/

People would rather use their industry-standard software...
 
Having had a few high-end chips in the last few years (4770K, 5820K, 6700K, 2700), I can say this: generally they all feel the same in terms of I/O. This snappiness people refer to is a misnomer and generally comes down to the subsystem and/or tweaking of said system.

I am, however, a backwards kind of fellow and refused to chuck Intel any more pennies after my 5820K (the 6700K was free). I upgraded to a 2560x1080 166 Hz panel, so you would think that, needing the extra single-threaded grunt of an Intel CPU for that res and refresh rate, I would have jumped on an 8700K or equivalent. But no... I felt Zen+ was the right time to jump back to AMD (the last good AMD chip I had being a Barton).

Paired with a 2080, I'm often sitting around 120 fps in 64-man BFV, 150 fps in Rainbow Six Siege, and 166 fps in DiRT Rally 2.0, all generally maxed out with a tweak of AA.

The only game where I have found it to be a massive bottleneck to the GPU is Far Cry (plus New Dawn is awfully optimised), and yeah, Ubi still ain't great at multicore support.

I love my Zen baby and feel I got great value for my money: 50 quid less than I paid for my 5820K new, with better IPC, two extra cores, four extra threads, and better power efficiency. It's not a massive upgrade, but compared to what Intel's been offering it was right up my alley.
 
I just leave my 2700 at stock for gaming. According to SVI2 it only uses about 45 W.

https://i.imgur.com/ghDT2My.png

I had a single-core Sempron that used 45 W.

Overclocked to 4.1 GHz, though, it uses as much as the X version: about 135 W.
 
"Something I haven't seen you mention that is very important is that many apps are compiled for Intel-specific optimizations and perform poorly on AMD for that reason. Have you thought of a different, more open-source benchmark that could be compiled for both sets of optimizations so we can get a true oranges--oranges comparison?"
https://www.pugetsystems.com/labs/a...erformance-AMD-Ryzen-2-vs-Intel-8th-Gen-1136/
Well, in AIDA64 benchmarks it 'blows' my old Intel CPU away. That doesn't help me edit a photograph, though. ;)

This is the untold story of AMD chips: yes, you can whack as many weaker cores together as you like to get a 'great' chip, but unless the software you use can take advantage of that setup, it's no better, and actually worse, than a four-core Intel chip.

Don't get me wrong, it's far from slow, and maybe if I were using a very specific set of software that can utilise 16 threads... but I'm not, and for my usage an Intel four-core > an AMD eight-core.
 
Well, you are wrong, because Windows needs as many cores as possible to run applications simultaneously. Also, the Zen cores are not weak; where did you get that idea? It's the old trick by Intel and Adobe to shortchange AMD users.
 