• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Ryzen "2" ?

Based on price, the Ryzen 1700 is the competition for the 8600K. The 8600K overclocks to an average of about 4.8 GHz and the 1700 to about 3.9 GHz. Then you have 6c/6t vs 8c/16t. In anything that needs very high single-core performance the 8600K is better suited; for anything that makes use of multiple cores/threads the 1700 is better suited. Even if you choose the much cheaper 1600, the gap is not 30%, and the opposite can be true; it depends on what you are doing. Already in games like BF1, 4c/4t is not enough to maintain good, steady frame rates.

The unknown is how the future AM4 socket CPUs perform and their relative pricing, and also what Intel releases and its relative price.
 
AMD can also play this game, though. They can release a 4.5 GHz 4-core Ryzen 2500X and call it a day :D

But their 6-core is slower than their 8-core... And I suspect it will always be, because the 6-core units are silicon that wasn't good enough to be 8-core. Even the overclocks are equal or better for 1800x vs 1600x. (And the 8-core mainstreams weren't good enough to be Threadripper!)

It doesn't seem like they're capped by thermals, but more by the raw voltage you have to stuff into Ryzen to get it beyond 4 GHz.

Sadly.

I'd love to have the choice between a 5.0 4-core, a 4.5 6-core and a 4.0 8-core, because that makes sense with regards to selecting the hardware for your workload. Unfortunately that choice right now is Intel or AMD, and you end up paying significantly more for the former.
 
With the new versions looking to clock higher and have better single-thread performance, it's hard to justify an Intel chip that is a bit faster in some single-threaded stuff but will likely be hammered when things get a bit more threaded.

Just can't see the argument any more; Intel need an 8-core non-HEDT chip with good speeds this year.
 
This will become truer as Intel focuses on pushing more cores. I believe someone stated they have been on record saying that MHz is reaching a plateau and that the real way forward is going to be more cores doing the work. That's great, because all of those compilers that were written to target Intel's chips will get reworked for more cores, which will have a side benefit for AMD CPUs.

AMD has basically forced the stagnant CPU market away from lower core counts to more cores. Intel are jumping in, and they'll probably lead again at some point, but in the interim AMD will benefit from the change of course.
 
Any concrete news on speed etc yet?
Clock speeds are pretty much known (multiple leaks pointing the same direction): 4.25 GHz max boost on R5 2600X and 4.35 GHz for the R7 2700X. Actual performance is still up in the air, with some leaks showing a large improvement and others showing almost none. We'll need more concrete evidence to work that one out.
 
Well, no different from those buying a Core i7 8700K, or a desktop Core i7 of any kind, for gaming, and then saying MT performance is not important, just ST performance. Not sure why they didn't just buy a Core i3 7350K and overclock that to 5 GHz!! :p

Edit!!

If anything, most enthusiasts who buy Core i7s for gaming do it because they themselves believe more and more games will push above 4 threads, and that a Core i3 7350K or Core i3 8350K will run out of grunt quicker.

Well, you can have 8, 16, 32 threads or whatever, but logical CPUs are not automatic gains and can even be a hindrance because of context-switching overheads. Where they gain is specifically in scenarios where a core is tied up in a WAIT state on things like RAM and I/O, and a logical core can fill that time with new work; hence HTT CPUs running hotter, since non-HTT CPUs cannot be fully utilised in those types of workloads. If you have e.g. 8 threads but the CPU core itself is not fully utilised, then don't expect more than minimal gains from logical cores.
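The wait-state point can be sketched in a few lines. This is an analogy only (Python's GIL stands in for a saturated core, it is not SMT itself), but it shows the same shape: extra threads pay off when work is blocked waiting, and do almost nothing when the core is already busy.

```python
# Analogy only (Python's GIL, not SMT): extra threads pay off when work is
# blocked waiting, and do almost nothing when the core is already saturated.
import time
from concurrent.futures import ThreadPoolExecutor

def io_bound(_):
    time.sleep(0.2)  # stands in for a core stalled on RAM / disk / network

def cpu_bound(_):
    return sum(range(2_000_000))  # core fully busy, nothing to overlap

def timed(fn, workers, tasks=8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fn, range(tasks)))
    return time.perf_counter() - start

print(f"waiting work, 1 thread : {timed(io_bound, 1):.2f}s")  # ~1.6s
print(f"waiting work, 8 threads: {timed(io_bound, 8):.2f}s")  # ~0.2s
print(f"busy work,    1 thread : {timed(cpu_bound, 1):.2f}s")
print(f"busy work,    8 threads: {timed(cpu_bound, 8):.2f}s")  # barely better
```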

Interesting CPU performance figures here; note the non-HTT chips outperforming the HTT ones in this particular game.

http://gamegpu.com/action-/-fps-/-tps/far-cry-5-test-gpu-cpu

Now, my 8600K has 2 extra "real" cores over my older 4670K, which to me has genuine value over something like a 4-core with HTT.

So my comments are about logical cores rather than physical cores. I would prefer a 4-core 3 GHz chip over a single-core 5 GHz chip; going from 1 to 2 cores is definitely a meaningful gain in everyday activities, as a single-core chip can have Windows unresponsive from one rogue app. 2 to 4 cores is also nice, but the clock speed of the quad-core would need to be fairly close to the dual-core's to make it worthwhile; above 4 cores, I feel, is diminishing returns in common software. However, I was starting to notice CPU bottlenecks in thread-heavy games like GTA5 and FF15. So the 2 extra "real" cores got my interest, but also the high per-core performance of Coffee Lake over Haswell. Another factor in my recent upgrade was the extra RAM/cache performance. The important thing about my upgrade was that "everything" got faster; if I had moved to a Ryzen I don't think that would have been the case in single-core loads.

But yeah, my point applies to 8700Ks as well, hence me getting an 8600K rather than an 8700K; but at least 8700Ks can hit the 8600K's per-core performance.

Also, games tend to be developed for "mainstream" CPUs, so the relevant thing there will be what the likes of i3 chips can do, and maybe low-end i5 chips. Low-end Ryzen chips will impact this as well. In 10 years core count may be king and per-core performance not that relevant, but I feel I buy for "today", not 5-10 years' time; by then I will have another, newer processor.

I've seen similar on my pfSense unit as well. My old pfSense unit was a quad-core Celeron; my new one is a dual-core i5. The i5 has almost double the per-core performance but half the cores, and the performance for accessing the GUI, boot times, running scripts etc. is lightning compared to the Celeron it replaced.
 
But their 6-core is slower than their 8-core... And I suspect it will always be, because the 6-core units are silicon that wasn't good enough to be 8-core. Even the overclocks are equal or better for 1800x vs 1600x. (And the 8-core mainstreams weren't good enough to be Threadripper!)

It doesn't seem like they're capped by thermals, but more by the raw voltage you have to stuff into Ryzen to get it beyond 4 GHz.

Sadly.

I'd love to have the choice between a 5.0 4-core, a 4.5 6-core and a 4.0 8-core, because that makes sense with regards to selecting the hardware for your workload. Unfortunately that choice right now is Intel or AMD, and you end up paying significantly more for the former.

To me, it doesn't make sense, because those frequency differences are too large and will result in virtually equal performance in the end.
Intel plays exactly this game. Theoretically, they could release a 10 GHz dual-core chip that would be as fast as a modern 8-core/16-thread chip.
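As a back-of-the-envelope sketch (hypothetical numbers, assuming equal IPC and perfect scaling, which real chips never have), the trade only works one way per workload: the high-clock chip wins when a program uses few threads, and loses once the work spreads out.

```python
# Toy model: throughput ~= usable cores * clock. Equal IPC and perfect
# scaling are assumed -- deliberately unrealistic, just to show the shape.
def perf(cores, ghz, threads_in_workload):
    return min(cores, threads_in_workload) * ghz

# Hypothetical 10 GHz dual-core vs a ~4 GHz 8-core:
print(perf(2, 10.0, 1), perf(8, 4.0, 1))    # 10.0 4.0  -> dual-core wins single-threaded
print(perf(2, 10.0, 16), perf(8, 4.0, 16))  # 20.0 32.0 -> 8-core wins well-threaded
```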
 
Well, you can have 8, 16, 32 threads or whatever, but logical CPUs are not automatic gains and can even be a hindrance because of context-switching overheads. Where they gain is specifically in scenarios where a core is tied up in a WAIT state on things like RAM and I/O, and a logical core can fill that time with new work; hence HTT CPUs running hotter, since non-HTT CPUs cannot be fully utilised in those types of workloads. If you have e.g. 8 threads but the CPU core itself is not fully utilised, then don't expect more than minimal gains from logical cores.

Interesting CPU performance figures here; note the non-HTT chips outperforming the HTT ones in this particular game.

http://gamegpu.com/action-/-fps-/-tps/far-cry-5-test-gpu-cpu

Now, my 8600K has 2 extra "real" cores over my older 4670K, which to me has genuine value over something like a 4-core with HTT.

So my comments are about logical cores rather than physical cores. I would prefer a 4-core 3 GHz chip over a single-core 5 GHz chip; going from 1 to 2 cores is definitely a meaningful gain in everyday activities, as a single-core chip can have Windows unresponsive from one rogue app. 2 to 4 cores is also nice, but the clock speed of the quad-core would need to be fairly close to the dual-core's to make it worthwhile; above 4 cores, I feel, is diminishing returns in common software. However, I was starting to notice CPU bottlenecks in thread-heavy games like GTA5 and FF15. So the 2 extra "real" cores got my interest, but also the high per-core performance of Coffee Lake over Haswell. Another factor in my recent upgrade was the extra RAM/cache performance. The important thing about my upgrade was that "everything" got faster; if I had moved to a Ryzen I don't think that would have been the case in single-core loads.

But yeah, my point applies to 8700Ks as well, hence me getting an 8600K rather than an 8700K; but at least 8700Ks can hit the 8600K's per-core performance.

Also, games tend to be developed for "mainstream" CPUs, so the relevant thing there will be what the likes of i3 chips can do, and maybe low-end i5 chips. Low-end Ryzen chips will impact this as well. In 10 years core count may be king and per-core performance not that relevant, but I feel I buy for "today", not 5-10 years' time; by then I will have another, newer processor.

I've seen similar on my pfSense unit as well. My old pfSense unit was a quad-core Celeron; my new one is a dual-core i5. The i5 has almost double the per-core performance but half the cores, and the performance for accessing the GUI, boot times, running scripts etc. is lightning compared to the Celeron it replaced.

It's more an observation in general (not aimed at you necessarily) when it comes to these kinds of arguments: you have people arguing games won't thread well, and then they have a 4T+ CPU in their main gaming rigs as a hedge on games pushing more threads.

If core performance was the only important metric, people would not be buying a Core i5 8600 / Core i7 8700K over the 4C/4T Core i3 8350K. SKL/KL/CFL all have the same cores, all can overclock to 4.5-5.0 GHz anyway, support 3 GHz RAM, etc.

Plus, as a gamer who games at QHD with a GTX1080 on a Xeon E3 1230 V2 / Core i7 3770, only one, maybe two, games I play are actually CPU-limited to any real-world effect, and both are based on very old engines.

Anything developed in the last few years on newer engines I have found to be more GPU-limited at QHD, and I have had no issues running it. It's not surprising when more and more titles have been developed with consoles in mind, and their weak 8-core mobile CPUs. Even if some don't thread well, they are not massively CPU-limited in reality, and that includes even games like D3, etc., which run perfectly fine on an IB CPU. In fact, a lot of the time in online games the servers can be the bigger issue, as during larger battles they can struggle to keep up.

This is why so many people are still using older-generation CPUs now, even with reasonably powerful graphics cards.

I am not saying single-core performance isn't important, but it's really dependent on the mix of games you play, and in most cases Ryzen single-core IPC isn't that low if you look into it.

For instance, in non-gaming scenarios it's on average between Haswell-X and Broadwell-X overall, and that includes both lightly-threaded and multi-threaded situations.

However, this is not reflected as well in games.

If anything, some of the more outlier situations are down to a lack of optimisation for the uarch, which could be seen with the SKL-X CPUs to a degree (I play one of the games where the dev has not bothered patching for the uarch, it appears). Despite the same cores as SKL/KL/CFL, due to the change in cache arrangements etc., performance at stock even regressed at times:

https://static.techspot.com/articles-info/1450/bench/Average.png

This is why it's as important for AMD to try and get game devs to optimise for the uarch, since even if they get a 5 GHz Ryzen CPU out next year, it would mean nothing if at the software level the game is not running optimally on it.
 
That Techspot graph seems to test a very small number of CPUs; there's not a single non-HTT CPU in the list.

In terms of what CPUs people buy, a lot of misinformation flies around, and people buy CPUs on what's fashionable, etc. E.g. the i7-series chips sell very well; lots of people buy one even though they may never run HTT-friendly loads.

The performance of FX chips highlights this: dual-core i3s beating quad-core FX chips with ease. That's the difference highlighted by real cores, never mind "fake" logical cores.

Then you've got everyday computing tasks: most Win32 stuff is single-threaded, and browsers are single-threaded per tab.

Developers have come out repeatedly stating that multi-threaded development is difficult, and there is a reason there is no universal heavy multi-threaded adoption. We will move there slowly and progressively, but I would say that unless you specifically have an important need for maximum threaded performance, such as encoding, then something like a 30% faster i5/i7 is better than a CPU with double/triple the cores/threads.
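That trade-off has a formal shape: Amdahl's law says the serial fraction of a program caps the multi-core speedup, while a faster core speeds up everything unconditionally. A minimal sketch (the 70% parallel fraction is an illustrative assumption, not a measured figure):

```python
# Amdahl's law: with parallel fraction p, n cores give at most
#   S(n) = 1 / ((1 - p) + p / n)
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# An app that is 70% parallel gains under 2.6x from 8 cores, and can
# never exceed ~3.3x no matter how many cores you add.
print(round(amdahl_speedup(0.7, 8), 2))      # 2.58
print(round(amdahl_speedup(0.7, 10000), 2))  # 3.33
```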

It doesn't help when we see someone like Linus plug 56-core chips into a motherboard and run Cinebench to show the power; the problem is Cinebench has almost zero relevance to normal PC performance, so those types of videos hugely mislead people.

For AMD, if they think the problem is just getting developers to optimise for them, then that's wrong; they need to catch up on clock speed (and if possible also on IPC), otherwise they will always be behind. Or they could surpass Intel on IPC and use that to mitigate the clock-speed deficit; either is fine.
 
TBH, on tech forums people make it sound like unless you have the latest 4.5-5.0 GHz CPU you can't play games. I even know people running old FX CPUs who are still running newish games OKish, let alone all the people with Haswell CPUs, etc., which are significantly faster.

Then you have consoles, which are based on Jaguar cores with around half the single-core performance of even an FX CPU. So that is the point: even an oldish PC, if upgraded, can still give a reasonable level of performance.

In fact the majority of desktop Core i7 owners I know bought them for work related purposes,not for gaming.

Even my Xeon E3 1230 V2 purchase was partly down to non-gaming reasons, and also the fact that at under £180 it was significantly cheaper than a Core i7 non-K for some reason.

In real life, hardly any of the gamers I know really overclock CPUs (almost all are not hardware enthusiasts), and as a GTX1080 owner I have the fastest card amongst all of them. Almost all are stuck at 1080p. The odd few have a GTX1070 (more down to the VR headset one or two bought, AFAIK), but most are lucky to be on anything better than GTX970/R9 390/RX480/GTX1060 level performance and tend to be mostly GPU-limited.

Any of my mates who have Haswell-based systems, for example, will probably benefit more from a GPU upgrade than a CPU upgrade when it comes to gaming.

Even the two games I have performance issues with are because, in one, I have pushed the engine quite a bit by building huge settlements (and modding it too), and the other is partly down to the servers probably crapping out during 100-300 person battles.

In the real world, people don't really care if they are getting over 60 FPS in most games, as they are still stuck mostly at 1080p with 60 Hz monitors; and for general stuff, even a tablet running a much slower ARM-based core is sufficient for office work and web browsing.

The market agrees, since in general PC sales to the average person are declining as people keep their laptops and desktops longer and longer, and as ARM-based phones and tablets have also been affecting sales. I have certainly been seeing more and more people not replacing an old laptop or desktop with a new one, and just using their iPad or slower Android equivalent, etc., as their main computing device. Most people are not content creators; they are consumers of content.

Many businesses are still stuck on oldish hardware and only replace it when the support contract ends. I also have an ancient Llano-based CPU, which I won in a competition, in a secondary system, and TBH it's perfectly fine for YT, web browsing and office work, which is what it is mainly used for.

The major limitations were the HDD and the RAM quantity; upgrading to 8GB of RAM and an SSD was noticeable. If codec support on the IGP becomes an issue in the future, I can just chuck in a GT1030 to sort that out.

Remember, we are hardware enthusiasts on forums, so we are not really representative of the computer market as a whole; just like people on a Ferrari forum saying a 2010 Maserati is "slow" and "handles worse" than their 2018 Ferrari are not really representative of car owners in general, who are lucky to have anything near the performance levels of a 2010 Maserati.
 
Yes, if you look at it that way, both Ryzen and modern Intel performance chips are pointless :)

overclocking itself is pointless. :)

The hardware is usable of course, just slower. That Pentium G4400 I bought for my mining rig also cannot play some YouTube videos without lowering the resolution, though, especially 60 FPS YouTube videos; same with Twitch. So I wouldn't say they're fine on a generic level.

Also, I upgraded my laptop purely for web browsing performance reasons. My old laptop was a Core 2 Duo, and I sometimes have guides open on the laptop when gaming; some guides which were graphically heavy (and perhaps badly coded for performance) had really bad scrolling performance, which was improved by upgrading to a laptop with a better-specced CPU.
 
Even modern CPUs struggle to decode video because more complex codecs are being used to save bandwidth and they are really difficult to decode in real time, especially 50+ FPS videos. It's all about fixed-function decoding (and even encoding) these days, which basically means you can use any old CPU for such tasks as long as your GPU is modern. It's also why something like a modern Pentium/i3/R3 might be better than an older Core i5 or i7 for general users.
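A rough sense of the scale involved, looking at raw pixel rate only (codec complexity like VP9/HEVC comes on top of this, so the real gap is even larger):

```python
# Raw pixel throughput a decoder must sustain; codec complexity comes on top.
def pixels_per_second(width, height, fps):
    return width * height * fps

p1080_30 = pixels_per_second(1920, 1080, 30)  # 62,208,000 px/s
p4k_60 = pixels_per_second(3840, 2160, 60)    # 497,664,000 px/s
print(p4k_60 / p1080_30)  # 8.0 -- 4K60 is 8x the raw pixel rate of 1080p30
```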
 
Clock speeds are pretty much known (multiple leaks pointing the same direction): 4.25 GHz max boost on R5 2600X and 4.35 GHz for the R7 2700X. Actual performance is still up in the air, with some leaks showing a large improvement and others showing almost none. We'll need more concrete evidence to work that one out.

Thank you.
 
TBH, on tech forums people make it sound like unless you have the latest 4.5-5.0 GHz CPU you can't play games. I even know people running old FX CPUs who are still running newish games fine, let alone all the people with Haswell CPUs, etc.

Lol, it's true, but it's also very situational, and sometimes you may simply hit a performance wall :)

CPU limits hit me hardest in Kerbal Space Program, where it translates directly into the number of parts I can have in a scene before the fps starts to judder. A 20% jump in CPU speed pretty much means 20% more parts, which means <an amount> more fun with the game. Yes, I've invested far more in the hardware than KSP itself costs :rolleyes: As for the people who play that game on ancient potato laptops... I guess they either have less ambitious builds, or just shrug it off when they're getting 5 fps.

Basically everything else I do is fine, though. Cities: Skylines was choking on my old 2500K, but any decent 4c/8t processor does much better with that particular beast, and any hex-core, with or without hyperthreading, will pretty much tame it.
 
Yes, if you look at it that way, both Ryzen and modern Intel performance chips are pointless :)

overclocking itself is pointless. :)

The hardware is usable of course, just slower. That Pentium G4400 I bought for my mining rig also cannot play some YouTube videos without lowering the resolution, though, especially 60 FPS YouTube videos; same with Twitch. So I wouldn't say they're fine on a generic level.

Also, I upgraded my laptop purely for web browsing performance reasons. My old laptop was a Core 2 Duo, and I sometimes have guides open on the laptop when gaming; some guides which were graphically heavy (and perhaps badly coded for performance) had really bad scrolling performance, which was improved by upgrading to a laptop with a better-specced CPU.

Well, it's why when people ask for advice in upgrade threads, it's useful to check what games they are playing and see what the real bottlenecks are.

I don't generally overclock, since I prefer mini-ITX systems, and with modern CPUs having higher relative boost clocks, even though overclocking does up performance, it's not like in the past when I had a 1.8 GHz E4300 overclocked to 3.1 GHz, which was a huge increase!!

Also, I've not had that issue with the Llano system with general stuff, but it had a very solid IGP for its era, which helped; and I would suspect that if decode and UI acceleration become an issue, a GT1030 would probably help.

I think part of the movement to using fixed-function hardware and GPUs to offload operations from the CPU is down to tablets and phones, which are trying to do PC things but mostly don't have the same level of CPU power (although the Apple ARM-based cores look impressive).


Lol, it's true, but it's also very situational, and sometimes you may simply hit a performance wall :)

CPU limits hit me hardest in Kerbal Space Program, where it translates directly into the number of parts I can have in a scene before the fps starts to judder. A 20% jump in CPU speed pretty much means 20% more parts, which means <an amount> more fun with the game. Yes, I've invested far more in the hardware than KSP itself costs :rolleyes: As for the people who play that game on ancient potato laptops... I guess they either have less ambitious builds, or just shrug it off when they're getting 5 fps.

Basically everything else I do is fine, though. Cities: Skylines was choking on my old 2500K, but any decent 4c/8t processor does much better with that particular beast, and any hex-core, with or without hyperthreading, will pretty much tame it.

Good old KSP - more struts!! :p

Well, I have the same issue with FO4, since I started becoming more of a town builder than a Lone Wanderer!! :D

I have multiple settlements (one of which had a 23-storey tower) and hundreds of settlers, and I even increased NPC spawns a bit, etc. However, you could notice the performance drops. Interestingly enough, if you are more of a Lone Wanderer, performance is generally OK.
 
Even modern CPUs struggle to decode video because more complex codecs are being used to save bandwidth and they are really difficult to decode in real time, especially 50+ FPS videos. It's all about fixed-function decoding (and even encoding) these days, which basically means you can use any old CPU for such tasks as long as your GPU is modern. It's also why something like a modern Pentium/i3/R3 might be better than an older Core i5 or i7 for general users.

The GPU was a GTX 1070 ;)
 
Clock speeds are pretty much known (multiple leaks pointing the same direction): 4.25 GHz max boost on R5 2600X and 4.35 GHz for the R7 2700X. Actual performance is still up in the air, with some leaks showing a large improvement and others showing almost none. We'll need more concrete evidence to work that one out.

The only thing we don't know is what XFR will do on a good board with good cooling: is it going to be a paltry 50 MHz like last time, or will it go nuts and pass 4.5 GHz?
We also don't know if the 2800X will appear; it probably will, in response to Intel doing something. 4.5 GHz or faster Zen+ chips will fly if they can do it.
 

We also don't know whether there will be any overclocking headroom over listed frequencies. Most Ryzen 1xxx chips seem to max out around 3.9-4.0 GHz, which is slightly under the XFR limit...
 