Intel to launch 6 core Coffee Lake-S CPUs & Z370 chipset 5 October 2017

They almost certainly use the same process; it's an improved process, nothing more or less. Kaby Lake wasn't a massive leap in process tech, it's a completely standard tweak: every process ever made has improved little by little over time. Usually this is just improved design rules allowing narrower margins for error as the more mature process moves towards the theoretical minimums for feature size. As things get smaller you can use the same die size for slightly more performance or lower power, or make the chip slightly smaller and therefore cheaper. It doesn't make the process more expensive to use: it's the same equipment, the same silicon and the same time to make (in most cases; where it isn't, the difference will be marginal). In other words, for Intel not to use the tweaked design rules would be choosing a more expensive chip for no reason at all.

There was only a very small and very standard change to the process for Kaby Lake in the first place. Back in previous gens, when the gap between nodes was shorter, Intel just released the same chips on a new stepping but with the same names; the Q6600's G0 stepping is one that springs to mind. AMD often brought out a new stepping everyone wanted as well; it's how process nodes work. There is zero benefit to not using the tweaked process on every new chip you produce from that point forward.
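To put rough numbers on the "smaller die = cheaper" point, here's an illustrative dies-per-wafer estimate (the die areas and the shrink below are made-up example figures, not Intel's actual numbers):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Standard approximation for whole dies on a round wafer,
    discounting the partial dies lost around the edge."""
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Purely illustrative: a mature-process tweak letting the same design
# shrink by roughly 10% in area.
for area in (125.0, 112.5):
    print(f"{area:6.1f} mm^2 die -> ~{dies_per_wafer(area):.0f} dies per 300 mm wafer")

# Same wafers, same equipment, same time in the fab, but noticeably more
# sellable dies per wafer, so cost per chip drops essentially for free.
```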

But if I'm not mistaken those tweaks don't get transferred all that quickly to the Xeon dies.

Haswell-E and Broadwell-E for example: Haswell-E used the same process as the consumer version (from the Xeon dies), and by the time Broadwell-E dropped it was, what, 6 months after Skylake launched?

And yet they used the less efficient Broadwell die, which had more trouble sustaining high clocks (compare the 6950X vs the 7900X out-of-the-box clocks; the 7900X out of the box is higher than a max OC'd 6950X).

If I'm not mistaken this is because it takes them a while to change their LCC/HCC/XCC dies over to the new node.

Although yes, it's not a 'new' node as such, it is a different process with architectural tweaks. I'm sure we could agree the Skylake architecture would be better than the Broadwell one, as it was more mature and refined.

And yes, Kaby Lake and Skylake are essentially identical; Kaby Lake just has some tweaks for power efficiency to push clocks higher. I'm not sure exactly what tweaks Coffee Lake brings, but their claim of a 30% performance improvement is interesting (if true).



Overall the main temp issue was the glue used, so hopefully they've refined that process. I do think their long term plan was to phase out soldering on the HEDT platform; obviously they phased it out on the mainstream line with Ivy Bridge, so perhaps they wanted to 'test' it on the consumer line before doing the same at the high end?

Like, Haswell was poor with TIM, then Devil's Canyon made a big improvement on it, Skylake did pretty well with TIM also, and Kaby Lake has shown their adhesive is overused and creating gaps.

Perhaps these last few years of TIM 'trial and error' were them, for want of a better word, testing how to get the process down correctly before moving it to the HEDT platform.

I do wonder if it's to do with their Xeon line; perhaps they've had issues with some of their server chips being soldered, either in the manufacturing phase or after extended use (say 3-4 years)?

this is all my theory of course
 
Posted similar in the X299 thread, but after seeing the Linus video I'm really rather confused as to wtf Intel is playing at with X299, and it worries me as to what they will now do with Coffee Lake to make X299 seem a better option than it is. If Coffee Lake's top i5 is basically an old i7, and the 6c/12t part is better than the Kaby Lake X299 chips, then what will the i7s do, 8c/16t, which starts taking even more sales away from X299? I get a feeling these two chips will be clocked low to keep X299 the performance-leading platform for Intel, and IF they do a K version it will be priced rather higher than in previous generations.


Don't forget that X299 CPUs have a higher TDP and no iGPU (at least not an enabled one, in the case of the dumb-as-hell Kaby Lake-X). I think Intel will now make a bigger attempt to market how important the iGPU is, yet you're already paying absolutely through the teeth for the level of GPU performance you get from an Intel iGPU. When you realise that on the 7700K die the GPU is about equal in size to the CPU block, you're effectively paying £150 for a GPU that gives you performance equivalent to a £50 low-end last-gen discrete card. Still, the big differentiator between a 6 core Z370 chip and a 6 core X299 chip is the iGPU, and the lower TDP as well. So I expect Intel will try to talk up how important that GPU is and how that is what separates the platforms in their eyes, which in fairness it does.

But that is where another problem shows up: if you're buying because you want the iGPU, the iGPU can't keep up with the frame rate a quad core can provide, let alone a 6 core. A plain old 6 core at mainstream pricing, with cheaper mainstream motherboards, is great for a gaming system; a 6 core with an iGPU doesn't make an awful lot of sense, a quad core with an iGPU they both already have, and neither will come close to matching the GPU performance of Raven Ridge.

From the end of the year, the only reason I'd want a Z370 based system is alongside a cheap i3 or Pentium for an entry level build. If you want a genuinely good performing APU, where the GPU performance matters: Raven Ridge. If you want a really good value gaming system: a 6 or 8 core Ryzen without the iGPU. And if you want crazy-as-hell HEDT performance for gaming plus streaming/encoding/rendering/some kind of heavy workload beyond just gaming, then Threadripper is going to be awesome and beat Intel. Until the 18 core actually launches, apparently not before 2018 at the earliest, Threadripper won't only be better value but maybe the flat-out fastest HEDT chip.

The one thing Intel could really do to get back at AMD and provide something genuinely meaningful as a better mainstream option is if the 6 core Coffee Lake came without an iGPU and, because it would be a smaller die without the iGPU, cheaper as well. That would offer a really good gaming chip on the mainstream platform. I've seen nothing to indicate Intel are doing that, but if they were it would make for a genuinely interesting chip, and it could actually still be quite a bit smaller than a current 7700K die, probably ~20% smaller. That could be a really good, better value chip in the space where the 6-8 core Ryzens are currently such a fantastic option over Intel quads with a (for most people) pointless iGPU.
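Back-of-the-envelope for that ~20% figure, assuming (per the post above) that a big chunk of the 7700K die, guessed here at ~45%, is GPU/media; the total die area and the split are rough assumed numbers, not measured ones:

```python
# Illustrative arithmetic only - the exact Kaby Lake die breakdown is assumed here.
die_7700k_mm2 = 126.0        # assumed total 7700K die area (4 cores + GT2 iGPU)
igpu_share    = 0.45         # assume a bit under half the die is GPU/media blocks

# Lump everything that isn't GPU (cores, cache, uncore) into a per-core figure.
per_core_mm2 = die_7700k_mm2 * (1 - igpu_share) / 4

six_core_no_igpu = per_core_mm2 * 6   # hypothetical 6-core die with the iGPU removed
print(f"hypothetical 6c no-iGPU die: ~{six_core_no_igpu:.0f} mm^2, "
      f"~{1 - six_core_no_igpu / die_7700k_mm2:.0%} smaller than a 7700K")
```

On those assumptions a 6 core without graphics lands at roughly 18% less area than a 7700K, which is where the ~20% guess comes from.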
 
To be honest I just think the new 8700K 6c/12t is going to have a few optimisations and clock to 5GHz on all cores without too much of a fuss.

Then they can easily market it as the fastest gaming CPU in the world, and being honest they would be right: Ryzen hits a wall at 4GHz, and beyond 6 cores games don't really show any improvement, so that 1GHz+ clock lead plus the IPC lead could easily give them that claim.

One thing about TDP: AMD's numbers are worthless; they calculate TDP under a light/medium workload whereas Intel do it under extreme workloads.

https://www.pcper.com/reviews/Processors/Overclocking-AMD-Ryzen-7-1700-Real-Winner

tl;dr: at stock an 1800X draws 150W (not the claimed 95W), which is as much power as Intel's 6900K and 10 core 6950X, and a 1700 at 4GHz draws over 210W.
 
And yes, Kaby Lake and Skylake are essentially identical; Kaby Lake just has some tweaks for power efficiency to push clocks higher. I'm not sure exactly what tweaks Coffee Lake brings, but their claim of a 30% performance improvement is interesting (if true).

It's already been stated by Intel themselves that the 30% figure came from a 15W U-series comparison: a Kaby Lake CPU with 2c/4t and a 3.5GHz max boost against a new 4c/8t part with a 4.0GHz boost. So there is no real per-core performance improvement, but maybe some gain in performance per watt.
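Quick sanity check on why more cores alone covers a 30% claim (purely illustrative scaling, not Intel's actual benchmark or parts):

```python
# Illustrative: a well-threaded benchmark scaling with cores x clock, zero IPC change.
kaby_u   = {"cores": 2, "boost_ghz": 3.5}   # the 15 W 2c/4t part mentioned above
coffee_u = {"cores": 4, "boost_ghz": 4.0}   # the new 15 W 4c/8t part (best case -
                                            # real all-core clocks will be lower)

gain = (coffee_u["cores"] / kaby_u["cores"]) * (coffee_u["boost_ghz"] / kaby_u["boost_ghz"])
print(f"theoretical multithread uplift: ~{gain - 1:.0%}")

# ~129% on paper; even a modest slice of that inside the same 15 W budget gives
# a ">30% faster" marketing slide with no per-core improvement at all.
```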
 
https://www.pcper.com/reviews/Processors/Overclocking-AMD-Ryzen-7-1700-Real-Winner

Those are full system loads at the wall (with 10% efficiency loss by PSU).

The 1700 @ 4GHz is power hungry though, as you have to take the voltage so high (1.482V in their test). Then again, what is the power consumption of a 6900K @ 4GHz on all cores?


The best comparison is the 6900K vs the 1800X: under full load they both clock to 3.7GHz on all cores in stock form, they're both 8c/16t CPUs, and that graph shows they're basically identical in power draw (within 5W of each other).

FWIW, the stock 6900K TDP is 140W so it's drawing 20W over that; the stock 1800X TDP is 95W and it's drawing 60W over that.

Basically, take the announced TDPs with a pinch of salt; both Intel and AMD produce near enough identical power draw at the same clock speeds in stock configs with the same core counts.

Not sure if Skylake has lower power per clock/core overall though, due to the architectural changes from Broadwell.
 

Very odd that they are happy to bang on about the overclocked power consumption of the Ryzen 1700 but didn't even bother to overclock the competing 6900K at all.

As usual from PCPer it's a lot of noise over completely meaningless non-substance. Why aren't they telling us what the power consumption of the Intel CPUs is when overclocked? What are they afraid of, upsetting their sponsors?
 

It isn't consuming 60W over the TDP. The entire system at the wall is consuming 60W over the TDP of the chip. In reality it probably is using about 100W.

A poor review.

AMD TDP doesn't equal Intel TDP though, I agree.
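For anyone wanting the rough sums behind that "probably about 100W" estimate: wall power has to be scaled by PSU efficiency and then the rest of the system subtracted. The rest-of-rig draw and the exact wall figure below are assumptions for illustration, not measurements:

```python
# Crude wall-power -> CPU-package estimate; every figure here is an example.
wall_draw_w   = 155.0   # full-load wall reading for the stock 1800X rig (95 W TDP + "60 W over")
psu_eff       = 0.90    # the "10% efficiency loss by PSU" mentioned earlier
rest_of_rig_w = 35.0    # guess: board, RAM, SSD, idling GPU, fans

dc_power_w = wall_draw_w * psu_eff         # power the PSU actually delivers to the system
cpu_est_w  = dc_power_w - rest_of_rig_w    # what's left is roughly the CPU package
print(f"estimated CPU package power: ~{cpu_est_w:.0f} W against a 95 W TDP")
```

On those guesses the package lands around 105W, in the same ballpark as the "about 100W" above, not 60W over TDP.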
 
Ryzen is fully in the same class as Intel's chips for energy efficiency when you're not overvolting it and disabling all the power-saving mechanisms.

These are clearly measured at the wall:
https://www.techpowerup.com/reviews/AMD/Ryzen_7_1800X/14.html

Not entirely sure which components of the CPU package are included in these, but the idle readings definitely confirm it's not wall draw:
http://www.tomshardware.com/reviews/amd-ryzen-7-1700x-review,4987-8.html
Ryzen actually beats Kaby Lake in that gaming power consumption test.
And the 7700K also exceeds its TDP in Prime.



Well it's the next mainstream architecture, and as such will be positioned as an evolution of the current Kaby Lake, keeping the current 2c/4t low end
Which is the problem.
At least on desktop, four cores should be the minimum for everything except the absolute lowest model, to improve the chances of game developers moving forward with multithreading.
Optimising for multithreading isn't tempting for them if they can't even be sure of having two cores available for the game.
The four-core limit on high end mainstream CPUs has been the other problem.

Console CPUs have total garbage performance by modern standards, but thanks to multithreading, games run on them.
https://www.rockpapershotgun.com/2013/05/27/week-in-tech-hands-on-with-those-new-games-consoles/
 

[attached image: gjhgh.png]


Wow ^^^^ Intel 4 core vs AMD 8 core.. like lol!

Who would have thought that AMD could come up with a vastly more power efficient CPU architecture than Intel.

Remember how AMD was the butt of CPU power consumption jokes? Now the boot is on the other foot, the "power consumption arguments" have all gone...
 

It really is a different story this time around. I was on the Intel boat for quite some time and read all these anti-AMD posts; now it's the other way around it's mighty quiet lol.
 
Not sure why it's so surprising that AMD's brand new architecture is more power efficient than Intel's improved Pentium III design. Obviously a lot has changed but they haven't had the kick in the teeth to really innovate for a long, long time. You also have to remember the efficiency bands go out the window when you overclock. Ryzen at 4.0 GHz is probably a lot more power hungry than at 3.8 GHz, despite barely performing better. Same with Intel - there'll be a point after which it becomes stupidly inefficient to add more MHz. Dunno where that is for Kaby Lake, maybe somewhere between 4.6 and 4.8 GHz?
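Rough illustration of why those last few hundred MHz cost so much: dynamic power scales roughly with frequency times voltage squared, and voltage has to climb steeply near the wall. The voltages below are invented example numbers, not measured Ryzen figures:

```python
# Dynamic CPU power ~ C * V^2 * f; compare two made-up operating points.
points = [
    (3.8, 1.250),   # GHz, volts - comfortable operating point (assumed)
    (4.0, 1.400),   # near the wall, needing a big voltage bump (assumed)
]

f0, v0 = points[0]
for f, v in points:
    rel_power = (f / f0) * (v / v0) ** 2
    print(f"{f:.1f} GHz @ {v:.3f} V -> ~{rel_power:.2f}x the power for {f / f0:.2f}x the clock")
```

With those example voltages you pay roughly 32% more power for about 5% more clock, which is why efficiency falls off a cliff at the top of the curve.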
 

I'd guess so; it's down to the LPP node AMD is using.
Since the new Skylake-X is 140(?)W at 4.3GHz on an 8 core, that should be a fair jump in efficiency, I would guess.
 