
Intel’s 10nm Cannonlake Delayed to 2H 2017

Viable if you want to buy a second-hand Xeon only, and miss out on all the new fun stuff, such as USB 3.0, USB 3.1, SATA 3, PCIe 3.0, M.2, UEFI fan control, SATA Express, DDR4 speed/capacity, and a cooler, quieter, more power-efficient system (1366 Xeons' power consumption when overclocked is crazy).

Had a 980 on an Asus WS board before my current setup.

USB 3.0/SATA 3 - yep, had that via a single PCIe expansion card
UEFI fan control - don't use it now, use a fan controller
DDR4 - little improvement over DDR3 for almost everyone so far. Had 12GB, which was and still is enough for pretty much everything, with three slots to spare
SATA Express - what a waste of time, practically nothing uses it
Power efficiency - a 32nm six-core is not too bad even when clocked to 4.4GHz
PCIe 3.0 - little gain over PCIe 2.0, only just now starting to show its worth for most people
M.2 - can be added via a PCIe card, probably not as a boot drive though

Not bad for a motherboard bought seven years ago!
 

Plus, add to that, there are 1366 boards which have USB 3.0 and SATA 3 onboard. I used to have one, a Gigabyte X58A-UD3R.
 
From what I've read, 2H '17 for desktop at least is unrealistically optimistic. Anywhere between Q1 and Q3 '18 is more likely IMO. It could easily turn into another Broadwell.
 
Meh, computers have reached a stage where CPU ability is becoming less important.

For most things Sandy Bridge is still enough.
 
CPUs have really not gone anywhere in the last five years and, from the look of it, won't for the next five.

The itch always has me looking for new CPUs etc., but the reality is my 2500K is still so strong; the main reason I can see to upgrade nowadays is the new motherboard features.
 
Don't see myself moving from X99 for a long time. Although if Zen is half decent I will give it a go. Would be nice to try something other than Intel for a change.
 
I don't think it's right to say that CPUs haven't gone anywhere in the last few years...
They just haven't been improving in the areas where enthusiasts would like them to.

We've had speed improvements, alongside aggressive power savings. Works for me!
 
Skylake is roughly 60% faster than Nehalem clock-for-clock. It's not that this improvement is bad or not appreciated, it's just that the improvement was over 7 years and it doesn't compare favourably to the improvements made 7 years prior (when the CPU market was much more competitive). Still 4 cores, no noticeable improvement to HyperThreading, no game-changing new instruction sets, no huge jump in clock speeds, fabrication and heat issues, faster PCIe and RAM making minimal difference, etc.

Part of the issue is that GPUs are now much more important for many applications, including gaming, which isn't exactly Intel's fault. The fact that prices go up every generation doesn't help either.
 
IMO Intel got lucky that the mobile market took off when it did, and there's a big demand now for CPUs running on as few watts as possible. Mobiles, tablets, Surface etc. were doing more with fewer watts, which is a good thing: it's greener and more efficient. Carrying on and pumping more power at the problem wasn't really an option, because you simply can't cool a 2cm-square surface that has 200 watts of heat coming out of it. Moore's Law says transistor counts keep doubling, and the lovely side effect of that is that performance usually goes up, but at some point exponential growth has to stop. GPUs seem to be making better gains, but that's more to do with the type of calculations they perform. Maybe a few more drops of performance can be squeezed out of silicon. Maybe this is it. Now here's Carol with the weather... :)
 

Thing is, 15 to 7 years ago was a golden CPU age. Things were booming; people were trying all sorts of new things and new ways to build a better architecture.
Now both Intel and AMD know how things work, and there are fewer and fewer NEW things they can use to improve.
That being said, HT and multithreading have got better and better with every new architecture.
Instruction sets? Are the AVX ones non-existent?
Also, there is a LIMIT to clock speeds, and small jumps are there with almost each new wave.
Fabrication and heat issues are a product of manufacturing decisions. You'd think the newer CPUs are cheaper to produce, but that is most likely wrong; think of iGPUs and how much they advanced from the first gen up until Skylake.

Also, why would they:
1) add new cores
2) try to raise clock speeds

The great majority of apps still use one core, and those that use multiple are doing it in non-optimal ways (most of the time).
I feel that at this point in time hardware isn't the problem; it's actually software: poorly written software, old software. I think we are currently trying to kill a mosquito with a handgun, and because we miss, the first thought is that we need a bigger gun, when in truth all we need is a zapper.

There are barely a few games that use anything close to 3-4 cores, and I would wager they are not doing things in the most optimal way, for one of two reasons:
1) lack of time to refactor
2) dependencies on other modules

In most cases it will be the first reason. Everything is rushed out the window and patched afterwards. Refactoring for optimisation should be an ongoing part of any big project (it rarely is, but this would be the best way to do it).

/endrant
 
Thing is, 15 to 7 years ago was a golden CPU age. Things were booming; people were trying all sorts of new things and new ways to build a better architecture.
Yes, partially because of competition.

Now both Intel and AMD know how things work, and there are fewer and fewer NEW things they can use to improve.
That being said, HT and multithreading have got better and better with every new architecture.
Says who? Where are the benchmarks to back this up?

Instruction sets? Are the AVX ones non-existent?
What uses AVX? It is not game-changing by any means.

Also, there is a LIMIT to clock speeds, and small jumps are there with almost each new wave.
Fabrication and heat issues are a product of manufacturing decisions. You'd think the newer CPUs are cheaper to produce, but that is most likely wrong; think of iGPUs and how much they advanced from the first gen up until Skylake.
The only reason they are die-shrinking is to increase the space available to iGPUs. Since they refuse to release mainstream parts with more than four cores, they could have just stuck to 32nm for the unlocked models and ditched the iGPU. Potentially cheaper to produce, although it depends on how the fabs are set up, I suppose. Clock speeds haven't really increased since Sandy Bridge: the i7-2700K has the same clock speeds as the i7-4770K. The i7-6700 doesn't even increase this; the i7-6700K does, but it also hugely decreases turbo to a point where the increase is minimal. The unlocked chips are obviously designed for overclocking, though, and one of the sticking points for Haswell in initial reviews was that it didn't clock as well as the previous generation. Then there's the whole TIM vs solder issue...

Also, why would they:
1) add new cores
2) try to raise clock speeds

The great majority of apps still use one core, and those that use multiple are doing it in non-optimal ways (most of the time).
I feel that at this point in time hardware isn't the problem; it's actually software: poorly written software, old software. I think we are currently trying to kill a mosquito with a handgun, and because we miss, the first thought is that we need a bigger gun, when in truth all we need is a zapper.
The primary reason is competition. But yes, software is often not written well for multithreaded CPUs, and I feel as if programmers didn't take the whole multithreaded thing seriously at first and are now playing catch-up.

There are barely a few games that use anything close to 3-4 cores, and I would wager they are not doing things in the most optimal way, for one of two reasons:
1) lack of time to refactor
2) dependencies on other modules

In most cases it will be the first reason. Everything is rushed out the window and patched afterwards. Refactoring for optimisation should be an ongoing part of any big project (it rarely is, but this would be the best way to do it).

/endrant
Agreed but the reality is that games are not the be-all and end-all. Plenty of applications can and do exploit multiple CPU cores and would benefit from 6-8 core CPUs being mainstream by now. Games would too, but perhaps not as much. The reason 6-8 cores are not in mainstream CPUs is because of...you guessed it...lack of competition.

As a side note, I hate the current development cycle for games where they are essentially released in beta form and patched until they work for the first few months. It also contributes to a damaging culture of "this'll do, we'll program it how we're meant to later" and, of course, there's never any time to do that later because which business is going to justify spending thousands of pounds on "investment" when they could keep churning out new sub-standard products to customers? Shareholders don't care about the long-term benefits of taking the time to do things properly, sadly.
 
Competition or no competition, Intel are shooting themselves in the foot. I'll give you a reason why they should bother: I've had my Sandy Bridge for, I guess, five years. How many CPUs have I bought in that time? Zero. How many times have I bought a new top-of-the-line GPU in that time? I dare not think, let alone add the costs up. Simply put, the GPU manufacturers give you a reason to spend your disposable income with reasonable upgrades. You have to wonder why Intel would not be happier getting £300 every year instead of a big fat nothing from those of us who like to keep our systems the best.
 
Yes, partially because of competition.
Of course the lack of competition plays a part (and not a small one), but I did not stress that because it was the obvious reason for some of the problems.

Says who? Where are the benchmarks to back this up?
Take a look at Haswell vs Skylake: open a spreadsheet and write some formulas. I did, and my conclusion at the time was that Skylake had an average of 7% single-core improvement and around 12-14% multi-core improvement over Haswell. You could say that multithreading works better on Skylake.

What uses AVX? It is not game-changing by any means.
Hmm, game engines should use AVX. Also take one thing into consideration: CPU instruction sets are rarely used directly :) (most of the time). Basically anything that uses floating-point computation can take advantage of the new AVX instruction sets.

The only reason they are die-shrinking is to increase the space available to iGPUs. Since they refuse to release mainstream parts with more than four cores, they could have just stuck to 32nm for the unlocked models and ditched the iGPU. Potentially cheaper to produce, although it depends on how the fabs are set up, I suppose. Clock speeds haven't really increased since Sandy Bridge: the i7-2700K has the same clock speeds as the i7-4770K. The i7-6700 doesn't even increase this; the i7-6700K does, but it also hugely decreases turbo to a point where the increase is minimal. The unlocked chips are obviously designed for overclocking, though, and one of the sticking points for Haswell in initial reviews was that it didn't clock as well as the previous generation. Then there's the whole TIM vs solder issue...
Well, after the 4770K there is a 4790K (for the clock speed argument).
Also, the problem wasn't the TIM; it was the other black stuff, which was applied too thick and made the gap between heat spreader and die a little too big.
Other than that I am with you on this: lately the CPU die is getting smaller and smaller, and what's taking up a lot of space is actually the iGPU.
You can't ditch the iGPU, because a LOT of PCs do not have a dGPU right now. The PC at work has a 4770 and no GPU. Think of ALL the office PCs which do NOT need video cards (designers and such do not count here).

The primary reason is competition. But yes, software is often not written well for multithreaded CPUs, and I feel as if programmers didn't take the whole multithreaded thing seriously at first and are now playing catch-up.
It wasn't taken seriously at first because the concept wasn't implemented correctly at first.


Agreed but the reality is that games are not the be-all and end-all. Plenty of applications can and do exploit multiple CPU cores and would benefit from 6-8 core CPUs being mainstream by now. Games would too, but perhaps not as much. The reason 6-8 cores are not in mainstream CPUs is because of...you guessed it...lack of competition.
6-8 cores are mainstream: AMD has 8 cores at a mainstream price. Yeah, Intel has 6/8-core parts too, but charges a lot more, because core for core and clock for clock it is offering more performance.
And those many applications that exploit multiple CPU cores are where? Are we talking server apps? Because servers have plenty of cores.
Are you talking about video/image rendering? Well, it's simple: if you're doing it for a job, then you should already have a Xeon, or dual Xeons at that. If you don't, then what's the problem? You build a PC for what you use it for; if you want a PC that is good at everything, then you pay.

With that being said, yes, I would like the next i7 generation, the successor on 1151, to have 6 cores, and the 2011 successor to have 8+.

As a side note, I hate the current development cycle for games where they are essentially released in beta form and patched until they work for the first few months. It also contributes to a damaging culture of "this'll do, we'll program it how we're meant to later" and, of course, there's never any time to do that later because which business is going to justify spending thousands of pounds on "investment" when they could keep churning out new sub-standard products to customers? Shareholders don't care about the long-term benefits of taking the time to do things properly, sadly.
Yes, I do not like the development cycle for games at the moment (mostly the ones coming from big publishers).
That being said, I would NOT lay all the fault on the devs. There are deadlines, and sometimes, for the uppers to keep a clean face, the lowers must cut some corners and build some shady lumber bridges. That's how I feel the current gaming industry works right now.
 