
Is Moore's Law slowing down?

Soldato · Joined 8 Sep 2003 · Posts 23,173 · Location: Was 150 yds from OCUK - now 0.5 mile; they moved
Come on, we had 3GHz P4s two years ago; speed isn't going up as quickly now, the increase seems to be in efficiency.

Who agrees?
 
This crossed my mind the other day actually. My dad has a 2.6GHz Celeron that is years old, and I'm currently fixing a 3GHz P4 for a mate.
Yes, it definitely seems to be an improvement in efficiency, doesn't it? Clock speed seemed to rocket up to around 3GHz, then halt as duals and quads and revised chips were released. There's no doubt that CPUs are quicker though.

You're probably just greedy like me and want a 10GHz quad-core ;):p
 
Moore's law isn't simply "speed will double"; it's taken out of context.

It's the number of transistors that will double every two years, so it's still right IMO. This is usually taken to mean power will increase as such, but it doesn't need to.
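To make the transistor-count reading concrete, here's a minimal sketch (my own illustration, not from the thread) of what a doubling every two years implies:

```python
def moore_projection(base_count, base_year, year, period_years=2):
    """Project a transistor count assuming it doubles every `period_years`."""
    doublings = (year - base_year) / period_years
    return base_count * 2 ** doublings

# Starting from roughly 2,300 transistors (Intel 4004, 1971),
# ten years means five doublings, i.e. a 32x increase:
print(moore_projection(2300, 1971, 1981))
```

Note the law says nothing about what those transistors are spent on: clock speed, extra cores, and cache are all ways of cashing in the same transistor budget.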
 
Moore's law isn't simply "speed will double"; it's taken out of context.

It's the number of transistors that will double every two years, so it's still right IMO. This is usually taken to mean power will increase as such, but it doesn't need to.

Isn't it every 18 months?...

Transistors have little to do with GHz, right?

The thing I'm curious about is why no one has made a CPU, say, double the size (in each dimension) of the current ones. It would have four times as many transistors, and it should be reasonably faster, right? Of course you'd need a new mobo and probably a bigger case, but I think it would be a good idea?
Here are some quotes:

Researchers from IBM and Georgia Tech set a new speed record when they ran a helium-supercooled silicon-germanium transistor at 500 gigahertz (GHz).

In early 2006, IBM researchers announced that they had developed a technique to print circuitry only 29.9 nm wide using deep-ultraviolet (DUV, 193-nanometre) optical lithography. IBM claims that this technique may allow chipmakers to use current methods for seven years while continuing to achieve the results forecast by Moore's Law.

However, if you consider the overall speed of the processor, not just GHz, you must take into account "Wirth's law":
Software gets slower faster than hardware gets faster.
 
Heat problems, I guess, in answer to your questions. A larger die will also have a larger number of defects in manufacturing and give less profit per wafer.

And he states it is every two years, although he has been quoted at 18 months; I don't know where from. The first mention of it is in a magazine Intel bought an edition of last year, and it clearly states two years.

Yep, transistor count has little to do with GHz, but it answers the point about "efficiency" previously raised and takes into account dual cores and quads etc., instead of the frankly crude basis of clock speed (which should only be used when comparing chips within the same family).
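The yield point can be put in rough numbers with the classic Poisson yield model (a standard textbook approximation; the defect density and wafer figures here are invented for illustration, not from the thread): the fraction of defect-free dies falls exponentially with die area, so doubling the area more than doubles the cost per good die.

```python
import math

def poisson_yield(defect_density, die_area):
    """Fraction of good dies: Y = exp(-D * A) under the Poisson defect model.
    defect_density in defects/cm^2, die_area in cm^2."""
    return math.exp(-defect_density * die_area)

def good_dies_per_wafer(wafer_area, die_area, defect_density):
    """Rough count of working dies per wafer, ignoring edge losses."""
    return int(wafer_area / die_area * poisson_yield(defect_density, die_area))

# A 300 mm wafer is about 706 cm^2; assume 0.5 defects/cm^2 (hypothetical):
print(good_dies_per_wafer(706, 1.0, 0.5))  # 1 cm^2 dies
print(good_dies_per_wafer(706, 2.0, 0.5))  # double the die area: far fewer good dies
```

The double hit is visible here: a bigger die means fewer dies fit on the wafer, and each one is more likely to contain a killer defect.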
 
Staying with heat: how come enthusiasts can overclock CPUs with third-party coolers? If the chip manufacturers changed their coolers, wouldn't they be able to increase speeds past 3GHz?
 
Yep, transistor count has little to do with GHz, but it answers the point about "efficiency" previously raised and takes into account dual cores and quads etc., instead of the frankly crude basis of clock speed (which should only be used when comparing chips within the same family).

In 1965 he said transistor counts would double every year. It wasn't until 1970 that this started to be called "Moore's law", though.

In 1975 he changed his prediction to a doubling every two years.

Wikipedia said:
Despite popular misconception, he is adamant that he did not predict a doubling "every 18 months."
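The cadence matters a lot over time. A quick sketch (my own, not from the thread) of how far the 12-, 18-, and 24-month readings diverge over a decade:

```python
def growth_factor(years, doubling_period_years):
    """Total multiplier after `years` if the count doubles every
    `doubling_period_years`."""
    return 2 ** (years / doubling_period_years)

# Over ten years: every year -> 1024x, every 18 months -> ~102x,
# every two years -> 32x. The "18 months" misquote triples the claim.
for period in (1.0, 1.5, 2.0):
    print(f"every {period} years: {growth_factor(10, period):.0f}x")
```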

 
Moore's law isn't simply "speed will double"; it's taken out of context.

It's the number of transistors that will double every two years, so it's still right IMO. This is usually taken to mean power will increase as such, but it doesn't need to.


This man is precise and correct. Ignore clock speeds; it's transistors you need to pay attention to.
 
Staying with heat: how come enthusiasts can overclock CPUs with third-party coolers? If the chip manufacturers changed their coolers, wouldn't they be able to increase speeds past 3GHz?

It's about electron leakage.

The circuits within CPUs are so ridiculously small that electrons can, when their energy levels are slightly above their design limits, jump from one part of the circuit to other parts they aren't meant to reach. This is one of the causes of the stability problems we know a lot about.

Stopping chips getting too hot, i.e. with aftermarket coolers, prevents the electrons getting too excited and jumping around in ways they aren't meant to, meaning the chip will be stable as it's asked to do more.

Why don't they bundle these massive coolers with the chips? First off: cost... the stock coolers Intel and AMD bundle with their chips probably cost them about £1.50 each. A massive copper construction would push their chips' prices a lot higher.

"But what about higher clock frequencies achievable with those coolers?" I hear you ask. The chips need to function at a given voltage with a given heat dissipation, and if they can't do that, then they aren't any good for the majority of PC users, i.e. Joe Bloggs.

This is because all chips have errors built into them: the ones we've noticed in Yorkfield and Barcelona are just a bit more obvious. If there are loads of errors in a chip, it needs more voltage to work at a given frequency, which is why it's normally the lower-speed chips that are released first, i.e. while they sort the problems out. It's also why the first CPUs of a line are not brilliant overclockers (yes, Conroe was great out of the box, but that's because it's a Pentium M with knobs on, which had been around for quite a bit longer).

This also works the other way round: towards the end of a processor run, chips will overclock miles better than the first ones on the market, which is why we see AMD chips now hitting 3.7GHz on air, etc. As the processes become more refined, the chips have fewer errors in them and can work at the same frequency with less voltage, and are subsequently sold for less. This is called speed-binning.

Fabrication processes typically have maximum frequencies until they're no longer a practical fabrication option for chip makers. Sure, we can overclock Core2 chips to hit 3.2-3.8GHz without too much drama, but for big OEMs, any drama at all means a returned PC, loss of reputation, etc.
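The binning story above can be caricatured in a few lines (a toy Monte Carlo of my own; the defect thresholds and bin names are invented, not real Intel/AMD criteria):

```python
import random

def bin_chips(n_chips, defect_rate, seed=0):
    """Toy speed-binning: each chip gets a random defect count; more
    defects means a lower maximum stable frequency, i.e. a cheaper bin."""
    rng = random.Random(seed)
    bins = {"3.0GHz": 0, "2.66GHz": 0, "2.33GHz": 0, "reject": 0}
    for _ in range(n_chips):
        # crude stand-in for defects scattered across the die
        defects = sum(rng.random() < defect_rate for _ in range(100))
        if defects <= 2:
            bins["3.0GHz"] += 1
        elif defects <= 5:
            bins["2.66GHz"] += 1
        elif defects <= 10:
            bins["2.33GHz"] += 1
        else:
            bins["reject"] += 1
    return bins

# As the process matures (defect rate falls), the top bin fills up:
print(bin_chips(1000, 0.08))  # early process run
print(bin_chips(1000, 0.02))  # mature process run
```

This is why late-run chips of the same family tend to overclock better: the population has shifted towards the low-defect end, and the binning cut-offs stay where marketing put them.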
 
Come on, we had 3GHz P4s two years ago; speed isn't going up as quickly now, the increase seems to be in efficiency.

Who agrees?

Not really... While the clock speed may not be skyrocketing, that's only one part of a CPU's performance...

We have 3GHz Core 2 Duos which seem to absolutely crush the 3GHz P4s in each and every way, be it performance, power consumption, multitasking, noise, everything.

Clock speed is an easy but electronically inefficient way to increase processor pushing power.
 
So we should have an exponential graph, which we don't.

edit: ah sorry, the y-axis is on a log scale.
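That's the key property of a log axis: steady doubling shows up as a straight line. A quick check (my own sketch, not from the thread):

```python
import math

# Counts doubling every two years are exponential in time, but their
# log2 values increase by a constant step: a straight line on a log axis.
counts = [2300 * 2 ** (t / 2) for t in range(0, 11, 2)]  # years 0, 2, ..., 10
log_counts = [math.log2(c) for c in counts]
steps = [round(b - a, 6) for a, b in zip(log_counts, log_counts[1:])]
print(steps)  # each two-year step adds exactly one doubling
```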
 
If anything it is speeding up, thanks to the multi-core revolution. Although if AMD doesn't buck their ideas up soon, we could be in for a slowdown, as Intel would have nothing to compete against.
 
Imagine how much of a future CPU's transistor budget will just be cache. In 2020 I predict we will have CPUs with 256MB of cache making up 90% of the transistors on them.
 
My take on it is:

Manufacturers don't want to throw money at increasing clock speed when they can just add more cores to divide the computations up. Four cores each doing a quarter of the work is often going to be faster than one doing the whole lot (this only works with binary equipment; as we all know, 1 human is always quicker than 4).

With clock speed at a fairly stable level, they've been able to concentrate on other areas of the processor which can give an increase in perceived speed. For example, my five-year-old XP2400+ has a mere 256KB of cache; the latest chips have anything between 1 and 4MB, which is a massive increase over what used to be available.

I feel we won't see any more major increases in clock speed until the heat and electromigration problems can be overcome; it's all going to come from increased cache and more cores for a while, I reckon.
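One caveat to "four cores doing a quarter of the work": only the parallelisable fraction of a job scales with core count. Amdahl's law (a standard result, not something from this thread) makes the ceiling explicit:

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Overall speedup when only `parallel_fraction` of the work can be
    spread across `n_cores`; the rest stays serial (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even with 90% of the work parallelisable, four cores give about 3.1x,
# not 4x, and no number of cores can ever beat 1/0.1 = 10x:
print(amdahl_speedup(0.9, 4))
print(amdahl_speedup(0.9, 10**6))
```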
 
90% sure that's a RISC (reduced instruction set) chip, not a CISC (complex) chip. If we all had RISC chips we'd have higher speeds right now, but programs would have to be rebuilt from scratch, which isn't really viable unless you're IBM.

I think the material that the chips are made out of has to be refined further before we really start seeing high GHz.

Actually the current generation of chips is RISC internally, except that they don't expose those instructions to software. Intel calls them "micro-operations" or "uops", whereas the CISC instructions (like x86 and AMD64) are referred to as "macro-operations".

Also there is no (or at least very, very little) correlation between clock speed and whether a chip is RISC or CISC.
 
Possibly, although both Intel and a couple of other companies have some rather nifty tricks up their sleeve...

Nothing like using diamond for the actual chip, and using photons instead of electrons... it's still a couple of years out, but - on paper - the research looks impressive.


The next paradigm shift will come with quantum computing whenever that becomes a practical consideration... IBM have made a calculator out of one using only six or seven quantum 'transistors', but pretty much every research group is having problems scaling it past seven of these 'transistors'.

The problem with just adding more cache to processors is that it's a really expensive way of improving performance. It works well, but it means the processor winds up costing huge amounts. Think of Intel's first Extreme Edition: the only way they could get their P4 architecture to compete with the A64 was by bolting an enormous cache onto it... or rather, using the Gallatin Xeon core and squeezing it into a P4 costume.

Cache helps in a lot of situations, but it doesn't mean much if the processor can't do much with the data in the first place.

As transistors continually shrink, the only way of making chips faster is by bolting more cores onto the die. This again has the limits of heat dispersal to overcome, as packing more and more into the same space gives rise to (no prizes) increased heat. Then comes writing applications that use such processors effectively. The Xbox 360 and PS3 have helped the gaming market tremendously in this regard, but it remains to be seen whether the rest of the application market can keep up, too.

What may happen is chip designers start using the cores themselves as they would transistors: splitting chips down into component parts by having x number of cores for arithmetic, y number of cores for FP, etc.

Either way, the processors of two years' time, let alone five, are going to be massively more complicated than what we're playing with today.
 