
Is Moore's Law slowing down?

Yup, you're right, just me being entirely ignorant! Guess I need to remember things properly before trying to make intelligent posts, lol. Failed miserably!
 
As others have pointed out, Moore's law is about the number of transistors, not processor frequency. However, it is also important to realise that the performance of a processor is not just a matter of clock frequency; it also depends on the micro-architectural efficiency of the design.

All modern processor designs are pipelined, but different pipeline lengths have different efficiencies. At a very crude level you can run a design at a higher frequency by splitting execution in the pipeline into a greater number of smaller steps. Since each stage has less work to do it takes less time, so the clock frequency can be increased ... but because there are more stages, the overall time per instruction stays much the same.
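A toy calculation makes this concrete. Assuming a made-up 10 ns of total work per instruction, splitting it into more stages raises the clock rate but leaves the per-instruction time untouched:

```python
# Toy pipeline model: splitting a fixed amount of work into more
# stages raises the clock frequency but leaves per-instruction
# latency unchanged. The 10 ns figure is an arbitrary assumption.

TOTAL_WORK_NS = 10.0  # assumed time to execute one instruction unpipelined

for stages in (1, 5, 10, 20):
    cycle_ns = TOTAL_WORK_NS / stages    # each stage does less work
    freq_ghz = 1.0 / cycle_ns            # so the clock can run faster
    latency_ns = stages * cycle_ns       # but one instruction still takes as long
    print(f"{stages:2d} stages: {freq_ghz:4.1f} GHz clock, "
          f"{latency_ns:.1f} ns per instruction")
```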

In the late 1990s the big selling feature of processors was definitely clock frequency, with Intel and AMD racing for the first 1GHz processor. In this atmosphere Intel moved to the NetBurst microarchitecture, which had a very long pipeline (20+ stages compared to, I think, 9 in Pentium 3 and maybe 11 in Core), which meant that when they launched it they expected to be able to reach 4-5GHz in due course.

However, there are some significant downsides to ever-increasing pipeline lengths. While longer pipelines mean each stage has less work to do, which allows for higher clock speeds, there are also some overheads in the cycle time which are effectively fixed (to do with things like variability in the signals, especially clocks, propagating across the chip), and as frequencies increase these take an increasing chunk out of the cycle time. Also, the faster the clock speed, the faster the logic in the chip is switching (especially the clock signals!), and this all adds up to increased power.
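Extending the same toy model with an assumed fixed per-cycle overhead shows the diminishing returns: as the stage count grows, the overhead eats an ever larger fraction of each cycle, so the frequency gain flattens off (and since dynamic power scales roughly with frequency, the power bill keeps climbing):

```python
# Same toy model, now with a fixed per-cycle overhead (clock skew,
# latch delay, signal variability). The 0.2 ns overhead is an assumed
# illustrative value, not a real process figure.

TOTAL_WORK_NS = 10.0   # assumed unpipelined execution time
OVERHEAD_NS = 0.2      # assumed fixed cost added to every cycle

for stages in (5, 10, 20, 50):
    cycle_ns = TOTAL_WORK_NS / stages + OVERHEAD_NS
    freq_ghz = 1.0 / cycle_ns
    overhead_pct = 100.0 * OVERHEAD_NS / cycle_ns
    print(f"{stages:2d} stages: {freq_ghz:4.2f} GHz, "
          f"overhead is {overhead_pct:4.1f}% of the cycle")
```

Going from 5 to 50 stages only raises the clock from about 0.45GHz to 2.5GHz in this model, with the fixed overhead swallowing half of each cycle by the end.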

All this meant that the NetBurst architecture turned out to be a dead end. Core is now derived from an update to the Pentium 3 microarchitecture that was developed for mobile processors (Pentium M), where instead of going blindly for speed they evaluated every enhancement for its power and performance implications, to ensure they only made changes that were beneficial on both counts. As this design has a much shorter pipeline, it doesn't need the clock frequency the Pentium 4 had. Meanwhile, Moore's law means it's now possible to put dual/quad cores with several MB of cache on a single processor die, which gives a processor much more powerful than a Pentium 4, while the more efficient microarchitecture (designed with power/performance as the main target, not the macho frequency target of NetBurst) means clock frequencies haven't had to increase.
 
Come on, we had 3GHz P4s two years ago; clock speed isn't going up as quickly now, the increase seems to be in efficiency.

Who agrees?

GHz isn't everything though; in fact it's meaningless if the architecture is inefficient. In terms of processing power, a single core from a Core 2 Duo running at 1.8GHz is actually faster in real terms than a Northwood or Prescott P4 running at 3.2GHz.

As correctly stated in this thread, Moore's law is about the number of transistors, and it still holds at the moment.
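A quick back-of-envelope check, using two figures from memory (the 1971 Intel 4004 at roughly 2,300 transistors and the 2006 Core 2 Duo at roughly 291 million - treat both as approximate), shows the doubling-every-two-years rule still lands in the right ballpark:

```python
# Sanity check of "transistor count doubles roughly every two years".
# Both data points are assumed from memory and only approximate.

START_YEAR, START_COUNT = 1971, 2_300          # Intel 4004 (approx.)
END_YEAR, REPORTED_COUNT = 2006, 291_000_000   # Core 2 Duo (approx.)

years = END_YEAR - START_YEAR
predicted = START_COUNT * 2 ** (years / 2)     # doubling every 2 years
print(f"Predicted for {END_YEAR}: {predicted:,.0f}")
print(f"Reported figure:        {REPORTED_COUNT:,}")
```

The prediction comes out around 430 million against a reported ~291 million: the same order of magnitude, which is about all the "law" was ever meant to give.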

The P4 was a freak, probably a mistake; it was all about one thing: GHz. Core 2 Duo went back to basics (P3 architecture), made some major improvements, and is the true successor to the 80x86 series of processors.
 
I like to think of it in real terms: GHz are a bit like AMD's MHz ratings nowadays. 3GHz quad core = 12GHz ... silly crazy fast ... (though I know this is very much untrue and you'll never get that sort of performance out of the processor).
 
It's about electron leakage.

The circuits within CPUs are so ridiculously small that electrons can - when their energy levels are slightly above their design limits - jump from parts of the circuit to others which they aren't meant to reach. This is one of the causes of the stability problems we know a lot about.

This is where quantum effects really become evident. The electron is essentially in a finite potential well, and when it gets 'excited' to a higher energy level it can tunnel to other parts of the circuitry, like you say.
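For a feel of how sensitive tunnelling is to barrier thickness, here's an order-of-magnitude sketch using the standard T ≈ exp(-2κd) approximation; the 3eV barrier height is an assumed round number, not a real process figure:

```python
import math

# Order-of-magnitude tunnelling probability through a thin barrier,
# T ~ exp(-2*kappa*d), kappa = sqrt(2*m*(V - E)) / hbar.
# The 3 eV barrier height is an assumed illustrative value.

HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.10938e-31      # electron mass, kg
EV = 1.602176e-19      # joules per eV

barrier_ev = 3.0       # assumed barrier height above the electron's energy
kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR   # decay constant, 1/m

for d_nm in (3.0, 2.0, 1.0):
    t = math.exp(-2 * kappa * d_nm * 1e-9)
    print(f"{d_nm:.0f} nm barrier: T ~ {t:.1e}")
```

Halving the barrier from 2nm to 1nm raises the tunnelling probability by roughly eight orders of magnitude, which is why shrinking features makes leakage blow up so fast.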

Also, I remember reading somewhere about a Canadian company that claims to have tested the first quantum computer - anyone got any more info on this?
 
Parallel computing is the new focus. Intel currently have a working prototype of an 80-core CPU in their R&D labs, for example, with homogenised "simple" cores. So say 20 are designed and dedicated to floating-point ops, 20 to multimedia stuff, 20 to something else, etc. etc.

The problem is that silicon is a limited material, and we are fast running into its barriers. 45nm processes are nearly as small as silicon can realistically be taken, thus limiting the transistor count.

Similarly, just ramping up the GHz isn't feasible with silicon, or indeed many materials, for obvious electrical and heat reasons.

So simpler cores, but many of them, seems to be the answer. The downside is that parallel programming is a whole new ball game, and current thinking needs a thorough working-over to reap any sort of benefit from a massively parallel system; indeed, a badly thought-out parallel implementation can be slower than a well thought-out serial one, due to message-passing overheads and wait times. Similarly, if each core is simplified and slower, then the current tactic - letting well-made serial programs spread over multiple CISC cores so that several apps run at once - stops being feasible, as the simplified cores won't be able to handle it.
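A minimal sketch of that trade-off, combining Amdahl's law with an assumed per-core communication cost (the 10% serial fraction and 2ms-per-core overhead are illustrative numbers only):

```python
# Why a badly parallelised job can lose to a good serial one:
# Amdahl's law plus a per-core communication/synchronisation cost.
# All figures here are assumed for illustration.

SERIAL_FRACTION = 0.10        # assumed fraction that can't be parallelised
WORK_MS = 1000.0              # assumed total serial runtime
OVERHEAD_PER_CORE_MS = 2.0    # assumed message-passing cost per core

for cores in (1, 4, 16, 64, 256):
    serial_ms = WORK_MS * SERIAL_FRACTION
    parallel_ms = WORK_MS * (1 - SERIAL_FRACTION) / cores
    total = serial_ms + parallel_ms + OVERHEAD_PER_CORE_MS * cores
    print(f"{cores:3d} cores: {total:7.1f} ms "
          f"(speedup {WORK_MS / total:4.2f}x)")
```

In this model the speedup peaks around 16 cores and then falls away as the communication overhead dominates - by 256 cores you're barely faster than the single-core run.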

The revolution is a-comin', and it's going to require a lot of people to re-learn what they think they already know!
 

The person/team/company that gets quantum computing working will clean up. Which leads me to believe that nobody has practically implemented quantum computing beyond the basics yet.

The problem is getting above 7 quantum 'transistors', because containment becomes an issue, i.e. ensuring the photon/qubit keeps the information stored on it. Think of the problems our computers would have if - all of a sudden - half the transistors stopped working.
 
Moore's Law isn't slowing down. This was discussed at a seminar I attended for my job with the world's leaders in data centres. The problem is we are reaching the limit of what we can effectively cool, hence why the manufacturers are now concentrating on efficiency and multiple cores. But the density of computing power we can fit inside the same space is still adhering to Moore's law.

Throughout the world of data centres, most are at full capacity for cooling and power, but have tonnes of rack space yet to fill!
 
Check it out - one of the slides:

[attached image: wattschip.jpg]
 

Someone from IBM gave a keynote speech on quantum computing at MPF 2000 (the same meeting where Intel launched NetBurst and also described the initial low-power Pentium M developments). He described how they had built a "computer" that used quantum states to evaluate factors of 4-5 bit values - because a quantum system effectively computes all states in parallel, you basically wrote in the value and could then read out the factors. He added that if (and it was a big if!) they could scale this to larger numbers, the implications were quite "interesting": basically all current computer encryption schemes such as RSA, which rely on the factoring of large numbers being "difficult", would become obsolete!
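For comparison, here's what the classical equivalent of that demo looks like: trial-division factoring is trivial at 4-5 bits, but its search space grows exponentially with the bit length, which is exactly the "difficulty" RSA relies on:

```python
# Classical trial-division factoring. Trivial for the 4-5 bit values
# the quoted demo handled; hopeless at RSA sizes, since the number of
# candidate divisors grows exponentially with the bit length.

def factor(n):
    """Return a non-trivial factor pair of n, or None if n is prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

print(factor(15))   # (3, 5) - a 4-bit value
print(factor(21))   # (3, 7) - a 5-bit value
# For a 2048-bit RSA modulus, trial division needs on the order of
# 2**1024 candidate divisors - which is what a scalable quantum
# factoring machine would sidestep.
```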
 
Yup - which is why every single government in the world is absolutely terrified of quantum computing, because it would make all of their encryption efforts to date pointless.

But that's where quantum encryption comes in... although it will always be a case of escalation...
 

Highly parallel computing is not suitable for the vast majority of home users. The number of applications that can be effectively parallelised in that domain really isn't all that high.

Scientific and mathematical applications tend to lend themselves better to these areas, however.

On the point that "silicon is a limited material, and we are fast running into its barriers": the main problem at the moment is with lithography techniques, as far as I'm aware. Deep ultraviolet is a major research interest for many companies, and so are soft X-rays. The problem with these is that at those wavelengths you cannot use refractive optics very easily (UV) or at all (X-rays). Diffractive optics look very interesting, and I was researching Zone Plate Array Lithography very recently, which looks quite promising.

The problem with simply increasing clock speeds is that pretty soon you run into relativistic effects, which limit the physical distance data (electrons) can travel through the system/CPU in each clock cycle. At 3GHz, something travelling at c covers 10cm per clock cycle. The data isn't going to be travelling at c, though, and you can imagine that at 20GHz, where each cycle it travels less than 1.5cm, you soon run into issues as parts of the system have to wait for the physical transfer of data... so you try to make the system smaller...
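Those distance-per-cycle figures are just d = c/f; a couple of lines reproduce them:

```python
# Maximum signal distance per clock cycle at lightspeed: d = c / f.
# Reproduces the 10 cm (3 GHz) and 1.5 cm (20 GHz) figures above.

C = 299_792_458.0  # speed of light, m/s

for f_ghz in (1, 3, 10, 20):
    d_cm = C / (f_ghz * 1e9) * 100
    print(f"{f_ghz:2d} GHz: at most {d_cm:5.1f} cm per cycle")
```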

The problem with making things smaller, however, is that you soon start to run into quantum effects: effective confinement of the electrons in the semiconductor depletion region becomes more and more difficult.
 

That jumping you describe is quantum tunnelling.

Stopping chips getting too hot, e.g. with aftermarket coolers, prevents the electrons getting too excited and jumping around in ways they aren't meant to, meaning the chip stays stable as it's asked to do more.

But what you talk about here sounds more like electromigration (rare these days, but still possible). Remember, the difference in thermal energy of an electron at 300K (say a cool CPU, about 30°C) and at 330K is only 10%...
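That 10% is just the ratio of absolute temperatures, since mean thermal energy scales with kT:

```python
# Thermal energy scales with absolute temperature (E ~ kT), so a
# 300 K -> 330 K rise is only a 10% increase in thermal energy.

K_B = 1.380649e-23  # Boltzmann constant, J/K

e_300 = K_B * 300
e_330 = K_B * 330
print(f"kT at 300 K: {e_300:.3e} J")
print(f"kT at 330 K: {e_330:.3e} J "
      f"({(e_330 / e_300 - 1) * 100:.0f}% higher)")
```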
 
Further to the comment that "silicon is a limited material, and we are fast running into its barriers. 45nm processes are nearly as small as silicon can realistically be taken"... 32nm is definitely in development, and people are even talking about 15nm prototypes:

http://www.theinquirer.net/gb/inquirer/news/2007/12/13/nanometre-memory-tested

(NB: to pre-empt the "it's memory and not a processor" response... I think process development traditionally uses memory as one of the initial technology drivers.)
 

I did say nearly :p

Either way, it's a finite route for development; arguably this would be OK if we didn't progress to the next fabrication size down so quickly or so often. Clearly there is no readily available contender to the silicon/electrical combination for producing ICs, with quantum computing quite a way off and alternatives such as light pathways not really realised yet. Therefore I can assure you the current thinking in large organisations such as Intel is to go highly parallel ASAP. They even have dev kits freely available to debug your parallel code.

All software can be parallelised to some degree, though it is true to say that number-crunching apps such as models are going to see the better gains.

The idea, though, is to have non-homogenised cores suited to one specific task, be that multimedia calculations or FP calculations etc. So if you're writing a codec to decode HD content, for example, you design it so that the 20 or so "multimedia" cores take that load and process it as quickly as possible; similarly, if you're doing FP calculations then the 20 to 40 FP-centric cores take that load, leaving the rest to do what they do best.

Ideally the traditional serial programming style would be maintained, with the compiler etc. doing all the work for the programmer. Realistically, however, the actual structure of a program needs to be designed with parallel operation in mind.
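As a minimal sketch of what that restructuring looks like in practice (the cpu_bound() workload here is a made-up stand-in), the same loop can be written serially or handed to a pool of worker processes:

```python
# The same work expressed in the traditional serial style and in a
# parallel style using Python's standard multiprocessing pool.
# cpu_bound() is an assumed stand-in for real number-crunching.

from multiprocessing import Pool

def cpu_bound(n):
    return sum(i * i for i in range(n))  # stand-in for real work

inputs = [2_000_000] * 8

if __name__ == "__main__":
    serial = [cpu_bound(n) for n in inputs]   # traditional serial loop
    with Pool() as pool:                      # same work, spread over cores
        parallel = pool.map(cpu_bound, inputs)
    assert serial == parallel
```

Even in this trivial case the program had to be structured around independent work items; code with shared state or ordering dependencies needs far more redesign than this.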

Once we have massively parallel implementations like this, Moore's law becomes less clear-cut, though I still believe it to be a reasonable approximation (which is all it was ever intended to be).
 
Don't forget gaming, once you factor in ray-tracing.


A website did a comparison of ray-tracing and the raster techniques used at the moment... very interesting.

It basically came to the conclusion that ray-tracing was far and away better than raster graphics for some applications but simply couldn't do others; basically, the ideal graphics platform would be a hybrid of the two, simply because the limitations of both were too significant to be effectively mitigated. I can't remember what website it was (some dedicated 3D website) but it was a very good article.

Daz: do bear in mind that I'm a metallurgist/accountant, not an electronic engineer, and am quite likely to get the two confused. :)
 