Aye, imagine how many cores there will be 10 years down the line...
Come on, we had 3GHz P4s two years ago; clock speed ain't going up as quickly now, the gains seem to be in efficiency.
Who agrees?
It's about electron leakage.
The circuits within CPUs are so ridiculously small that electrons can, when their energy levels rise slightly above the design limits, jump from parts of the circuit to others where they aren't meant to be. This is one of the causes of the stability problems we know so much about.
This is where quantum effects really become evident. The electron is essentially sitting in a finite potential well, and when it gets 'excited' to a higher energy level it can tunnel into other parts of the circuitry, like you say.
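A rough back-of-the-envelope for anyone curious, using the textbook rectangular-barrier model rather than a real gate-oxide calculation: for an electron of energy $E$ meeting a barrier of height $V_0$ and width $L$, the tunnelling probability is roughly

$$ T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar} $$

so the leakage grows exponentially as the barrier (think gate oxide) gets thinner, which is why each process shrink makes tunnelling leakage disproportionately worse.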
Also, I remember reading somewhere about a Canadian company that claims to have tested the first quantum computer; anyone got any more info on this?
The person/team/company that does this successfully will clean up, which leads me to believe that nobody has practically implemented quantum computing beyond the basics yet.
The problem is getting above 7 quantum 'transistors' (qubits), because containment becomes an issue, i.e. ensuring the photon/particle keeps the information stored on it rather than losing it. Think of the problems our computers would have if, all of a sudden, half the transistors stopped working.
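To make the scale concrete, here's a toy state-vector sketch in Python (numpy only; the helper names are mine, and it doesn't model the containment problem itself): n qubits carry 2^n complex amplitudes at once, and that whole joint state is what gets lost when containment fails.

import numpy as np

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0  # start in |000>

# Hadamard and identity gates
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

def on_qubit(gate, target, n):
    """Tensor a single-qubit gate up to the full n-qubit register."""
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else I)
    return op

# Put every qubit in superposition: the register now holds all 2**n
# basis states at once, with equal weight.
for q in range(n):
    state = on_qubit(H, q, n) @ state

print(np.round(np.abs(state)**2, 3))  # probability 1/2**n for each basis state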
Parallel computing is the new focus. Intel, for example, currently have in their R&D labs a working prototype of an 80-core CPU made of many 'simple' cores. So say 20 are designed and dedicated to floating point ops, 20 to multimedia stuff, 20 to something else, etc.
So many simpler cores seems to be the answer. The downside is that parallel programming is a whole new ball game: current thinking needs a thorough working-over to reap any sort of benefit from a massively parallel system, and a badly thought-out parallel implementation can actually be slower than a well thought-out serial one, due to message-passing overheads and wait times. Similarly, if each core is simplified and slower, the current tactic of spreading well-made serial programs over multiple fat CISC cores, so that several apps can run at once, also stops working, as the simplified cores themselves won't be able to handle it.
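A quick toy illustration of the overhead point, in Python purely for brevity (not a serious benchmark, and the numbers will vary by machine): when each unit of work is tiny, the cost of shipping data between processes can easily exceed the work itself, so the parallel version loses.

import time
from multiprocessing import Pool

def tiny_task(x):
    return x * x  # far too little work to justify sending x to another process

if __name__ == "__main__":
    data = list(range(100_000))

    t0 = time.perf_counter()
    serial = [tiny_task(x) for x in data]
    t1 = time.perf_counter()

    with Pool(4) as pool:
        parallel = pool.map(tiny_task, data)  # every item is pickled, shipped and returned
    t2 = time.perf_counter()

    assert serial == parallel
    print(f"serial:   {t1 - t0:.3f}s")
    print(f"parallel: {t2 - t1:.3f}s  (often slower: the IPC cost dwarfs the work)")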
The revolution is a-comin', and it's going to require a lot of people to re-learn what they think they already know!
The main problem at the moment is with lithography techniques, as far as I'm aware. Deep ultraviolet is a major research interest for many companies, and so are soft x-rays. The trouble is that at these wavelengths you cannot use refractive optics very easily (UV) or at all (x-rays). Diffractive optics look very interesting; I was researching Zone Plate Array Lithography very recently, which looks quite promising.

The other problem is that silicon is a limited material, and we are fast running into its barriers. 45nm processes are nearly as small as silicon can realistically be taken, thus limiting the transistor count.
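For context on why the wavelength matters so much, the usual rule of thumb is the Rayleigh criterion for the smallest printable feature,

$$ CD = k_1 \frac{\lambda}{NA} $$

where $\lambda$ is the source wavelength, $NA$ the numerical aperture of the projection optics, and $k_1$ a process-dependent factor with a practical floor around 0.25. Plugging in 193nm ArF light and immersion optics at $NA \approx 1.35$ gives single-exposure features in the mid-30nm range, which is why much shorter-wavelength sources like soft x-rays attract so much research interest.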
Further to the earlier point about electron leakage: stopping chips getting too hot, i.e. with aftermarket coolers, prevents the electrons getting excited enough to jump around in ways they aren't meant to, meaning the chip will stay stable as it's asked to do more.
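There's a standard bit of device physics behind that (the generic subthreshold model, not a figure for any particular chip): off-state leakage depends exponentially on the thermal voltage $V_T = kT/q$,

$$ I_{sub} \propto e^{\frac{V_{GS} - V_{th}}{n V_T}} $$

and the threshold voltage $V_{th}$ itself drops as temperature rises, so leakage climbs roughly exponentially with temperature. Cooler silicon genuinely leaks less, which is part of why good cooling buys overclocking headroom.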
Further to the comment that "silicon is a limited material, and we are fast running into its barriers. 45nm processes are nearly as small as silicon can realistically be taken"... 32nm is definitely in development, and people are even talking about 15nm prototypes:
http://www.theinquirer.net/gb/inquirer/news/2007/12/13/nanometre-memory-tested
(NB: to pre-empt the "it's memory and not a processor" response... I think process development traditionally uses memory as one of the initial technology drivers.)
Don't forget gaming, once you factor in ray tracing.
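Ray tracing is a good example of work that carves up naturally across lots of simple cores, since every pixel's ray is independent of every other. A minimal sketch in plain Python (the names and the toy camera are mine) of the per-pixel ray-sphere test at the heart of it:

import math

def hit_sphere(origin, direction, centre, radius):
    """Return the distance along the ray to the nearest hit, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# One ray per pixel, each completely independent of the others, which is
# exactly what lets the workload spread over many cores.
width, height = 16, 8
centre, radius = (0.0, 0.0, -3.0), 1.0
for y in range(height):
    row = ""
    for x in range(width):
        dx = (x / width) * 2 - 1      # simple pinhole camera mapping
        dy = 1 - (y / height) * 2
        direction = (dx, dy, -1.0)
        row += "#" if hit_sphere((0.0, 0.0, 0.0), direction, centre, radius) else "."
    print(row)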