Will we see 6GHz CPUs?

Soldato
Joined
19 Jun 2009
Posts
3,874
Back in 1996, for my final degree project (system architecture), I wrote a superscalar pipeline simulator that could take an 8086 program (assembly) and execute it over two simulated 8086 CPUs (MIMD: Multiple Instruction, Multiple Data). It was in effect a dual-core 8086, and all clock cycles were based on real 8086 timings.

It could execute a single 8086 assembly program (which you could write yourself in a text editor) and, using out-of-order execution, would look for sections of code in the prefetch buffer that could be done out of order. When it found such sections it would execute them in parallel on the other core, thus making a single thread execute over two cores. Once a section had executed it would store the results, and when the main stream reached the point where those values were needed, it would return them immediately from cache. The main thread of the program was none the wiser, as the final program output was the same.
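
Something like this, sketched in modern Python rather than the original VB4; the (op, dest, src) instruction format and the register-overlap test here are invented purely for illustration:

Code:
# Toy version of the simulator's trick: find a section of code whose
# registers don't overlap the main stream, "execute" it on the second
# core, and cache its results for when the main stream catches up.

def regs_used(instr):
    # Register names an instruction touches (string operands only).
    return {operand for operand in instr[1:] if isinstance(operand, str)}

def is_independent(section, main_stream):
    # A section can run out of order if it shares no registers
    # with the instructions around it.
    main_regs = set().union(*(regs_used(i) for i in main_stream))
    sect_regs = set().union(*(regs_used(i) for i in section))
    return not (main_regs & sect_regs)

program = [
    ("MOV", "AX", 1),     # main stream works on AX and BX
    ("ADD", "AX", "BX"),
    ("MOV", "CX", 2),     # this section only touches CX and DX,
    ("ADD", "CX", "DX"),  # so core 2 can run it ahead of time
]

main_stream, section = program[:2], program[2:]
if is_independent(section, main_stream):
    print("core 2 executes:", section)  # results go into the result cache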

As well as parallelisation across the two cores, parallelisation also happened within the pipelines (IFETCH, IDECODE, IEXECUTE, IINTERRUPT), so one instruction could be in the IDECODE stage of the pipeline while the next was in IFETCH, and so on. Depending on the assembly program the pipeline would be pretty full; a program that didn't scale well would leave it partly empty. There were also situations where the second core was not used at all, again depending on whether a section of code could be done out of order.
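
The cycle counting is what makes the overlap obvious. A rough sketch, assuming an idealised four-stage pipeline with no stalls:

Code:
# With the pipeline full, one instruction completes per cycle after
# the initial fill; with no overlap, every instruction pays the full
# pipeline depth. (Idealised: no stalls, no bubbles.)

STAGES = ["IFETCH", "IDECODE", "IEXECUTE", "IINTERRUPT"]

def pipelined_cycles(n, depth=len(STAGES)):
    return depth + (n - 1)   # fill time, then one result per cycle

def serial_cycles(n, depth=len(STAGES)):
    return n * depth         # each instruction runs start to finish

for n in (1, 4, 16):
    print(f"{n:2d} instructions: {pipelined_cycles(n):3d} cycles "
          f"pipelined vs {serial_cycles(n):3d} serial")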

To test everything you could turn features on and off until it behaved as a basic 8086, and see variable and register results in separate windows. When the program ran it would count the clock cycles being used. You knew it was working when the clock cycles changed for different settings but the results stayed the same.

All the above was written in Visual Basic 4, and I got an A for the project. I work in software, but this was the closest I ever got to chips, lol.

I remember talking to the Systems Architecture lecturer about how the MHz ceiling was being reached and how, in the future, computers would have to go multi-CPU. We also used to talk about the law of diminishing returns: the more you try to execute in parallel, the less extra gain you get. This was 1996, remember.
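
That diminishing-returns point is what's now usually quoted as Amdahl's law. A quick Python illustration, assuming a program that is 90% parallelisable (the figure is just an example):

Code:
# Amdahl's law: with parallel fraction p spread over n cores,
# speedup = 1 / ((1 - p) + p / n). The serial fraction caps the gain.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.9  # assume 90% of the work parallelises
for n in (1, 2, 4, 8, 16, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(p, n):5.2f}x")
# Even 1024 cores stay under 10x: diminishing returns.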

When I was doing research for the project I looked into Transputers. Out-of-order execution is not new either; it first appeared on IBM mainframes in the '60s.
 
Man of Honour
Joined
13 Nov 2009
Posts
11,596
Location
Northampton
In the next two years?

We won't see it with a commercial CPU, and I doubt we will ever see it with silicon either, unless someone decides to go down the NetBurst route again. It would be interesting to see what kind of clock rate a P4 would do on 32nm.
Wasn't NetBurst highly inefficient and slow considering its clock speeds?
 
Soldato
Joined
5 Sep 2009
Posts
2,584
Location
God's own country
That's not geeky at all. That's interesting!

EDIT: In response to the OP: who needs 6GHz when extra cores will be more efficient than the raw horsepower of GHz?
I think 6GHz will not be achieved at mass-market scale on silicon, as suggested by the manufacturers' move to more cores instead of more GHz.
 
Soldato
Joined
18 Oct 2002
Posts
19,354
Location
South Manchester
I doubt it. CPU clock speeds haven't increased since NetBurst hit the wall around late 2004.

More efficient processors with specialised instructions like SSE and multiple cores are the way forward for the foreseeable future.

3.5GHz+ is possible, but the non-air cooling solutions required are not really feasible for the mass market.
 
Associate
Joined
3 Sep 2009
Posts
125
Perhaps multi-CPU boards, and non-server CPUs allowing for this configuration, will become the norm for home user PCs, as opposed to the high-end workstations where this is the case now.
 
Soldato
Joined
18 Oct 2002
Posts
19,354
Location
South Manchester
No need for multi-CPU boards when you have multi-core chips. It's the same end result with a lot less hassle. Multiple CPUs need matching steppings, ideally from the same batch. It's far easier to pop out your Core 2 Duo and pop a Core 2 Quad in instead.

Intel have also removed multiple-CPU support from the desktop chipsets. The last desktop chipset with multi-CPU support that I'm aware of was the 440BX, which spawned great products like the Abit BP6 and the Asus P2B series ... over 10 years ago.
 
Soldato
Joined
26 Jun 2009
Posts
3,023
Location
Sheffield
Multiple cores are all very well, but the fact is a lot of software isn't very good at utilising multiple cores, so raw clock speed still has a large part to play if you ask me.

I reckon there will be a mixture of both; in a couple of years, 5-6GHz with 8 or 12 cores will be pretty normal if you ask me.
 
Associate
Joined
20 Aug 2009
Posts
1,192
Location
Local to someone
We are seeing 6GHz+ on Phenom IIs and most of the Intel range if you take LN2 into account.

But these chips burn out very quickly due to the volts being run through them, don't they? LN2 is hardly an everyday coolant. I'm with the board here: better software efficiency rather than brute force is what's required, imo.
 
Associate
Joined
21 Apr 2008
Posts
1,536
Location
Manchester
I wonder if an OS could be coded to use a multi-core CPU as a single-core CPU, so that when you boot into Windows or Mac OS and launch a program that only uses one thread, that thread can be spread across every core.
 
Soldato
Joined
18 Oct 2002
Posts
19,354
Location
South Manchester
I wonder if an OS could be coded to use a multi-core CPU as a single-core CPU, so that when you boot into Windows or Mac OS and launch a program that only uses one thread, that thread can be spread across every core.

That's a retarded idea.

Modern OSes dynamically allocate threads across the cores as it is. Given that a typical Windows box has at least 50+ processes running concurrently, many with multiple threads, there's plenty of scope for the OS to spread the load across multiple processing units.
 
Associate
Joined
16 Jan 2005
Posts
641
Location
Laaaandan
MagicBoy said:
That's a retarded idea.

Modern OSes dynamically allocate threads across the cores as it is. Given that a typical Windows box has at least 50+ processes running concurrently, many with multiple threads, there's plenty of scope for the OS to spread the load across multiple processing units.
That's not a retarded idea at all. And sure, while operating systems spread threads across multiple processors well, they do not spread the work itself, since that currently has to be done by the programmer. This is why you can easily get one hard-working thread and many near-idle threads, which does not yield a balanced load across the CPU.
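
Roughly what I mean, as a Python sketch; the work function and the four-way split are invented stand-ins (and in CPython the GIL limits true CPU parallelism, so real code would use processes, but the point is who divides the work):

Code:
from concurrent.futures import ThreadPoolExecutor

def crunch(chunk):
    return sum(x * x for x in chunk)   # stand-in for real work

data = list(range(1_000_000))

# One hard-working thread: the OS can schedule it on any core,
# but it cannot split the loop up for you.
total = crunch(data)

# The same work divided by the programmer into four chunks,
# which the OS can now genuinely run side by side.
chunks = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total_parallel = sum(pool.map(crunch, chunks))

assert total == total_parallel   # same answer either way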

Phatzy said:
Although it's a good idea, the problem is that the general CPU design 'philosophy' (along with normal programming in general) is very different from a model compatible with your idea. Much work is being put into this though (check out some of last year's proceedings at isscc.org); for instance, AMD's FPU sharing in Bulldozer does something a bit like this.
 
Soldato
Joined
18 Oct 2002
Posts
19,354
Location
South Manchester
That's not a retarded idea at all. And sure, while operating systems spread threads across multiple processors well, they do not spread the work itself, since that currently has to be done by the programmer. This is why you can easily get one hard-working thread and many near-idle threads, which does not yield a balanced load across the CPU.

Absolutely, I'm not disputing that. Given that in x86 land we've had multi-CPU systems and SMP support in Windows for 15+ years, I was trying to keep it high level, so I purposely deleted something very similar before clicking post.

It's still retarded tho.
 
Soldato
Joined
19 Jun 2009
Posts
3,874
On topic.

The .NET Framework 4.0 has native support for data parallelism. As far as I can tell it follows a SIMD model (Single Instruction, Multiple Data). The parallelism everyone is talking about in this thread is MIMD (multiple x86 instruction streams).

BTW, SIMD is relatively easy to do in parallel and is not new (the 1977 Star Wars special effects used a Cray supercomputer), and your graphics card is also based on SIMD, hence why it has so many cores. MIMD is very difficult to spread across cores.
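
A rough Python illustration of the difference, using NumPy's vectorised operations as a stand-in for SIMD hardware and two threads as a stand-in for MIMD streams (the names are invented for illustration):

Code:
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# SIMD-style: one operation applied across many data elements at once.
a = np.arange(8)
doubled = a * 2   # a single vectorised "instruction", eight elements

# MIMD-style: independent instruction streams, each its own program.
def stream_one():
    return sum(range(100))

def stream_two():
    return max(x % 7 for x in range(100))

with ThreadPoolExecutor(max_workers=2) as pool:
    r1 = pool.submit(stream_one)
    r2 = pool.submit(stream_two)
    print(doubled, r1.result(), r2.result())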

Question: does anyone know how AMD is getting threads working across multiple cores (Bulldozer)? I'm very interested to know how they are doing it, but I could not find any technical article.
 
Soldato
Joined
19 Jun 2009
Posts
3,874
That's a retarded idea.

Modern OSes dynamically allocate threads across the cores as it is. Given that a typical Windows box has at least 50+ processes running concurrently, many with multiple threads, there's plenty of scope for the OS to spread the load across multiple processing units.

The British already invented a system for doing this: the transputer, basically separate CPU modules chained together. The problem is that it required a custom programming language called OCCAM. It failed because the language was very specialised and you had to program it for certain tasks.

You do highlight a good point: modern OSes are Windows-based (and built around the x86 model), yet in a perfect world parallel computing requires a different model. For this reason the solution in the short to medium term will be firmware on the individual CPUs, with very little involvement from the OS. Also, keeping OSes backwards compatible with older CPUs (at least anything up to the i7, maybe beyond) will push the effort in this area onto the firmware of new CPUs. The transputer model was done as one design (hardware / OS / language) with no backwards compatibility required, unlike PCs where almost everything is backwards compatible, and this hinders new design.

Transputer link
http://en.wikipedia.org/wiki/Transputer
 