
Will we see 6GHz CPUs?

Associate
Joined
16 Jul 2010
Posts
154
Location
London
On topic.

The .NET Framework 4.0 has native support for data parallelism. As far as I can tell it follows a SIMD model (Single Instruction, Multiple Data). The parallelism everyone is talking about in this thread is MIMD (general x86 instruction streams).

BTW, SIMD is relatively easy to do in parallel and is not new (the Cray supercomputer used for the 1977 Star Wars special effects, for example). Your graphics card is also based on SIMD, hence why it has so many cores. MIMD is very difficult to spread across cores (quick sketch below).
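To make the SIMD/MIMD distinction concrete, here is a small C++ sketch of my own (the function names are invented for illustration): the loop is the SIMD shape a compiler can map onto vector units, while the two threads are separate MIMD instruction streams.

    #include <cmath>
    #include <cstddef>
    #include <thread>
    #include <vector>

    // SIMD shape: one operation applied across many data elements.
    // A compiler can map this loop onto SSE/AVX vector units.
    void scale_all(std::vector<float>& v, float k) {
        for (std::size_t i = 0; i < v.size(); ++i)
            v[i] *= k;                    // same instruction, different data
    }

    // MIMD shape: two independent instruction streams, one per core.
    int main() {
        std::vector<float> a(1024, 1.0f), b(1024, 2.0f);
        std::thread t1([&] { scale_all(a, 3.0f); });                  // stream 1
        std::thread t2([&] { for (auto& x : b) x = std::sqrt(x); });  // stream 2
        t1.join();
        t2.join();
    }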

Question: does anyone know how AMD is getting single threads working across multiple cores (Bulldozer)? I'm very interested to know how they are doing it, but could not find any technical article.

Very good point, I was thinking along similar lines. It seems more of a software programming constraint than a hardware inability to perform in parallel.
 
Associate
Joined
29 Mar 2010
Posts
831
I wonder if an OS can be coded to use a multi-core CPU as a single-core CPU, so that when you boot into Windows or Apple's OS and launch a program that only uses one thread, that thread can be spread across every core.

Apple's Grand Central Dispatch claims to do something along these lines. The software has to be written with support for it, but the results in some applications are supposed to be very impressive (I haven't looked at it myself, that is just what I have heard, so take it with a pinch of salt).

EDIT: just read the article Vader posted - dedicating a real CPU to a single application can already be done in Linux using cgroups, but it is a bit involved. I would definitely like to see this in other OSes. It might be a bit premature to think we could give a physical core to EVERY application, at least in the near future.
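For anyone curious, restricting a single process to a chosen core can also be done on Linux without cgroups, via the glibc sched_setaffinity call. A rough C++ sketch, assuming Linux/glibc (the choice of core 2 is arbitrary):

    #include <sched.h>   // sched_setaffinity, CPU_ZERO, CPU_SET (Linux/glibc)
    #include <cstdio>

    int main() {
        cpu_set_t mask;
        CPU_ZERO(&mask);
        CPU_SET(2, &mask);   // allow only core 2 (arbitrary pick)

        // pid 0 means "the calling process"
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            std::perror("sched_setaffinity");
            return 1;
        }
        std::printf("now restricted to core 2\n");
        // ... the application's work runs here, kept on core 2 by the scheduler
    }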
 
Soldato
Joined
11 Oct 2009
Posts
16,591
Location
Greater London
I believe architecture will be the next big thing; we're hitting a wall in manufacturing, and I can't imagine anything past 11nm on current silicon. I can see 5GHz being more realistic, possibly further down the road, in 20 years' time or so. We can't keep adding cores either, or we'll just end up with very large chips, and that could cause problems.
It sounds interesting having a single thread spread across the CPU... but it sounds painful to code the software to do it :p
 
Soldato
Joined
6 Aug 2009
Posts
7,071
I wouldn't worry too much about multiple cores and parallelism taking off any time soon. I remember buying a 64-bit CPU a long time ago, and only now are we really starting to see mainstream software catching up. Mainstream is where the money is. Until the masses are mainly using PCs with lots of cores, and there is a commercial advantage to producing faster, more efficient software, we'll be left with hardware that's not being used to its full potential.

That's not to say I wouldn't like to see or buy a 6GHz, 6+ core CPU; I just don't see the need, or the software, appearing any time soon...
 
Associate
Joined
14 Feb 2010
Posts
135
The move towards effective multi-core application processing will not be linear. Once you can code for two cores, with that breakthrough out of the way, four isn't so hard, then six, and so on. When you have apps that can make proper use of multiple cores, I expect that sheer MHz will become an old-fashioned idea very quickly.
 
Soldato
Joined
6 Aug 2009
Posts
7,071
The move towards effective multi-core application processing will not be linear. Once you can code for two cores, with that breakthrough out of the way, four isn't so hard, then six, and so on. When you have apps that can make proper use of multiple cores, I expect that sheer MHz will become an old-fashioned idea very quickly.

You may be right in the shorter term; in the long run I have my eye on graphene. We could well be laughing at 6GHz then. We're due a jump sooner or later - look at how storage has moved on. Who would have expected 3TB drives ten years ago?
 
Soldato
Joined
19 Jun 2009
Posts
3,874
The move towards effective multi-core application processing will not be linear. Once you can code for two cores, with that breakthrough out of the way, four isn't so hard, then six, and so on. When you have apps that can make proper use of multiple cores, I expect that sheer MHz will become an old-fashioned idea very quickly.

The issue here is making a MIMD thread (Multiple Instruction, Multiple Data) execute over multiple CPU cores. That is very difficult to achieve, especially where you're dealing with an architecture based on non-shared CPU registers.

One way of achieving this is 'out of order execution', where a scheduler looks for batches of instructions that can be processed independently of any instructions prior to them. These batches are executed out of order and the results buffered, ready to be used at the point where those instructions would normally have been executed.

The problem with that technique is that the further ahead you look for instructions to execute in parallel, the lower the chance of finding independent batches of instructions becomes (toy example below).
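A toy C++ illustration of that dependency problem (my example, not from any real scheduler): chain() forces each step to wait on the previous result, so there is nothing independent to overlap, while split() keeps two independent chains the hardware can execute in parallel. The two functions deliberately compute different things; only the dependency structure matters here.

    #include <cstdio>

    long chain(long n) {
        long acc = 0;
        for (long i = 0; i < n; ++i)
            acc = acc * 3 + i;    // every iteration depends on the previous one
        return acc;
    }

    long split(long n) {
        long a = 0, b = 0;        // two independent dependency chains
        for (long i = 0; i < n; i += 2) {
            a = a * 3 + i;        // these two updates don't depend on
            b = b * 3 + (i + 1);  // each other, so they can overlap
        }
        return a ^ b;
    }

    int main() {
        std::printf("%ld %ld\n", chain(1 << 20), split(1 << 20));
    }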

Because of this, diminishing returns set in sharply as extra processor cores are added. I expect they will only achieve 2-4 processor cores working efficiently, at least at first, on an x86-style architecture.

If I were a betting man, I'd reckon Bulldozer will use a form of 'out of order execution' combined with the traditional multi-core processing we are currently using. I expect the firmware on the chip will switch the CPU cores between the two modes depending on the utilisation of the threads being executed.
 
Associate
Joined
14 Feb 2010
Posts
135
One way of achieving this is 'out of order execution', where a scheduler looks for batches of instructions that can be processed independently of any instructions prior to them. These batches are executed out of order and the results buffered, ready to be used at the point where those instructions would normally have been executed.

Ah OK, I was thinking more in terms of future application development rather than the native hardware facilitation you describe. I.e., in regular application programming it is common to have a "main" thread spawn lots of "worker" threads to action specific tasks; the OS then allocates each worker thread to the most available core, making better use of a multi-core environment (see the sketch below). It certainly adds an extra layer of complexity to application design, but it's still something I expect to improve in the not-too-distant future.
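A minimal C++ sketch of that main/worker pattern (my own illustration; the parallel sum is just a stand-in task):

    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <thread>
    #include <vector>

    int main() {
        // one worker per hardware thread; fall back to 1 if unknown
        const unsigned workers =
            std::max(1u, std::thread::hardware_concurrency());

        std::vector<int>  data(1000000, 1);   // stand-in workload
        std::vector<long> partial(workers, 0);
        std::vector<std::thread> pool;

        // the main thread spawns the workers; the OS decides which
        // core each one lands on
        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&, w] {
                std::size_t begin = data.size() * w / workers;
                std::size_t end   = data.size() * (w + 1) / workers;
                partial[w] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0L);
            });
        }
        for (auto& t : pool) t.join();

        std::printf("sum = %ld\n",
                    std::accumulate(partial.begin(), partial.end(), 0L));
    }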
 
Soldato
Joined
19 Jun 2009
Posts
3,874
Ah OK, I was thinking more in terms of future application development rather than the native hardware facilitation you describe. I.e., in regular application programming it is common to have a "main" thread spawn lots of "worker" threads to action specific tasks; the OS then allocates each worker thread to the most available core, making better use of a multi-core environment. It certainly adds an extra layer of complexity to application design, but it's still something I expect to improve in the not-too-distant future.

Yes, you're correct, that's how parallelisation is typically done in applications. The problem is that you still need separate sections of code you can allocate to threads.

For example:

An email gateway that's reading emails and applying some NLP (Natural Language Processing) logic to the email text. You can have a separate thread receive/process each email. However, if there are typically only 8 emails at a time to receive/process, then any more than 8 CPU cores would be useless (rough sketch below).
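A minimal C++ sketch of that thread-per-email pattern (my illustration; process_email is a made-up stand-in for the NLP step, not anything from a real gateway):

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <thread>
    #include <vector>

    // stand-in for the NLP logic applied to one message
    void process_email(const std::string& body) {
        std::printf("processed %zu bytes\n", body.size());
    }

    int main() {
        std::vector<std::string> batch(8, "hello world");  // 8 emails in flight

        std::vector<std::thread> workers;
        for (const auto& email : batch)              // one thread per email
            workers.emplace_back(process_email, std::cref(email));
        for (auto& t : workers) t.join();
        // with only 8 messages, a 16-core box leaves 8 cores idle here
    }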

The .NET 4 framework does have support for multi-core programming; however, it's SIMD (Single Instruction, Multiple Data) vector-style processing, not MIMD. MIMD over multiple cores is the next big goal.
 
Associate
Joined
13 Nov 2009
Posts
35
Lots of interesting comments here, you guys are really knowledgeable.

I found multithreaded programming very hard, even using libraries intended to help (TBB / Boost). But then, I am no professional.

Here's how I look at multithreading and its limitations; please correct me if I'm wrong.

You can chop a movie into 100 one-minute pieces and encode each piece in its own thread; the individual parts won't be interdependent (or hardly, anyway). On the other hand, sorting a big list (e.g. a scrambled phone book) chopped into 100 pieces would require an insane amount of communication between threads, with nasty locking required when threads need to access each other's data, causing really hard-to-find bugs if done wrong. There may be lock-less solutions for this particular problem, but you get the point (rough sketch below).
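A rough C++ sketch of the phone-book point (mine, and heavily simplified - real parallel sorts are far subtler): the two halves sort independently without any locks, but the results still have to be combined, which is where the inter-thread communication cost appears.

    #include <algorithm>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
        std::vector<int> book = {9, 3, 7, 1, 8, 2, 6, 4, 5, 0};
        auto mid = book.begin() + book.size() / 2;

        // independent halves: no locking needed while each thread
        // owns its own range exclusively
        std::thread left ([&] { std::sort(book.begin(), mid); });
        std::thread right([&] { std::sort(mid, book.end());   });
        left.join();
        right.join();

        // the threads' results must now be combined: this is the
        // communication step the encoding example doesn't have
        std::inplace_merge(book.begin(), mid, book.end());

        for (int x : book) std::printf("%d ", x);
        std::printf("\n");
    }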

I found it an unnatural way of programming; sequential execution is part of how I learnt what I know, so.......


The .NET 4 framework does have support for multi-core programming; however, it's SIMD (Single Instruction, Multiple Data) vector-style processing, not MIMD. MIMD over multiple cores is the next big goal.


.....would seem the way forward to me. I hadn't heard of it before reading this thread, but it looks like just the thing for me.
 
Associate
Joined
9 Jul 2010
Posts
568
Well, I would rather have "raw speed", whether that comes through higher clocks or more cores. Intel seem to be going down the more-cores route, and as previously said, the clock race between AMD and Intel has been over since the P4 days.
 
Man of Honour
Joined
4 Jul 2008
Posts
26,418
Location
(''\(';.;')/'')
Depending on the way you look at it, we already have them. If you take a 3GHz P4, for example, and compare it with an i7 with one core enabled, the i7 would be over twice as fast. Technically 6GHz then? :p:p

In raw numbers, however, I think they will just increase efficiency and core counts rather than frequency.
 
Soldato
Joined
1 Mar 2010
Posts
14,373
Location
5 degrees starboard
For example, the human brain, or that of any sentient animal.

It is not the speed of the data (the base frequency); it is the number of links, and the efficiency between them, that makes some animals (and humans) brighter than others.

andy.
 
Associate
Joined
4 Mar 2010
Posts
32
Is it not the fact that as the GHz increase so does the temperature, and they will get to a point where the silicon can't handle the heat? That is why they have gone down the route of more cores. But there will be a maximum number of cores and GHz the silicon can handle too, so until they come up with a completely new design I doubt you will see massive improvements in clock speed, and there will be a cap on core counts. Intel and AMD probably already know what those limits are.
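A side note on why clocks and heat are tied together (the standard first-order CMOS dynamic power relation, a textbook figure rather than anything from this thread):

    P_dynamic ≈ C · V² · f

where C is the switched capacitance, V the supply voltage and f the clock frequency. Raising f usually also requires raising V, so power (and therefore heat) grows much faster than linearly with clock speed, while extra cores at a fixed clock add power only roughly linearly.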
 
Soldato
Joined
11 Apr 2008
Posts
3,907
Location
Sheffield
6GHz in 2 years? Probably not, but I'd easily say that in the next year or so we'll see 4-6 cores with stock speeds of 4-4.5GHz.

Even the old Phenom IIs have already had a refresh with the 970BE at 3.5GHz stock, and this could probably have been easily pushed to 3.7GHz.

Compare that to the first dual/quad cores, which were around the 2GHz mark.

Sure, the number of cores is increasing, but so are the clock speeds.

What was considered a highly overclocked dual/quad 2-3 years ago is below today's stock speeds.

As I've said earlier: a 4.5GHz+ stock 4-6 core Sandy Bridge or Bulldozer, probably around late 2011/early 2012, easily.
 