
Multicore Is Bad News For Supercomputers

How about a function built into chips, or programmed into an OS, that shifts which cores threads are processed on?
For example, let's pretend you have a quad core and 100 background processes running.
Something shifts all of those processes onto core 3, leaving cores 0, 1 and 2 free to play Crysis?

That is VASTLY oversimplified, but you get the general idea. Could it work well on 80-core machines?
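For what it's worth, you can already do something like this by hand with CPU affinity. A minimal sketch, assuming a Linux box and gcc, that pins the current (background) process onto core 3 so cores 0-2 stay free:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(3, &set);   /* allow this process to run on core 3 only */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d is now confined to core 3\n", getpid());
    return 0;
}
```

You'd have to run something like that for every background process (or use taskset), which is exactly the sort of housekeeping you'd want the chip or the OS to do automatically.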
 
Some maybe, but not the majority; there are many algorithms that are simply serial in nature. The best we can do for most things is split the work into as many separate data streams/algorithms as possible and run each of those serial streams on a separate core, but then how many streams/algorithms are there actually to process for any one thing?

80? I doubt it. Even 4 is possibly overkill in terms of threads that can actually fully utilise a modern-day x86 core in anything other than a few select tasks.

Intel, in a research paper published six months or so ago, reckon that the majority of tasks can be re-written to be parallel in nature.
 
Natima, that's called a scheduler, and it's in most OSes. Although you've got 100 threads, only a handful are actually in a 'runnable' state at any one time (Firefox, for example: if you don't click the mouse anywhere or type, it's placed into a waiting state and does nothing).

As a programmer I don't see how. The majority of algorithms depend on one calculation's result for the next calculation, and there is no way to make that parallel; the closest you can get is to forward the data as soon as it is calculated rather than waiting for it to be written back to the register bank.
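A trivial sketch of the difference in C (the numbers are picked purely for illustration):

```c
#include <stdio.h>

#define N 1000000

static double y[N];

int main(void) {
    /* Loop-carried dependency: each step needs the previous step's
       result, so the iterations have to run one after another on a
       single core. */
    double x = 0.5;
    for (int i = 0; i < N; i++)
        x = 4.0 * x * (1.0 - x);          /* x_{i+1} = f(x_i) */

    /* No dependency between iterations: every y[i] could be computed
       on a different core, since no element reads any other. */
    for (int i = 0; i < N; i++)
        y[i] = (double)i * (double)i;

    printf("chained result: %f, last independent value: %f\n", x, y[N - 1]);
    return 0;
}
```

The second loop parallelises trivially; the first one doesn't, no matter how many cores you throw at it.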

Did they specifically mean 'tasks' as in individual algorithms, or tasks as in 'playing a game' (where there are several different tasks going on and the work can be split across multiple cores)? If it's the latter, then they are just restating Amdahl's Law, claiming that the parallelisable fraction of code is higher than Amdahl thought.
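For reference, Amdahl's Law says that if a fraction p of the work is parallelisable and you have n cores, the best overall speedup is 1 / ((1 - p) + p/n). A quick sketch plugging in the core counts from this thread (the p values are made up, just to show the shape of the curve):

```c
#include <stdio.h>

/* Amdahl's Law: the serial fraction (1 - p) limits the overall speedup
   no matter how many cores n you add. */
static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void) {
    printf("p = 0.90: 4 cores -> %.2fx, 80 cores -> %.2fx\n",
           amdahl(0.90, 4), amdahl(0.90, 80));
    printf("p = 0.99: 4 cores -> %.2fx, 80 cores -> %.2fx\n",
           amdahl(0.99, 4), amdahl(0.99, 80));
    return 0;
}
```

Even code that is 90% parallel tops out at roughly 9x on 80 cores, which is why the parallelisable fraction matters far more than the core count.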

Intel are biased, of course, but then they could be right. We'll see.
 
Yes, but plenty of code that is serial right now can be re-written as parallel given enough skill/time.

This is the only really relevant thing: a lot of the top developers in the industry, across gaming, encoding and every other area, simply believe there isn't enough skill available. If, say, only 0.5% of coders are actually capable of writing code good enough to make things highly parallel, they will be so sought after and so highly paid that it will simply stop being worth employing them.

There's probably quite a lot of latitude in using more complex maths to do things differently: approaches that might in fact take more clock cycles and use more power overall, but where the work can be spread across more cores. Is it better to have one thread that is as efficient as possible, using the simplest and quickest code, or to use slower, more complex code that can run on more cores and so finishes quicker overall?

I'm trying to think of a good example but nothing springs to mind.
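Actually, one commonly cited case of the trade-off is a parallel prefix sum (scan): done in two passes it performs roughly twice the additions of the plain serial scan, but nearly all of that work can be spread across every core. A rough OpenMP sketch (assumes gcc -fopenmp; the array size is arbitrary):

```c
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 16000000

int main(void) {
    double *x = malloc(N * sizeof *x);
    for (long i = 0; i < N; i++) x[i] = 1.0;

    int p = omp_get_max_threads();
    double *chunk_total = calloc(p + 1, sizeof *chunk_total);

    /* Pass 1: each thread scans its own chunk independently (~N adds). */
    #pragma omp parallel num_threads(p)
    {
        int t = omp_get_thread_num();
        long lo = (long)t * N / p, hi = (long)(t + 1) * N / p;
        double run = 0.0;
        for (long i = lo; i < hi; i++) { run += x[i]; x[i] = run; }
        chunk_total[t + 1] = run;
    }

    /* Tiny serial step: prefix-sum the per-chunk totals (p adds). */
    for (int t = 1; t <= p; t++) chunk_total[t] += chunk_total[t - 1];

    /* Pass 2: each thread adds its chunk's starting offset (~N more adds). */
    #pragma omp parallel num_threads(p)
    {
        int t = omp_get_thread_num();
        long lo = (long)t * N / p, hi = (long)(t + 1) * N / p;
        for (long i = lo; i < hi; i++) x[i] += chunk_total[t];
    }

    printf("x[N-1] = %.0f (expected %d)\n", x[N - 1], N);
    free(x);
    free(chunk_total);
    return 0;
}
```

Roughly 2N additions instead of N, so the single-threaded version is 'better' code by the old measure, but the parallel one finishes far sooner once you have the cores to spare.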
 
Intel were talking about general applications (be it CAD, games, etc.). I'll try to track down the paper later if I can.

And perhaps drunkenmaster is correct regarding the skill required. Having said that, I'd hope that whoever is developing software right now for these companies is a capable programmer and is capable of learning a new skill - but given the poor quality of some recent releases, I do have my doubts.
 
Intel's new QuickPath Interconnect (QPI) system - when it fully gets announced - will give memory performance one hell of a boost.
 
So the problem is that the chips used today in supercomputers aren't actually designed for supercomputers. That hasn't mattered so far, because they can be made to work by deploying them within a supercomputer architecture with beefy memory buses. However, if the chip manufacturers go a step further and lump all the cores together, the supercomputer guys can no longer deploy their beefy buses and are left with a more mundane system architecture.

The solution seems to be either to design processors specifically for supercomputer applications - though that low-volume market probably wouldn't be economic - or to recast the computational tasks so they are better suited to the available multi-core architecture.
 
I think Apple's Grand Central has some promise, like an all-powerful task manager for multicore processors. And the OpenCL specification should give computers a nice boost for a few years.
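For anyone who hasn't looked at it, the Grand Central model is basically 'hand small blocks of work to a queue and let the OS decide which cores run them'. A minimal sketch in C with libdispatch (macOS with clang; the block syntax needs Apple's blocks extension):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    /* A system-wide concurrent queue; GCD maps its work onto however
       many cores the machine actually has. */
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t g = dispatch_group_create();

    for (int i = 0; i < 8; i++) {
        dispatch_group_async(g, q, ^{
            printf("work item %d running on some core\n", i);
        });
    }

    /* Wait for all eight work items to finish. */
    dispatch_group_wait(g, DISPATCH_TIME_FOREVER);
    return 0;
}
```

The nice part is that the programmer only describes the chunks of work; how they get spread over 2, 4 or 80 cores is the runtime's problem.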

But to address the bigger picture I think we need a new way of thinking, e.g. replace x86 and start fresh - carbon processors, anyone? Well, I don't know, but there must be a way to solve the parallel computing problem and the memory limits.

XD-3
 
But to address the bigger picture I think we need a new way of thinking, e.g. replace x86 and start fresh - carbon processors, anyone?

Can that fresh start be the Cell?

All we need is for MS to actually port Windows to it :(
 