New algorithms to make parallel programming easy

According to DailyTech, the researchers have put together a prototype system made up of 64 parallel processors that they claim is 100 times faster than current desktop PCs. Citing the example of hiring 100 cleaners to clean a house in three minutes instead of hiring one cleaner to clean the same house in 300 minutes, project lead Uzi Vishkin explains, "The 'software' challenge is: Can you manage all the different tasks and workers so that the job is completed in 3 minutes instead of 300?" He goes on to say, "Our algorithms make that feasible for general-purpose computing tasks for the first time." Vishkin has been working on those algorithms since 1979 and began building prototype hardware to test them in 1997. DailyTech says he finally completed the prototype in December 2006.
 
That many cleaners in one house would just get in each other's way. I don't buy it.
 
This problem comes up again and again.

It's not that it can't be done; it's that the "distributability" of a task depends on the problem domain, and each and every problem domain is different.

If you have large independent datasets, you can massively parallelise the processing, but if parts of the computation depend on the results of other parts, you cannot proceed until those dependencies are complete.
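A minimal sketch of that difference in Go (the names and data are purely illustrative): summing independent chunks parallelises trivially, while a naive prefix sum, where each element depends on the previous one, gains nothing from extra workers.

package main

import (
	"fmt"
	"sync"
)

// Independent datasets: each chunk can be summed by its own goroutine,
// since no chunk needs any other chunk's result.
func parallelSums(chunks [][]int) []int {
	results := make([]int, len(chunks))
	var wg sync.WaitGroup
	for i, chunk := range chunks {
		wg.Add(1)
		go func(i int, chunk []int) {
			defer wg.Done()
			for _, v := range chunk {
				results[i] += v
			}
		}(i, chunk)
	}
	wg.Wait()
	return results
}

// Dependent computation: element i needs element i-1 first, so this
// naive formulation stays serial no matter how many workers you have.
func prefixSums(xs []int) []int {
	out := make([]int, len(xs))
	for i, v := range xs {
		out[i] = v
		if i > 0 {
			out[i] += out[i-1]
		}
	}
	return out
}

func main() {
	fmt.Println(parallelSums([][]int{{1, 2}, {3, 4}, {5, 6}})) // [3 7 11]
	fmt.Println(prefixSums([]int{1, 2, 3, 4}))                 // [1 3 6 10]
}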

Most distribution research is about analysing the problem domain, solving synchronous and asynchronous data-management issues, and load balancing to optimize performance.
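On the load-balancing point, one common approach is a shared work queue. A rough Go sketch (worker count and task costs invented for illustration): idle workers pull the next task from a channel, so uneven tasks spread themselves across workers instead of being pre-assigned in fixed shares.

package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	tasks := make(chan time.Duration)
	var wg sync.WaitGroup
	for w := 0; w < 4; w++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Each worker keeps pulling tasks until the channel closes,
			// so a fast worker naturally takes on more tasks.
			for cost := range tasks {
				time.Sleep(cost) // stand-in for real work
				fmt.Printf("worker %d finished a %v task\n", id, cost)
			}
		}(w)
	}
	// Deliberately uneven task sizes (in milliseconds).
	for _, n := range []int{50, 10, 40, 10, 10, 30} {
		tasks <- time.Duration(n) * time.Millisecond
	}
	close(tasks)
	wg.Wait()
}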
 
Yup, the problem really is the data access and the dependencies.

The deeper you get into it, the less sense C, or any serial programming language, makes. SQL is a good example of the set-based approach you need to take to optimise easily.
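To make the set-based point concrete, here is a small Go sketch (function and data names are made up): a pure per-element operation declared over the whole set can run in any order, so splitting it across cores is trivial, much like a database engine parallelising a set-based UPDATE behind the scenes.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

// parallelMap applies a pure function to every element of a slice,
// partitioning the work across one goroutine per CPU. Because f has
// no ordering dependencies, the split is safe and mechanical.
func parallelMap(xs []float64, f func(float64) float64) []float64 {
	out := make([]float64, len(xs))
	workers := runtime.NumCPU()
	chunk := (len(xs) + workers - 1) / workers
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		lo := w * chunk
		hi := lo + chunk
		if hi > len(xs) {
			hi = len(xs)
		}
		if lo >= hi {
			continue
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				out[i] = f(xs[i])
			}
		}(lo, hi)
	}
	wg.Wait()
	return out
}

func main() {
	prices := []float64{10, 20, 30, 40}
	// The set-based equivalent of "UPDATE t SET price = price * 1.1".
	fmt.Println(parallelMap(prices, func(p float64) float64 { return p * 1.1 }))
}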
 