Will 1 sec 1 Million Super Pi ever be achieved?

Soldato
Joined
26 Jun 2009
Posts
3,023
Location
Sheffield
Single threaded performance is indeed very important, if not more important than multithreaded performance. I'd rather have 1x8GHz than 4x4GHz any day.

You're right! That's why all the chip manufacturers are moving away from multi core setups and back towards single cores, because it gives more power!

Wait, wait, hang on....
 
Associate
Joined
9 Jan 2006
Posts
1,375
To be fair to the guy (and given he may have been a bit loose with his maths), if they could have just upped clock frequencies indefinitely, I think they would have done. I'd rather have a single 16GHz core than four at 4GHz each. And I know coders would have preferred it.
 
Caporegime
Joined
12 Mar 2004
Posts
29,913
Location
England
You're right! That's why all the chip manufacturers are moving away from multi core setups and back towards single cores, because it gives more power!

Wait, wait, hang on....

What on earth are you on about?

Chip manufacturers moved to multi core because of the inability to keep increasing clock speeds, i.e. the performance of a single core. Single threaded applications gain nothing from the extra cores, and many pieces of software cannot be multi threaded well, so a higher clocked single core CPU can perform much faster than a multi core CPU with a higher total clock speed, e.g. 1x8GHz vs 4x4GHz, as per Amdahl's law.
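Amdahl's law is easy to put into numbers. A minimal Python sketch (the 50% parallel fraction below is an assumed example, not a figure from the thread):

```python
def amdahl_speedup(p, n):
    # Amdahl's law: a program whose runtime is a fraction p
    # parallelisable speeds up by at most 1 / ((1 - p) + p / n)
    # on n cores, because the serial part (1 - p) never shrinks.
    return 1.0 / ((1.0 - p) + p / n)

# Assume a program that is only 50% parallelisable:
four_cores = amdahl_speedup(0.5, 4)   # about 1.6x
# A single core at double the clock speeds up *everything*,
# serial part included, so it wins here with a clean 2.0x.
doubled_clock = 2.0
```

So for poorly threaded code the 1x8GHz side of the argument holds on paper; whether such a core can actually be built is a separate question.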
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
Actually a 4x4GHz CPU is likely to be significantly faster than a 16GHz single core in most situations. Firstly you have four times the cache and four times the branch prediction/decoding/scheduling hardware. Not every thread fills a core's issue width either: say they were all 4-issue cores, then on a 4GHz quad one thread might only manage 1 issue per clock while three others use all 4, whereas the single core would be massively slower crawling through that 1-issue thread, and every misprediction or thread stall and switch would cost it a heck of a long time.

The simple fact is, the only things that require a HUGE amount of processing power can (almost) all benefit from multiple threads.

Quite simply, since we went multi core, Windows "hangs" less. When you tried to do multiple things, a single CPU would be swapping threads like crazy and you literally had to wait while things stuttered; with multi core chips you pretty much always have spare capacity, so opening a new program can be handled on an unloaded core and Windows is just far smoother.

I can remember programs would often crash or screw up and start hogging 100% CPU. On a single core, single thread machine that often meant those irritating situations where you move the mouse and it takes a while to update its location, and trying to use Task Manager to kill the offender was painful. These days you've got three other cores and no issues; when something goes wrong it rarely affects the usability of the computer.

Then there's the fairly simple situation that we can't build a single 16GHz core, and we can build a 4x4GHz one.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
What on earth are you on about?

Chip manufacturers moved to multi core because of the inability to keep increasing clock speeds, i.e. the performance of a single core. Single threaded applications gain nothing from the extra cores, and many pieces of software cannot be multi threaded well, so a higher clocked single core CPU can perform much faster than a multi core CPU with a higher total clock speed, e.g. 1x8GHz vs 4x4GHz, as per Amdahl's law.

Again, not really. Look at SuperPi on the Core 2 architecture: it's a 4-issue core, wider than their older cores, and SuperPi fills it, as do a few other benchmarks, but 99% of real world applications DO NOT. It's inefficient: most instructions are broken down into VERY simple pieces, because that is how it has to work. You could build a much faster single core that was 8 issues wide, but you'd just be increasing the unused part of the core, as your application still only averages 2 issues per clock; whether it's a 2-issue core or an 85-issue core, the only difference is increasing inefficiency.

Eight 2-issue cores will, in most situations, likely end up FAR more efficient than four 4-issue cores.

Clock speed wasn't really the big deal; single cores got as complex as they needed to be, and the main thing required was more cores for the other threads.

Almost no user in the world runs a single thread and wants 59GHz of performance out of it; it's pointless, inefficient and very difficult to do. Windows constantly does stuff in the background: at the moment I have a football stream going, Firefox, IE, Windows Explorer, a game, a downloading application, and the stream output to a VLC window.

Windows is smoother with these things spread across many threads on different cores than with all of them on one core.

As has been discussed and shown in this very thread, SuperPi is pointless; newer multithreaded versions are faster. A single application like SuperPi will be faster on the same architecture with a faster core, but on a multi core chip with better software it's much faster still.

The only reason an uber-speed single core would be better is if you insist on clinging to decades-old software that was written for computers running Windows 95.
 
Associate
Joined
9 Jan 2006
Posts
1,375
drunkenmaster, while I appreciate the stability point of having spare cores to fall back on should one get caught in some logic loop or other, I don't think there is any chance that programs would be slower on a massively high frequency single core compared to several cores with an equal combined speed.

I know enough about coding to know that if you split the work there are always processing overheads. You simply don't end up with a net gain once you've got to first split, then rejoin, any process. I'll bet there is even a physical or computer engineering law that proves single cores will always be more efficient per clock than multiple cores at the same combined speed (someone with better knowledge here back me up). What you are describing is meant to mitigate that necessary limitation, not reverse it.
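The split-then-rejoin overhead being described can be put into a toy model. A hedged Python sketch, where the work and cost numbers are invented purely for illustration, not measurements:

```python
def parallel_time(work, cores, split_join_cost):
    # Perfect division of the work across cores, plus a fixed
    # cost paid once to split the job and merge partial results.
    return work / cores + split_join_cost

def speedup(work, cores, split_join_cost):
    return work / parallel_time(work, cores, split_join_cost)

# Large job: the overhead is amortised, but the speedup still
# falls short of the ideal 4x.
big = speedup(100.0, 4, 5.0)    # about 3.33x
# Tiny job: the split/join cost dominates, and going parallel
# is a net loss (speedup below 1x).
small = speedup(4.0, 4, 5.0)    # about 0.67x
```

The model matches the post's claim: the parallel version never quite reaches the combined clock speed of its cores, and for small enough jobs it loses outright.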

And of course if we could have just upped clock speeds forever, I dare say cache and other architectural improvements would have kept pace, so some of the problems you're describing wouldn't be an issue.

But as you say, that's neither here nor there, as Intel found out the hard way that you can't just keep upping clock speeds forever.
 
Caporegime
Joined
12 Mar 2004
Posts
29,913
Location
England
Lots of applications, like games, wait for the result of a single thread, and during this time the other cores sit idle. That results in lower overall performance than a single core running at twice the speed, which can run through the same algorithm in half the time. This matters much more than being able to run WMP and Internet Explorer at the same time with slightly better efficiency.

Take a look at Amdahl's law, which explains exactly this problem.
 
Soldato
Joined
22 Mar 2008
Posts
11,657
Location
London
Multiple threads will always have slight overheads, but then again we are nowhere near having true parallel computing as yet anyway. Almost all apps are written to use X threads, rather than being able to fill any number of threads and never blocking due to waiting for a result from a particular thread.
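What "written to use X threads" looks like in practice can be sketched in Python; the checksum function and the chunking are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def checksum(chunk):
    # Invented stand-in for per-chunk work.
    return sum(chunk) % 256

data = list(range(1000))
chunks = [data[i:i + 250] for i in range(0, len(data), 250)]

# The worker count is hard-coded at 4, exactly as the post
# describes: the program asks for X threads rather than
# scaling to however many cores the machine actually has.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(checksum, chunks))

total = sum(partials) % 256
```

On a machine with more than four cores, the extra cores simply sit idle; the thread count is baked into the program, not discovered at runtime.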
 
Associate
Joined
9 Jan 2006
Posts
1,375
Multiple threads will always have slight overheads, but then again we are nowhere near having true parallel computing as yet anyway. Almost all apps are written to use X threads, rather than being able to fill any number of threads and never blocking due to waiting for a result from a particular thread.

But surely if we could have kept upping clock speeds for longer, applications would not have been written like that. They would have been simpler, as they would not have needed the extra machinery by which different threads coordinate with each other.

Amdahl's law, that looks like the one. Thanks Energize.
 
Soldato
Joined
22 Mar 2008
Posts
11,657
Location
London
Upping clock speeds and per-clock performance is just too expensive and difficult now; until we hit quantum computing, that is.
 
Caporegime
Joined
12 Mar 2004
Posts
29,913
Location
England
It's a real shame, as while embarrassingly parallel programs like encoders run way faster on multi core CPUs, until we can increase clock speeds further a lot of applications aren't going to get any speed up. :(
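"Embarrassingly parallel" means each unit of work is fully independent, which is why encoding scales so well. A minimal Python sketch, where the frame data and the XOR "encoding" are invented stand-ins (a real encoder would also use processes rather than threads to sidestep Python's GIL):

```python
from concurrent.futures import ThreadPoolExecutor

def encode_frame(frame):
    # Stand-in for real encoding work: each frame is transformed
    # independently, sharing no results with any other frame.
    return bytes(b ^ 0xFF for b in frame)

frames = [bytes([i]) * 8 for i in range(16)]

# No frame waits on any other, so a pool can keep every core
# busy; that independence is what makes the job scale with cores.
with ThreadPoolExecutor(max_workers=4) as pool:
    encoded = list(pool.map(encode_frame, frames))
```

Contrast this with the game example above, where everything waits on one thread and the extra cores buy nothing.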
 
Soldato
Joined
22 Mar 2008
Posts
11,657
Location
London
It will take applications being rewritten differently, programmers picking up the right skills to design programs for parallel computing, operating system schedulers getting better, and someone releasing a very good parallel computing framework.
 