CPU usage question

This might be considered a hardware question but it is specifically related to behaviour within Windows XP so I hope the mods will leave it here.

If you perform a quick task in Windows which (according to Task Manager) takes less than 100% CPU usage to accomplish, does this mean that a more powerful CPU would not actually accomplish it any quicker? Your existing CPU has accomplished the task asked of it without needing to use all of its available CPU cycles, so a faster CPU would have no effect. Is that right? This is probably a silly question but I'm not entirely sure :)

The sort of tasks I mean are, for example, opening a window, navigating around a web page or forum (setting aside the network speed factors) - things that take little time to complete.
 
A faster CPU will execute programs faster even if they aren't maxxing out the CPU.

Think about it in the most basic terms... if one CPU takes 2 clock cycles to do 1+1=2, as opposed to another CPU which takes 1 clock cycle to do 1+1=2, then the program that runs that instruction is going to get its result quicker, which means it can move on to its next instruction sooner...
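
As a rough illustration of that point (made-up cycle counts and clock speed, purely to show the arithmetic):

```python
# Toy illustration: time to finish a fixed amount of work on two CPU designs.
# The cycle counts and clock speed are invented for the example.

instructions = 1_000_000           # work the program has to get through
cycles_per_instruction_a = 2       # slower design: 2 cycles per instruction
cycles_per_instruction_b = 1       # faster design: 1 cycle per instruction
clock_hz = 1_500_000_000           # both chips clocked at 1.5 GHz here

time_a = instructions * cycles_per_instruction_a / clock_hz
time_b = instructions * cycles_per_instruction_b / clock_hz

print(f"CPU A: {time_a * 1000:.2f} ms, CPU B: {time_b * 1000:.2f} ms")
# CPU B finishes in half the time, even though neither chip shows anywhere
# near 100% usage for more than that brief moment.
```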

It's very difficult to accurately measure how much CPU time a process is using; it's not an exact science. Task Manager, or any process monitor, only provides a snapshot over a period of time, usually one second, depending on the refresh rate of the program. Basically, the kernel keeps a count of the total time slices it has given a thread, and from that Task Manager can derive a more useful percentage by comparing it against elapsed time. But there were certain inaccuracies, at least pre-Vista: if a thread was pre-empted by another (i.e. its time slice was cut short), the count started losing accuracy, because pre-Vista kernels couldn't work out how much it had been cut short by...
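
Very roughly, the percentage a Task Manager-style monitor shows boils down to "CPU time charged to the process during the last refresh interval, divided by the length of that interval". A minimal sketch of that arithmetic (numbers invented, and the real kernel bookkeeping is more involved than this):

```python
# Sketch of how a process monitor turns the kernel's per-process CPU-time
# counter into a percentage. All figures here are invented.

interval = 1.0                # refresh interval in seconds
num_cpus = 1                  # single-CPU machine, as in the XP example

cpu_time_previous = 42.350    # seconds of CPU time charged at last refresh
cpu_time_now = 42.550         # the same counter one interval later

used = cpu_time_now - cpu_time_previous
percent = used / (interval * num_cpus) * 100
print(f"CPU usage over the last interval: {percent:.0f}%")   # -> 20%
```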
 
I tried a simple task: with my CPU at default clocks, opening 'My Computer' uses around 16-20% CPU. If I underclock by 50%, it takes 37-40%. Theoretically, even though the slower CPU is having to work 'twice as hard', if you will, it is still accomplishing the task well within its capabilities, and seemingly instantly in both cases. I appreciate it isn't *quite* that simple but I think the general principle is correct?

So in theory, a processor 2x the speed of mine would simply require around 8-10% CPU usage to do the same task.
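
Putting some made-up numbers on that scaling idea (this assumes the task's pace is set by something other than the CPU, so its cycle demand per second stays fixed):

```python
# If a task needs a roughly fixed number of cycles per second (its pace being
# set by something else, e.g. the disk or the user), the CPU share it shows
# scales inversely with clock speed. Figures are illustrative only.

cycles_needed_per_second = 540_000_000   # hypothetical fixed demand

for clock_hz in (1_500_000_000, 3_000_000_000, 6_000_000_000):
    share = cycles_needed_per_second / clock_hz * 100
    print(f"{clock_hz / 1e9:.1f} GHz -> roughly {share:.0f}% usage")
```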
 
Yes that's right. A task that uses say 20% CPU would only use 10% if the CPU was exactly double the speed.

But technically, no matter how irrelevant or indistinguishable the performance gain is to the user, the CPU that is double the speed would also have accomplished the task in half the time.

Obviously tasks that didn't require 100% CPU in the first place aren't really thought of as the main beneficiaries of a CPU upgrade - but it still helps them out regardless.

In IIS v6+ you can set limits on how much CPU time an application pool can use. Let's assume there is a heavy process that pins that pool at 20% for long periods of time. So if you had two servers, one with a CPU exactly double the speed... the server with the faster CPU would be processing that application pool at twice the speed.
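
To put rough numbers on that IIS example (hypothetical clocks and cap, just to show the proportion):

```python
# Effective compute handed to a capped application pool on two servers.
# The cap and clock speeds are made-up figures for illustration.

cap = 0.20                                          # pool limited to 20% of CPU time
servers = {"server A": 1.5e9, "server B": 3.0e9}    # B is exactly double A

for name, clock_hz in servers.items():
    effective_mhz = clock_hz * cap / 1e6
    print(f"{name}: ~{effective_mhz:.0f} MHz worth of cycles/sec for the pool")
# Same 20% cap, but server B hands the pool twice as many cycles per second,
# so it gets through the same queue of work in half the time.
```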
 
Another way of thinking about it:

If a CPU does 20 clock cycles a second and an application takes 10 cycles to complete a task, then it would take half a second to complete.

Run the same application on a CPU that does 40 clock cycles a second and it now only takes a quarter of a second to complete.
 

Hmm - re your IIS example above: say you had one CPU at 1.5GHz and one at 3GHz (assume for the sake of argument that the 3GHz machine is exactly 2x the speed of the 1.5GHz machine). To test the thesis you would need to give the 1.5GHz machine 40% and the 3GHz machine 20%. To my mind, the time taken to complete the task would be roughly the same. You are saying that the 3GHz machine would still complete it twice as fast? Are you sure about that?
 

Yes.

The 3GHz CPU is doing twice as many clock cycles per second as the 1.5GHz one. Thus, given an application that needs x clock cycles, the 3GHz one will do it twice as fast.
 
Ah I see what you're saying. The reason I'm asking this, apart from finding it interesting for its own sake, is because I was wondering if a faster system than mine would be noticeably quicker for general Windows usage, stuff that doesn't take much CPU % now. I was doing some stuff I normally do, with task manager open, and noticed how it wasn't taxing the CPU very heavily. So what you're saying is that even with this being the case, a more powerful CPU would still be snappier than my existing one - cool :)

It would be interesting to see some benchmarks to prove this though - shame you can't limit CPU usage in Windows for any app of your choosing.
 
It should run faster; how much of a noticeable difference it makes, I have no idea.

Then you have the whole instruction set debate, and thus which chip to get - although Intel is certainly winning at the moment.
 
Right I did the tests. So doesn't this prove what I said was correct?

SuperPI, 1M test

P4 @ 3.0, limited to 25% CPU usage = 211 seconds
P4 @ 1.5, limited to 50% CPU usage = 202 seconds

Allowing for margin of error in the SuperPI test and the CPU limiting program etc. they took basically the same amount of time to complete.
 

Why have you limited the CPU?

A program, if needed, will use 100% CPU. The reason it doesn't show that in Task Manager is that most processes don't need 100% of the CPU for more than a fraction of a second.
What you want to do is run SuperPI at 100% on the 3GHz and on the 1.5GHz, and that shows you the speed difference.

Both CPUs were effectively running at 750MHz, hence the similar time.
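
That 750MHz figure is just clock speed multiplied by the usage cap in each run:

```python
# Effective clock under a usage cap = clock speed x allowed fraction.
runs = [("P4 @ 3.0 GHz, capped at 25%", 3.0e9, 0.25),
        ("P4 @ 1.5 GHz, capped at 50%", 1.5e9, 0.50)]

for label, clock_hz, cap in runs:
    print(f"{label}: ~{clock_hz * cap / 1e6:.0f} MHz effective")
# Both come out at ~750 MHz, which is why the two SuperPI times were so close.
```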
 
Why have you limited the CPU?
Have you read the thread?

A program, if needed, will use 100% CPU. The reason it doesn't show that in Task Manager is that most processes don't need 100% of the CPU for more than a fraction of a second.
I know. This whole thread is about ascertaining how long it takes a CPU to carry out a task when it DOESN'T need to use all of its available performance to achieve it - such as opening a window, or browsing a website or forum. Navigating around this forum causes CPU usage to pop up and down, but it goes nowhere near 100%.

What you want to do is run SuperPI at 100% on the 3GHz and on the 1.5GHz, and that shows you the speed difference.
But running at 100% wouldn't prove anything with regards to the thread question. It is obvious that a 3GHz CPU at full load will be twice as fast as a 1.5GHz one at full load. That isn't in question, and running benchmarks to prove it is hardly necessary.
 

At 50% load it won't execute a thread, then miss a clock cycle, then execute another thread.

It will execute the threads one after the other, so it might only use the CPU for, say, half a second, while Task Manager has a refresh rate of one second.

So over that one second the CPU shows a load of 50%. But if Task Manager refreshed at half-second intervals, the CPU would have shown a 100% load for that half second - while the burst lasted it was running flat out, which is why a CPU twice the speed would still get through it twice as fast.
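
A small simulation of that averaging effect (timings invented for the example):

```python
# A burst of work that keeps the CPU busy for only part of Task Manager's
# one-second window shows up as a partial load, even though while it ran
# the CPU was flat out. Timings are invented.

window = 1.0    # Task Manager refresh interval, in seconds

for clock_ghz, busy_seconds in ((1.5, 0.50), (3.0, 0.25)):
    shown = busy_seconds / window * 100
    print(f"{clock_ghz} GHz: busy {busy_seconds}s of the window "
          f"-> shows ~{shown:.0f}%")
# The 3 GHz chip finished the same burst in half the real time, yet neither
# reading ever says 100% because of the one-second averaging.
```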
 

Eh? Those results prove what we said is true :) The reason they came out almost identical is because as well as reducing the CPU clock to 1.5GHz you also simultaneously increased the allowable CPU usage to 50%... that just counteracted the reduced clock speed...

What you should have done is set it to 1.5GHz but leave the limit at 25%. That way you will see that even though the CPU isn't maxxed out, the task still executes faster on the higher-clocked CPU - which is what I wrote in the first sentence of my first post in this thread: "A faster CPU will execute programs faster even if they aren't maxxing out the CPU." :)
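
With the cap held at 25% on both machines, the effective clocks would no longer match (same simple arithmetic as before):

```python
# The suggested re-run: same 25% cap on both clock speeds.
for clock_ghz in (3.0, 1.5):
    effective_mhz = clock_ghz * 1000 * 0.25
    print(f"{clock_ghz} GHz capped at 25% -> ~{effective_mhz:.0f} MHz effective")
# 750 MHz vs 375 MHz: the higher-clocked CPU should finish in roughly half the
# time, even though neither run ever reports more than 25% usage.
```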
 

But if I've understood correctly, this is exactly the point of dirtydog's argument. Assume that he has a 3GHz CPU. He's saying that if he does a task which requires 20% CPU (and he's not doing anything else which uses his processor significantly), then that task would take the same amount of real time to complete on a 1.5GHz CPU, with the difference being that 40% of the CPU would be used instead of 20% (assuming the same efficiency). The reason that this task hypothetically only uses 20/40% CPU is that it's limited by some other factor on the system.

Everyone else is arguing that the 3GHz CPU would complete the task in half the CPU time of the 1.5GHz, which is obviously true, because it does twice as many CPU cycles in the same time. But dirtydog is talking about real time, and saying that the 1.5GHz would complete the task in the same real time by dedicating twice as many of its resources (CPU cycles) to it in an identical timeframe. His point is that a 1.5GHz CPU at 50% usage is doing the same amount of work in the same time as a 3GHz CPU at 25% usage. In other words, both CPUs are providing the task with the same number of CPU cycles per second (in this case, 0.75GHz worth of CPU cycles) - the 3GHz CPU just has more unused cycles. Thus, if you never actually max out the slower CPU (probably because the speed of the task is limited in some other way), the faster CPU will not run the task any faster. Basically, this is an elaborate way of discussing an operation which is not CPU-limited and may not therefore benefit from a faster CPU. Right, DD?

I think that what DD is saying is true, providing that the slower CPU never hits 100%, i.e. that the task is never limited by CPU speed. But as AcidHell says, the task manager figures are averaged over a second or so. A second is ages for a CPU, and it's highly likely that the slower CPU would have hit 100% for some time in his My Computer test, despite only averaging 37-40%. When the slower CPU is at 100%, the quicker CPU can obviously complete the task in a shorter real time, since at this point real time = CPU time on the slower CPU, and the faster CPU simply has more CPU cycles available to give to the task. Or, looking at it in reverse, the task can utilise a greater number of CPU cycles in its timeframe than the slower CPU can provide. This is the key point. If this is not the case, then the faster CPU would not complete the task any more quickly. An obvious example of this would be a GPU-limited game. DD's example would work if the speed at which his PC opens My Computer is limited by the hard drive and not the CPU, whether it's at 1.5GHz or 3GHz.
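
A toy model of that last distinction, assuming for simplicity that the CPU-bound and non-CPU-bound parts of a task don't overlap (all numbers invented):

```python
# Toy model: total time for a small task = CPU-bound part + everything else
# (disk, network, GPU...), assuming the two parts don't overlap.

cpu_cycles_needed = 300e6     # CPU work in the task (invented)
other_seconds = 0.15          # time waiting on disk/network/etc. (invented)

for clock_hz in (1.5e9, 3.0e9):
    cpu_seconds = cpu_cycles_needed / clock_hz
    total = cpu_seconds + other_seconds
    print(f"{clock_hz / 1e9:.1f} GHz: {total * 1000:.0f} ms total "
          f"({cpu_seconds * 1000:.0f} ms of it on the CPU)")
# Doubling the clock halves only the CPU portion; if the 'other' part dominates
# (a hard-drive-limited My Computer window, a GPU-limited game), the faster
# chip barely changes the wall-clock time.
```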
 