CPU usage question

But as I was saying before... measuring CPU usage for a process is not an exact science. Technically, whichever thread is using the CPU at a point in time already has 100% of the CPU's attention. Obviously over longer periods of time, time slicing may impact that figure slightly because no single process can genuinely have 100% of the CPU's time...

The only time a process shows up on Task Manager as using 99% is if it has been working the CPU almost flat-out for 1 second (or whatever the refresh rate of TM is set to)...

I'm not sure how else to explain it...

The more clock cycles a CPU can perform in a single second, the faster it generally is (ignoring the whole P4 NetBurst vs Athlon 64 issue).

So let's say a CPU can do 1000 clock cycles in 1 second (this would be an incredibly slow CPU by the way :D)...

If a task requires 500 of those clock cycles, then 0.5 seconds of CPU time is required to complete it.

Now let's say another CPU can do 2000 clock cycles in 1 second (still an ultra slow CPU :D)...

Now if we execute that same task (500 clock cycles) then this faster CPU would complete the task in 0.25 seconds.

Surely this clearly shows that even though neither CPU was maxed out, the faster one still performs tasks quicker?

I'm really not sure how else this can be explained!
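
If it helps to see it written down, here's the same arithmetic as a quick Python sketch (the clock speeds and cycle count are the made-up numbers from above, nothing real):

```python
# Two imaginary CPUs, measured in clock cycles per second.
slow_cpu_hz = 1000   # 1000 clock cycles per second
fast_cpu_hz = 2000   # 2000 clock cycles per second

task_cycles = 500    # the task always needs 500 cycles, whichever CPU runs it

# Real time needed = cycles the task requires / cycles the CPU does per second
print(task_cycles / slow_cpu_hz)  # 0.5 seconds on the slow CPU
print(task_cycles / fast_cpu_hz)  # 0.25 seconds on the fast CPU
```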

PS: If you have a copy of the Intel x86 manual it is actually possible to calculate how many clock cycles a set of assembly instructions needs to execute. Most instructions take 1 clock cycle... but some more complex instructions take 2, 4 or sometimes even more clock cycles. Many SSE instructions, for instance, can need about 30 clock cycles... A lot of the tweaks to the Intel Core 2 Duo were in optimising the number of clock cycles needed to perform these more complex instructions. As a result C2D chips can do 64-bit arithmetic in 1 clock cycle, 128-bit arithmetic in 2 clock cycles, etc...
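
As a toy illustration of totting up cycle counts (the per-instruction costs below are invented for the example - the real figures come from the Intel manuals and vary between CPU generations):

```python
# Invented per-instruction cycle costs, just to show the idea of adding
# up how many clock cycles a sequence of instructions needs.
cycle_cost = {
    "mov": 1,      # simple instructions: about 1 cycle
    "add": 1,
    "mul": 4,      # more complex instructions cost more
    "sse_op": 30,  # some SSE instructions need far more
}

program = ["mov", "add", "mov", "mul", "sse_op", "add"]

total_cycles = sum(cycle_cost[instr] for instr in program)
cpu_hz = 3_000_000_000  # a 3GHz CPU ticks 3 billion times per second

print(total_cycles)           # 38 cycles for this made-up sequence
print(total_cycles / cpu_hz)  # real time taken at 3GHz
```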
 
But dirtydog is talking about real time, and saying that the 1.5GHz CPU would complete the task in the same real time by dedicating twice as many of its resources (CPU cycles) to it in an identical timeframe.

But that isn't what's happening. In fact what you are effectively saying there is that the 1.5GHz CPU would increase its clock speed! That is what would be required to "dedicate twice as many resources".

Between each clock cycle there is a measurable amount of "real time"... if you were superhuman you could time this with a stopwatch for instance...

A 3GHz CPU is ticking its clock twice as fast as a 1.5GHz CPU.

That means if you have a task which needs 500 clock cycles to complete then the 3GHz CPU will complete it twice as fast in "real time". No matter which way you look at it! Because the number of clock cycles a set of instructions needs is fixed - it isn't dynamically changed or anything.

Think of it like this:

Processor ticks its clock
1st of 500 instructions is processed
Processor waits for next clock tick...

Processor ticks its clock
2nd of 500 instructions is processed
Processor waits for next clock tick...

Processor ticks its clock
3rd of 500 instructions is processed
Processor waits for next clock tick...

The "Processor waits for next clock tick..." is the critical part that defines both how fast a processor's clock speed is but also ultimately determines how quickly the CPU iterates through a set of instructions.

:)
 
So let's say a CPU can do 1000 clock cycles in 1 second (this would be an incredibly slow CPU by the way :D)...

If a task requires 500 of those clock cycles, then 0.5 seconds of CPU time is required to complete it.

Now let's say another CPU can do 2000 clock cycles in 1 second (still an ultra slow CPU :D)...

Now if we execute that same task (500 clock cycles) then this faster CPU would complete the task in 0.25 seconds.

Surely this clearly shows that even though neither CPU was maxed out, the faster one still performs tasks quicker?

I'm really not sure how else this can be explained!

But both those CPUs were maxed out. The first was maxed out for 0.5 seconds and the second was maxed out for 0.25 seconds. I'm not disputing that in this scenario the faster CPU would complete the task more quickly.

Say I'm playing a game which is GPU-limited, and thus only uses 25% of my CPU. In this case there isn't any set time to 'complete' the task, because the CPU will run at about 25% continuously for as long as I decide to play the game. A quarter of my CPU cycles are being used to process the game. For 75% of the time the CPU has nothing else to do except wait for the graphics card to catch up with it. The CPU has to do something with those cycles, so they come under what Windows calls 'system idle process'.

Then say I switch to a CPU that's half as fast. The game's still GPU limited and uses 50% of my CPU. In every second, the same number of CPU cycles are given to the game in both cases. But many fewer cycles are left running the idle process.
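
Roughly what I mean, with made-up clock speeds (just to put numbers on the percentages):

```python
# Made-up clock speeds, purely to illustrate the percentages.
fast_cpu_hz = 3_000_000_000   # faster CPU: the game shows ~25% in Task Manager
slow_cpu_hz = 1_500_000_000   # CPU half as fast: the same game shows ~50%

game_cycles_fast = 0.25 * fast_cpu_hz   # cycles per second spent on the game
game_cycles_slow = 0.50 * slow_cpu_hz

print(game_cycles_fast)  # 750 million cycles/second for the game
print(game_cycles_slow)  # 750 million cycles/second - the same

idle_fast = fast_cpu_hz - game_cycles_fast  # 2.25 billion cycles/second idle
idle_slow = slow_cpu_hz - game_cycles_slow  # 0.75 billion cycles/second idle
```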
 
Then say I switch to a CPU that's half as fast. The game's still GPU limited and uses 50% of my CPU. In every second, the same number of CPU cycles are given to the game in both cases. But many fewer cycles are left running the idle process.

But that's per second. The GPU might need something calculating before it proceeds. Even if it's only using the CPU 25% of the time, the faster CPU will still move things along faster.
In games it won't make such a difference in that situation. But within Windows, where the graphics load is low, it will.

You need to start thinking about how many clock cycles per second there are, not the percentage used.

At 50% usage the CPU won't process a thread for one cycle then go "hang on, I'm only needed 50% of the time, I'd better be lazy this clock cycle". It'll bang one process out straight after the other until there are no threads left.
 
But both those CPUs were maxed out. The first was maxed out for 0.5 seconds and the second was maxed out for 0.25 seconds. I'm not disputing that in this scenario the faster CPU would complete the task more quickly.

Say I'm playing a game which is GPU-limited, and thus only uses 25% of my CPU. In this case there isn't any set time to 'complete' the task, because the CPU will run at about 25% continuously for as long as I decide to play the game. A quarter of my CPU cycles are being used to process the game. For 75% of the time the CPU has nothing else to do except wait for the graphics card to catch up with it. The CPU has to do something with those cycles, so they come under what Windows calls 'system idle process'.

Then say I switch to a CPU that's half as fast. The game's still GPU limited and uses 50% of my CPU. In every second, the same number of CPU cycles are given to the game in both cases. But many fewer cycles are left running the idle process.

As I keep saying, Task Manager's "percentages" aren't an exact science. They are only provided to give a user-friendly representation. As I said before, technically whichever thread is executing on a processor has got 100% of its time... a processor can only do one thing at once, so whichever thread is on it has its undivided attention. But that is at the nanosecond scale. On a human scale of seconds things are very different.

And this is what TM does... to calculate the CPU % for each process it asks the kernel's scheduler how much processor time it has given to the process since the last refresh, and divides that by the refresh interval multiplied by how many processors/cores the computer has.

It also complicates things by throwing the system idle process into the mix. This is a variable in the discussion that we don't need and doesn't actually add anything either. This is quite a technical subject and it helps to keep the number of variables down and the simplicity up to help people understand. Unfortunately it is also one of "those things" that generally people will either "get" or "not get" the first time around.
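
If you want to see that percentage calculation done by hand, here's a rough Python sketch using the third-party psutil module (assuming it's installed - this is just the delta-over-an-interval idea, not Task Manager's actual code):

```python
import os
import time

import psutil  # third-party: pip install psutil

def cpu_percent_over_interval(pid, interval=1.0):
    """Rough Task-Manager-style figure: CPU time a process used during the
    interval, divided by the interval length times the number of cores."""
    proc = psutil.Process(pid)
    before = proc.cpu_times()
    time.sleep(interval)
    after = proc.cpu_times()

    used = (after.user - before.user) + (after.system - before.system)
    return 100.0 * used / (interval * psutil.cpu_count())

# e.g. watch the current Python process for one second:
print(cpu_percent_over_interval(os.getpid()))
```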

Games are a very bad example because unlike most software they have very complicated timing algorithms which make them execute at a set speed regardless of how fast the processor is. This is why a DOS game from the 90's still plays fine on a modern PC and isn't running in ultra speed.
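
The sort of timing code I mean looks roughly like this (a very stripped-down Python sketch, not taken from any real game):

```python
import time

def game_loop(duration=2.0, speed=10.0):
    """Move something at a fixed speed in real time, however fast the CPU is."""
    position = 0.0
    last = time.perf_counter()
    end = last + duration
    while time.perf_counter() < end:
        now = time.perf_counter()
        dt = now - last          # real seconds since the last frame
        last = now
        position += speed * dt   # movement scales with real time, not with
                                 # how many frames the CPU managed to run
    return position

print(game_loop())  # roughly 20.0 on a fast machine or a slow one
```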
 
As I keep saying, Task Manager's "percentages" aren't an exact science. They are only provided to give a user-friendly representation. As I said before, technically whichever thread is executing on a processor has got 100% of its time... a processor can only do one thing at once, so whichever thread is on it has its undivided attention. But that is at the nanosecond scale. On a human scale of seconds things are very different.

And this is what TM does... to calculate the CPU % for each process it asks the kernel's scheduler how much processor time it has given to the process since the last refresh, and divides that by the refresh interval multiplied by how many processors/cores the computer has.

It also complicates things by throwing the system idle process into the mix. This is a variable in the discussion that we don't need and doesn't actually add anything either. This is quite a technical subject and it helps to keep the number of variables down and the simplicity up to help people understand. Unfortunately it is also one of "those things" that generally people will either "get" or "not get" the first time around.
Thanks for taking the time to explain this in this thread - I have a much better understanding of it now.

Games are a very bad example because unlike most software they have very complicated timing algorithms which make them execute at a set speed regardless of how fast the processor is. This is why a DOS game from the 90's still plays fine on a modern PC and isn't running in ultra speed.
Not sure you're entirely right about DOS games there though? I remember running a few old DOS games - one of the very earliest Test Drive games springs to mind, I think - and it ran at insane speeds even on a Pentium 133 - totally unplayable. AFAIK a lot of old DOS games are like that. I can't swear they weren't pre-90s though, if that makes a difference.

In fact there are apps for modern machines, specifically designed to slow your machine down to make old DOS games playable, eg. Mo' Slo and CpuKiller.
 
OK so there are a few old games that didn't have stable timing code :p It was most noticeable on first-gen multi-processor/multi-core systems... as these had bugged ACPI code which gave the HAL false readings and messed up old games whose timing code was already buggy.

But if you play, say, C&C Tiberian Dawn, it will run at exactly the same pace as it did back in 1995.

It all depends on the game. Some are better than others.
 