Best stability test tool?

I've had 2 blue screens on my new build in 2 days, doing different things. What's the best tool to determine whether this is because my OC isn't stable?

Thanks
Personally I use a combination of Prime95 Large FFTs (max heat) and HCI Memtest. Prime for at least 10 hours, then Memtest (1 instance per cpu thread) for at least 12 hours; 24 is ideal. Memory issues can be tricky to catch, hence the extended test time, but they are often the cause of BSODs.

From what you've said, I suspect RAM/IMC rather than the CPU cores. See if you can do 12 hours of Memtest, 1 instance per core. Leave the system about 2GB free though, to prevent disk thrashing.

E.g. I have 6 threads (1090T) and 8GB, so I run 6 instances with 1024MB each.
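To show how that split works out, here's a rough sketch (Python); the RAM size, thread count and 2GB headroom are just my example numbers, not a rule:

# Split (installed RAM - OS headroom) evenly across one HCI Memtest instance per thread.
total_ram_mb = 8 * 1024      # e.g. 8GB installed
os_headroom_mb = 2 * 1024    # leave ~2GB to the OS to avoid paging/disk thrashing
threads = 6                  # e.g. a 1090T has 6 threads

per_instance_mb = (total_ram_mb - os_headroom_mb) // threads
print(f"Run {threads} instances of {per_instance_mb} MB each")   # Run 6 instances of 1024 MB each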
 
Good recommendation for memtest. You can grab an ISO or USB self-installer for memtest86+ from their site, or similarly as part of UBCD.

This way it tests the entire memory - nothing really compares to booting into memtest86+.
 
No no no. The number of GFlops is completely irrelevant.

The number of GFlops you see is just specific to that task, and won't be the theoretical maximum of the chip as the library isn't very efficient.

:confused:. Are you saying that GFlops is related to a particular task, in that if the task isn't optimised to take full advantage of all the computing power then we will get lower GFlops values as well?

Afaik GFlops is the rate at which the cpu executes floating point operations, and it is tied to cpu speed. So if the cpu speed is faster, GFlops will be higher; if the cpu speed is slower, GFlops will also be lower.

LinPack displays the GIGAFLOPS at the end of each pass, so we only need to have an estimate of it before we run the stress test.

Linpack uses 64 bit (double precision) floating point numbers to store the coefficient matrix etc.
It also uses the SSE(2) instruction set & registers to run as fast as possible. Each SSE2 register is 128 bits wide, so we can pack 2 64-bit values in a single register. Current processors can perform a multiply-add operation on each 64 bit value in a single cycle. So it can do 2 multiplies + 2 adds per cycle, which translates into 4 FLOP (floating point operations) per cycle. This is for one core. For a dual core the value is 8 FLOP/cycle. With a quad core it is 16.

Now let's calculate cpu performance in terms of GIGAFLOPS for a few chips:

single core @ 2 GHz: 2 x 4 = 8 GIGAFLOPS
dual core @ 3 GHz: 3 x 8 = 24 GIGAFLOPS
quad core @ 4 GHz: 4 x 16 = 64 GIGAFLOPS
six core @ 4 GHz: 4 x 24 = 96 GIGAFLOPS

Note that the above values are only upper limits. The actual value we get in LinPack is somewhat lower due to OS overhead, LinPack bookkeeping & because Linpack cannot keep the CPU execution units busy all the time.
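To put the quoted arithmetic into a form you can reuse, here's a minimal sketch (Python), assuming the article's figure of 4 double-precision FLOP per core per cycle (true for the SSE2-era chips discussed here; newer AVX parts differ):

# Theoretical peak GFLOPS = clock (GHz) x cores x FLOP per core per cycle.
def peak_gflops(clock_ghz, cores, flop_per_core_per_cycle=4):
    return clock_ghz * cores * flop_per_core_per_cycle

print(peak_gflops(2.0, 1))   # single core @ 2 GHz -> 8.0
print(peak_gflops(3.0, 2))   # dual core @ 3 GHz   -> 24.0
print(peak_gflops(4.0, 4))   # quad core @ 4 GHz   -> 64.0
print(peak_gflops(4.0, 6))   # six core @ 4 GHz    -> 96.0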

Here is the Intel chart for cpu GFlops values. So if you apply the method above you will get similar figures. At the top of the page is the list of i7, i5 cpus:

http://www.intel.com/support/processors/sb/cs-023143.htm#3

As you already quoted:

The LINPACK Benchmarks are a measure of a system's floating point computing power

That's why we run these programs and check whether we are getting lower or higher GFlops - it tells us how valid the test was, because the GFlops values can vary for a cpu at the same speed if the testing isn't set up properly. For example: the theoretical maximum for my [email protected] is 3.4 x 16 = 54.4 GFlops. It is impossible to reach this figure in IBT testing because, as you mentioned, the library isn't perfectly efficient, and on top of that the mobo chipset, ram latencies, L1/L2 caches etc. all keep you from getting close to the maximum.

However, for IBT testing to be meaningful, the GFlops values should be 70-80% of the theoretical maximum, and this in turn will give higher temps.

So for my [email protected], I should/usually get 43-44GFlops which is high enough. I have also ran instances of IBT where I gained 36GFlops for the same speed and temps were definitely lower by 3-4C and this can make difference to stability.

So I don't know why you are saying that the number of GFlops is completely irrelevant.:confused: That's why it is advisable to re-run IBT several times, tweaking the ram, so you get a better understanding of what GFlops values you should be getting on average for a given cpu speed.:)
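To make that 70-80% rule of thumb concrete, here's a minimal sketch (Python); the 0.7 cut-off is just the guideline discussed above, and the i5 760 figures are the ones quoted in this post:

# Compare a measured LinX/IBT GFlops figure against the theoretical maximum.
def efficiency(measured_gflops, clock_ghz, cores, flop_per_core_per_cycle=4):
    peak = clock_ghz * cores * flop_per_core_per_cycle
    return measured_gflops / peak

# i5 760 @ 3.4 GHz, 4 cores -> 54.4 GFlops theoretical maximum
print(round(efficiency(44.0, 3.4, 4), 2))   # 0.81 -> a proper run
print(round(efficiency(36.0, 3.4, 4), 2))   # 0.66 -> below the 70-80% guideline, re-run the test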




The more memory you use the larger N, so the more equations it solves simultaneously. In IBT it just lets you set the memory usage, but in Linx it specifies the problem size as well (1024MB=11530 equations, 2048MB=16331...)

The higher the memory the longer it takes (as it's doing more equations per loop) but also the more sensitive to errors. I've found standard to be fine, but I usually run high. If you have the time, run it at max available memory.

I agree with this, and it is what I stated previously: the more ram you use, the longer each pass takes to load and the longer the cpu is under stress.
However, if you double the amount of ram, GFlops won't double, because GFlops is tied to cpu speed rather than ram size, as mentioned above. Here is an analogy:

Car = processor
Car Speed = GFlops/cpu speed
Amount of Ram = Length of Road

For a given car speed, the longer the road, the longer it takes to travel.

However, choosing custom and inputting the 'Free' ram value just ensures that you are making use of physical ram and that the data will be readily available to the cpu for processing. Of course, choosing maximum won't change anything except increasing the loading time. However, even with 'maximum', sometimes GFlops can decrease. :) I would really like input from the computer scientists on this forum on this issue, as I don't have a computer science degree to explain the more complex stuff and can only say what I have learned by researching and experimenting :).
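On the memory vs problem size point quoted above: Linpack stores an N x N matrix of doubles, so the problem size only grows with the square root of the memory you give it, which is another way of seeing why GFlops doesn't scale with ram. A rough sketch (Python; LinX keeps a little memory back for itself, hence the slightly lower figures it actually displays):

import math

# Approximate LinX problem size N from the memory assigned to the test:
# the coefficient matrix is N x N doubles (8 bytes each), so mem ~ 8 * N^2.
def problem_size(mem_mb):
    return int(math.sqrt(mem_mb * 1024 * 1024 / 8))

print(problem_size(1024))   # ~11585 (LinX reports 11530 after its own overhead)
print(problem_size(2048))   # ~16384 (LinX reports 16331)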
 
I forgot to post the actual article about Linpack in my previous post, which I am sure some of you have seen already. However, I believe that for the benefit of fellow ocuk members I should post it directly in this thread. If you have any questions then you should direct them to PERSPOLIS, who is the original author of the article:).

He states in the article that he accepts only GFlops values very close to the theoretical maximum for accurate stability testing. However, later on in the comments section he agrees that it is impossible to get that close and seems to accept 70-80% of the theoretical maximum. I would say that my findings generally correspond with his article.

http://www.overclock.net/intel-cpus/645392-how-run-linpack-stress-test-linx.html

Originally posted by PERSPOLIS

About LinPack

The LinPack stress test is an amazingly well-optimized app that solves a system of simultaneous linear equations by the Gaussian elimination method. Anyone who has programmed an equation solver, or at least has studied the algorithm, knows that there is a lot of memory traffic involved. Without optimized dram access, the CPU would be twiddling its thumbs while the memory struggles to catch up. Actually the chipset & memory run like hell to keep up with the CPU. Hence this test stresses the CPU, L1 & L2 caches, chipset & memory all at the same time. It is an excellent all-around stability test, if performed properly.

If this test is so good, then why do so many people complain about it? Some people state that they pass this test but fail prime95 or, even worse, get a BSOD when running normal apps like games. Others report they can pass the test one day & fail another day without any change in their settings.
The short answer: they do not run the test properly!

So... how come I can't run the test properly?

In short: background apps, unneeded OS services, unnecessary paging & using the pc while the test is running steal so many precious CPU cycles from the test as to render it totally useless.
Virtual memory management (paging) is a prime suspect. Windows uses most of the unused ram to cache files, which is good. The problem starts when our app (LinPack) asks for a certain amount of physical ram, but the OS only gives us a mixture of physical & virtual ram, even when there is enough ram to meet the demand. Let's say our system has 3200mb of free ram in win64. Windows uses a large chunk (say 2000mb) for file caching. We run LinX/IBT & start a test that needs 1800mb of ram. The OS decides to preserve the file cache, so it gives us 1100mb of physical ram & 700mb of virtual ram. This means the cpu wastes a considerable amount of time waiting for the OS to read/write data to/from the hard disk, which screws the test big time. The test takes longer to complete each pass, the core temps are lower, we observe wild fluctuation in temps & the system is not stressed enough.

But it could be even worse. Let's say you have an oc that is certified to be stable thru numerous different tests. If you use your system for a couple of hours & then try to run LinX/IBT with a problem size that, say, needs 2000mb of ram, the os may only give you 500mb of physical ram & 1500mb of virtual ram. Now something weird happens: the test fails very fast (sometimes in only one pass) even with settings that are certified to be stable. I'm not sure about the reason, but it most certainly is a software bug. After all, LinPack is not designed to measure hard disk traffic! Now if you force the OS to flush the cache & run the test without a reboot, you can pass the test with flying colors again.

That's why I believe a test run with 2400mb of physical ram is better than one with 3200mb of physical & virtual ram combined.

How to run LinX/IBT properly

-temporarily disable all unneeded apps running in the background
-temporarily disable the auto protect feature of your antivirus
-temporarily close the side bar in vista
-temporarily disable unneeded OS services, including superfetch/prefetch, readyboot/readyboost, windows defender, screensaver, ...
NOTE: If you are not comfortable with disabling the OS services, then reboot & let your computer sit idle for 10 minutes before running the test. Do not use your pc or run any app during this period. You also need to observe all the other steps mentioned here
-Run only one temp monitoring app (hwmonitor, realtemp, coretemp, ...), whichever works best for you
-In the task manager performance tab, check the amount of FREE PHYSICAL MEMORY & use a bit less (200-300mb less) - see the sketch after this list
-In win64, with 4 gigs of ram, use 2400mb or more. In winXP 32 bit use IBT & select 2047mb of ram
-Positively do not use the pc while the test is running
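If you'd rather script the free-memory check than eyeball task manager, something like this works (Python, using the third-party psutil package; this is an addition, not part of the original guide):

import psutil   # third-party: pip install psutil

# Free physical RAM minus a small margin, mirroring the "use a bit less (200-300mb less)" advice.
free_mb = psutil.virtual_memory().free // (1024 * 1024)
suggested_mb = max(free_mb - 300, 0)
print(f"Free physical memory: {free_mb} MB -> give LinX/IBT about {suggested_mb} MB")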

If windows is using most of your ram for caching & you want to flush the cache, run LinX/IBT & select a high amount of memory. For example, if you have 4 gigs of ram, in vista64 use 3000-3200mb. Start the test & let it run for 30-40 seconds, stop the test, close LinX/IBT & check your free memory in task manager.

Another alternative is running the test in safe mode. It seems a good idea, but I have not tried it myself, so I'm not sure about its pros & cons.

...But we still need an index to prove we are running the test properly

Even if we take all the steps mentioned above, we still need an index (call it a criterion or guideline if you like) to make sure we are running the test properly.

Let's face it; we have seen people claim that 10 or even 5 passes is all you need to prove you are stable, but there are others that suggest 100, 200 or even 500+. The only reason these people give is that the test has failed, say, after 70 passes, so you need 100+ iterations. In fact a test may fail due to reasons unrelated to your oc settings, but discussing the reasons needs another thread.

Without an index it's like flying in the dark; we may reach our goal, but we can never be sure.
I mean, 10 passes of a good test can catch errors that 100 passes of a test run blindly cannot!

So let's try to find a good index:

-CPU usage: seems promising. We only need to make sure our CPU usage is close to 100%. Right?
Wrong. To verify, run wprime, prime95 & LinX (or IBT) separately, while monitoring CPU usage & core temps. The cpu usage should be close to max for all 3 tests, but the temps are higher in prime95 versus wprime. LinX (or IBT) temps should be the highest of all 3, which means LinPack stresses the system more than prime95, which in turn is more stressful than wprime.
So CPU usage is not acceptable as an index.

-Core temps: very system dependent. The temps could be vastly different due to case cooling, CPU heatsink, ambient temp, CPU vid, ...
We need an index that is comparable across different rigs.
So temp is not acceptable as an index.

-CPU performance in GIGAFLOPS:
This is the index we have been looking for.
GIGAFLOPS stands for giga (one billion) floating point operations per second.
LinPack displays the GIGAFLOPS at the end of each pass, so we only need to have an estimate of it before we run the stress test.

Linpack uses 64 bit (double precision) floating point numbers to store the coefficient matrix etc.
It also uses the SSE(2) instruction set & registers to run as fast as possible. Each SSE2 register is 128 bits wide, so we can pack 2 64-bit values in a single register. Current processors can perform a multiply-add operation on each 64 bit value in a single cycle. So it can do 2 multiplies + 2 adds per cycle, which translates into 4 FLOP (floating point operations) per cycle. This is for one core. For a dual core the value is 8 FLOP/cycle. With a quad core it is 16.

Now let's calculate cpu performance in terms of GIGAFLOPS for a few chips:

single core @ 2 GHz: 2 x 4 = 8 GIGAFLOPS
dual core @ 3 GHz: 3 x 8 = 24 GIGAFLOPS
quad core @ 4 GHz: 4 x 16 = 64 GIGAFLOPS
six core @ 4 GHz: 4 x 24 = 96 GIGAFLOPS

Note that the above values are only upper limits. The actual value we get in LinPack is somewhat lower due to OS overhead, LinPack bookkeeping & because Linpack cannot keep the CPU execution units busy all the time.


Calculating & estimating CPU performance in terms of GIGAFLOPS

NOTE: From now on I use win64 with 4 gigs of ram in all the following discussion, unless noted otherwise.

The gigaflops value as reported by LinPack is roughly constant for a given CPU at a given core clock. The impact of ram speed & FSB is very small.
This is very important. It means we can have a fairly accurate estimate of the GIGAFLOPS we should achieve even before running the stress test. This is the index I have been talking about.
For example the index for an E5200 @ 3 GHz is roughly 20 GIGAFLOPS. The ram speed & FSB could make this value change from 19 to 20.5; hence if we run LinX/IBT and only get 15 GIGAFLOPS then we are obviously performing an improper test that is not very useful. I have seen people running an E5200 or E6300 @ 4+ GHz and only getting 13 GIGAFLOPS!! That's also why you sometimes see people getting ridiculously low temps while running LinX/IBT. The temp difference between a proper & improper run of the test could be more than 20°C.

Furthermore, we only need one GIGAFLOPS estimate per chip. We can calculate other values by proportion. Let's say for an E8400 @ 3 GHz the index is 21 GIGAFLOPS. Then for a 4 GHz oc we can expect a performance value of 21 x 4/3 = 28.
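A minimal sketch of that proportional scaling (Python), using the E8400 numbers from the paragraph above:

# Scale a known baseline GFlops index to a new clock speed by simple proportion.
def scaled_gflops(base_gflops, base_clock_ghz, target_clock_ghz):
    return base_gflops * target_clock_ghz / base_clock_ghz

print(round(scaled_gflops(21.0, 3.0, 4.0), 1))   # E8400: 21 GIGAFLOPS @ 3 GHz -> 28.0 expected @ 4 GHz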

But all these need to be verified. To this end, I performed a number of tests on my sig rig in vista64. First I overclocked my cpu to 3 GHz (333x9) with my ram clocked at 1066 MHz. After several runs of LinX (just to make sure) I found the GIGAFLOPS for this oc. Then I kept FSB & ram speed constant & raised the multi. In each step, I report the actual GIGAFLOPS as displayed by LinX and a value that I have calculated from the base (3 GHz) oc. Here are the results:


CPU clock (GHz) | FSB x Multi (MHz) | RAM Speed | Actual GIGAFLOPS | Calculated GIGAFLOPS
3.00 | 333 x 9.0 | 1066 | 20.3 | 20.3
3.16 | 333 x 9.5 | 1066 | 21.2 | 21.4
3.33 | 333 x 10 | 1066 | 22.2 | 22.6
3.50 | 333 x 10.5 | 1066 | 23.1 | 23.7
3.66 | 333 x 11 | 1066 | 24.0 | 24.8

As you see, the calculated & actual values are very close, which proves our point.
Also note that the calculated values are higher than the actual values & the delta grows as we oc higher. The reason is that we oc the cpu (and the L1 & L2 caches), but keep the memory clock constant. To see if this is really the case I ran another test:

CPU clock (GHz) | FSB x Multi (MHz) | RAM Speed | Actual GIGAFLOPS
3.00 | 333 x 9.0 | 1066 with optimized settings | 20.3
3.00 | 333 x 9.0 | 800 with stock settings | 19.5

Once more our point is proven. In short, the impact of ram speed is small. The calculated value could roughly overestimate the actual value by 1%-3%.

Next let's try to make the calculated & actual GIGAFLOPS equal! This is done by overclocking CPU/FSB/RAM at the same time and by the same amount. The aim is to verify our assumption!

We start by setting CPU/FSB/RAM @ 2.5/200/800. Then we oc by 20%, which is CPU/FSB/RAM @ 3/240/960. Finally we oc by 33% with CPU/FSB/RAM @ 3.33/266/1064.


CPU clock (GHz) | FSB x Multi (MHz) | RAM Speed | Actual GIGAFLOPS | Calculated GIGAFLOPS
2.50 | 200 x 12.5 | 800 | 16.24 | 16.24
3.00 | 240 x 12.5 | 960 | 19.54 | 19.49
3.33 | 266 x 12.5 | 1064 | 21.71 | 21.60

The results speak for themselves.

Now let's guess the expected GIGAFLOPS for my CPU @ 4 GHz: 20.3 x 4/3 = 27.06.
But because I'm not overclocking my ram, the actual value is a bit less. By consulting the first table I estimate the actual value to be 25.8 GIGAFLOPS.

We don't even need to find out our actual base GIGAFLOPS ourselves; we can ask others. Let's say you have an E8400. You ask other (reliable) people for an estimated (or measured) GIGAFLOPS for your chip @ stock. You are given a value of 21 GIGAFLOPS. Now you want to calculate the expected value for an oc of 3.6 GHz. So 21 x 3.6/3 = 25.2. Now a good estimate for your oc should be around 24.5 GIGAFLOPS.

The following is the estimated GIGAFLOPS for a few chips:

E5200 @ 3 GHz 19-20 GIGAFLOPS
E8400 @ 3 GHz 21-22 GIGAFLOPS
Q9550 @ 4 GHz 54-56 GIGAFLOPS

I5 quadcore @ 4 GHz 59-61 GIGAFLOPS
I7 quadcore @ 4 GHz 60-62 GIGAFLOPS
Gulftown 6core @ 4 GHz 90-93 GIGAFLOPS

How many passes?

I can only talk about my choice; yours may be different and I can understand that.
I suggest running the test for 30 to 50 minutes, but not less than 10 passes.

What about Win32?

Here is the good news. Almost everything I said about running the test properly applies to winxp 32 bit as well. Using IBT with 2047mb of ram & making sure I'm running the test properly, I have always been able to reproduce an error that has occurred during a LinX/IBT run in vista64.
Note that in win32, linpack uses 32 bit code, but in win64 it uses 64 bit code that runs faster. Also, vista64 memory management is much better than 32 bit xp. As a result the cpu performance (in GIGAFLOPS) is lower in win32. For example the GIGAFLOPS for my cpu @ 3.66 GHz is almost 24 in vista64 & almost 19.5 in xp 32 bit. So none of the GIGAFLOPS values given for win64 are useful for win32. You need to work out the proper numbers yourself.

Acceptable tolerances

It depends on the accuracy of your estimated GIGAFLOPS value. Generally a value of -1% to -3% should be ok. Personally, I try to make my estimate as accurate as possible; then I only allow a tolerance of around -1.5%. Let's say I'm expecting 23.6 GIGAFLOPS. I accept every pass with a value of 23.2+. Now if the total no. of passes with a value of 23.2 GIGAFLOPS or more is less than 10, the whole test is unacceptable to me. You really need to experiment & decide for yourself.
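A small sketch of that acceptance rule (Python); the -1.5% tolerance and the 10-pass minimum are just the personal figures given above, and the per-pass results are made-up example numbers:

# Count the passes that meet the expected GFlops within a tolerance and
# require a minimum number of acceptable passes.
def run_acceptable(pass_gflops, expected, tolerance=0.015, min_good_passes=10):
    threshold = expected * (1 - tolerance)
    good = sum(1 for g in pass_gflops if g >= threshold)
    return good >= min_good_passes, threshold

results = [23.5, 23.4, 23.6, 23.3, 23.5, 23.4, 23.6, 23.5, 23.4, 23.5]   # example run
ok, threshold = run_acceptable(results, expected=23.6)
print(f"threshold ~{threshold:.1f} GFlops, run acceptable: {ok}")        # threshold ~23.2 GFlops, run acceptable: True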


The number of calculations performed during each pass

The number of floating point operations (FLOP) performed during each pass is a function of n, where n = the no. of equations (problem size).
I have done the math. The result is a cubic polynomial: an³+bn²+cn+d, where a, b, c & d are known values. It is possible to keep only the highest order term (an³) and still get very good accuracy.
Here is the result:
The number of math operations in each pass = (2/3)n³ = n³/1.5 FLOP
Divide by 1e9 (one billion) to convert to gigaflop.
Note that this should be equal to the product of Time x GFlops as reported by LinX for each pass.

As an example let's calculate the no. of math operations with a problem size of 10000:

10000³/1.5e9 = 666.7 gigaflop

Now we can calculate the time needed for each pass even before starting the test. This is the net time Linpack spends to solve the equations. It needs a little extra time to calculate & fill the arrays at the start of each pass.
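Putting those two formulas together, a quick sketch (Python) of the work per pass and the expected time per pass; the 25.8 GFlops figure is the estimate used earlier in this article:

# Work per pass from the problem size, and expected pass time at a sustained GFlops rate.
def gigaflop_per_pass(n):
    return n**3 / 1.5e9                        # ~(2/3) * n^3 operations, in gigaflop

def seconds_per_pass(n, sustained_gflops):
    return gigaflop_per_pass(n) / sustained_gflops

n = 10000
print(round(gigaflop_per_pass(n), 1))          # 666.7 gigaflop per pass
print(round(seconds_per_pass(n, 25.8), 1))     # ~25.8 seconds per pass, plus array-fill overhead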


Here I try to clarify a few points:

-The max gigaflops is measured like this: a few numbers are loaded into cpu registers & math operations (multiplies & adds) are performed on them billions of times. There is no L1, L2 or ram access involved. This means we get the same number for similarly clocked e5200, e7300 & e8400. Similarly, we get the same result for similarly clocked q8300, q9400 & q9550!
As you see, the result (while correct) is useless as a benchmark. It only serves as a performance upper limit.
In real world apps, we can never ever get the max gigaflops.
A CPU with more cache and/or a more advanced architecture helps us get closer to the max value.

-The impact of ram on CPU performance is small, but not trivial. This means if you are using very slow ram with a powerful CPU, your gigaflops performance will suffer. The performance hit could be even more with a quad core.
Also note that if you are using a combination of Nvidia chipset + slow ram + untweaked bios settings, the memory subsystem may become a bottleneck.

-It seems this test is somewhat more optimized for INTEL processors & chipsets, so the difference between max & actual gigaflops could be a tad more for AMD processors.

-Still, the best method is to measure your baseline gigaflops as I've explained in the op, regardless of the max value.
 
I find that for SB, games and general usage are a better stress test than Linx or Prime95. I too had my CPU at 4,600MHz, which would pass Prime95 blend for a couple of hours. However, it would crash when playing a 720p movie. It needed more voltage (1.288V under load) for full stability.
 
I find that for SB, games and general usage are a better stress test than Linx or Prime95. I too had my CPU at 4,600MHz, which would pass Prime95 blend for a couple of hours. However, it would crash when playing a 720p movie. It needed more voltage (1.288V under load) for full stability.

But would it pass IBT/Linx - I doubt it.

WZero, those links are good. Looking back at my logs I was running IBT at high=2048MB (with no regard for shutting other processes down) and was getting 78% of max, which lies within the writer's 70%-80% guideline.

Low 80s were achievable by setting memory to 2400MB as per the guide (4GB memory), but much higher was worse due to caching.

Cheers
 
A couple of hours isn't really sufficient for prime blend. I've had clocks fail after 5 hours with blend - failure on one thread - and had to up the voltage to 1.293 from 1.288. Still got a bit of tweaking/testing to do yet.
 
lol - why don't you guys listen to the person with the most experience here? RJKONEILL is correct - nothing beats 8 hours of prime95.

Linx and other tests are for quick, dirty testing; for overall stability you have to fully load your system over a LONG course of time - i.e. 8 hours - personally that's a minimum for me (and for many a hardened overclocker - just check the hardcore/extreme websites out there, they will concur). I prefer 12 hours - 24 hours is a bonus :)

Plenty of people here complaining about 2 hours of prime95 and then the pc crashes during gaming - lol - no wonder. It's not prime95 that's the problem - it's your patience! The best things come to those who wait. Overclocking takes AAAGES to get right - it's not an overnight boom and you're done.

Run it in BLEND mode to stress both the IMC and CPU (I'd go so far as saying this is the best test for any system) or Small FFTs for maximum heat from the CPU. I've seen loads of sites insisting on 8 hours of prime95 blend before it's considered 'stable' - that's because one can easily do 6 hours then crash - 8 hours is the magic figure, like RJKONEILL says.
 
lol - why don't you guys listen to the person with the most experience here? RJKONEILL is correct - nothing beats 8 hours of prime95.

I understand what you're saying, but will still favour 20-100 loops of IBT over any priming.

Unless you can find an overclock which passes IBT/Linx but is unstable in an 8 hour prime. Challenge!
 
I think we all agree that more ram usage in IBT means each pass takes longer to complete, hence the overall test is also longer for a given number of passes and cpu speed. I wanted to explore the ram side of the test a bit more, which will hopefully result in a better understanding of IBT usage.

Here is the Windows 7 task manager. I am strictly looking at the 'Physical Memory' section:

wtm.png


If we click on 'Resource Monitor' and then click on 'Memory' tab, we get the following screen:

wtmresourcemonitor.png


If we place the mouse pointer on an individual heading, we get a description of what that particular category of memory is. I have 4GB, or 4096MB, of ram installed; however, task manager shows it as 4094MB. That's because 2MB of physical ram in my case is reserved by the bios and other drivers, as shown in the 'Hardware Reserved' description. So total memory will always be 4094MB in my case.
Listing all the categories with their descriptions:

Hardware Reserved: Memory that is reserved for use by the BIOS and some drivers for other peripherals

In Use: memory used by processes, drivers or the operating system

Modified: memory whose contents must be written to disk before it can be used for another purpose

Standby: memory that contains cached data and code that isn't actively in use

Free: memory that doesn't contain any valuable data and will be used first when processes, drivers or operating system need more memory

Available: Amount of memory that is immediately available for use by processes, drivers or the operating system.

Available memory = Standby memory + Free memory (Eq1)

So if you add both of them up, you will get 'Available' memory


Cached: Amount of memory that is containing cached data and code for rapid access by processes, drivers and operating system.

Cached memory = Modified memory + Standby memory (Eq2)

So if you add both of them up, you will get 'cached' memory

Total: Amount of physical memory available to operating system, device drivers and processes

Installed: Amount of physical memory installed in the computer

Apart from Total, Installed and Hardware Reserved, the rest of the categories are all variable, as there is interdependency among them, shown by the equations.

When you choose 'Maximum' stress level in IBT, you are always using 'Available' memory in the test as shown below.

So IBT 'Maximum' stress level = Available memory

maxibt.png



Looking at Available memory again and using the 2 equations above:

Available memory = Free memory + (Cached - Modified)

In the 'Resource Monitor', when the available memory is requested by the IBT maximum stress level, the 'Free' and 'Modified' memory gets used up afaik. However, the 'Standby' memory doesn't get used up to the point where the equation can be balanced. In other words, Windows 7 doesn't free up from the standby list all of the physical memory needed by the IBT stress test, hence you can still see a large cached value in windows task manager.
For example in the above pic:

Available = 3260MB
Free = 1870MB
Cached = 1396MB
Modified memory = 6MB (the rest is taken by Windows 7, drivers and background processes, i.e. 'In Use' memory)

So before performing the test:

Available = Free + (Cached - Modified)
3260MB = 1870MB + (1396MB - 6MB)
3260MB = 1870MB + 1390MB
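Written as a quick sketch (Python), that consistency check on the before-test figures looks like this:

# Resource Monitor figures before the test, in MB.
free, cached, modified = 1870, 1396, 6
standby = cached - modified        # Eq2 rearranged -> 1390
available = free + standby         # Eq1 -> should match the reported value
print(available)                   # 3260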

However, during testing I found that cached memory went down to 386MB iirc in task manager. So 1396 - 386 = 1010MB of physical memory was freed by Windows 7 from the cache and used during IBT testing. Calling the shortfall X, that means:

Available = Free + (freed from cache - Modified) + X
3260MB = 1870MB + (1010MB - 6MB) + X
3260MB = 1870MB + 1004MB + X
3260MB = 2874MB + X

X = 386MB. I suspect this amount was virtual memory that Windows 7 provided to IBT to make up for the part of the cache that couldn't be freed. I think that's what Perspolis meant when he said that the IBT maximum stress level uses a mixture of physical and virtual memory.
If you use the custom stress level and input the 'Free' physical memory or a slightly lower value, then only that memory will get used up.

Notice that you can increase the amount of 'Free' physical memory available for IBT by running it on maximum for 20-30s, as suggested by Perspolis, and then stopping it. You should notice an increase in 'Free' memory and thereby also an increase in 'Available' memory. You can keep repeating this until you get the maximum 'Free' ram for your system, and the cached value will be lower.

For my 4GB of ram, I can get 3200MB of 'Free' physical memory, which is roughly 80% of the total physical ram installed. The rest is taken up by Windows 7, device drivers, background processes etc.

Available memory = Free memory + (Cached - Modified)
2490MB = 1000MB + (1500MB - 10MB) (not advisable)

or 2490MB = 2000MB + (500MB - 10MB) (recommended)

For the same amount of 'Available' memory we can have two different splits of 'Free' and 'Cached' memory. So IBT can produce two different GFlops figures for the same cpu at the same clock speed.

E.g.

For my [email protected]:

IBT maximum stress level gives 44-45 GFlops, which is similar to the results obtained when using 'Free' memory. However, I ran IBT maximum stress level again this morning and got 30 GFlops lol:p. My temps were 5-7C lower across the 4 cores. So it is best to rerun IBT several times to ensure that you are getting the appropriate GFlops values.

However, if you are going to use the IBT 'maximum' stress level then it is advisable to ensure you have as much 'Free' physical memory as possible and that your cached memory is reduced. Otherwise a large amount of virtual memory could play a role in the stress testing, and you could end up with lower GFlops values and lower temperatures.

It is always best to use the 'custom' stress level and input the 'Free' value or lower for better stress testing, as the data will be readily available to the cpu for processing.

Guys feel free to correct me if I made any mistakes and I will very much appreciate it:). I am amazed that every day I am learning and experimenting with new things that seemed trivial before:).
 
lol - why don't you guys listen to the person with the most experience here? RJKONEILL is correct - nothing beats 8 hours of prime95.

Linx and other tests are for quick, dirty testing; for overall stability you have to fully load your system over a LONG course of time - i.e. 8 hours - personally that's a minimum for me (and for many a hardened overclocker - just check the hardcore/extreme websites out there, they will concur). I prefer 12 hours - 24 hours is a bonus :)

Plenty of people here complaining about 2 hours of prime95 and then the pc crashes during gaming - lol - no wonder. It's not prime95 that's the problem - it's your patience! The best things come to those who wait. Overclocking takes AAAGES to get right - it's not an overnight boom and you're done.

Run it in BLEND mode to stress both the IMC and CPU (I'd go so far as saying this is the best test for any system) or Small FFTs for maximum heat from the CPU. I've seen loads of sites insisting on 8 hours of prime95 blend before it's considered 'stable' - that's because one can easily do 6 hours then crash - 8 hours is the magic figure, like RJKONEILL says.

You are partially right. I prefer IBT for cpu overclock stress testing as it will heat up the cpu 7-10C more than Prime small FFTs. If you fail IBT, that means the cpu isn't stable. Actually it also tests other components of the system, such as the northbridge and ram, so you could fail IBT if not enough voltage is supplied to either the northbridge or the ram.
IBT is also cyclic in nature, so the loading and unloading is like pushing and pulling, which puts the cpu under a lot more stress.

Prime95 blend is more for the overall long-term stability of the system. The loading is constant and temps are lower than IBT, so it gives a good measure of how stable your pc is over extended periods.

It is best to use both so you know that your pc can handle different stress programs effectively.

I understand what you're saying, but will still favour 20-100 loops of IBT over any priming.

Unless you can find an overclock which passes IBT/Linx but is unstable in an 8 hour prime. Challenge!

I agree. This is what I do for stress testing:

- 50 passes of IBT for cpu overclock stress testing
- 10 passes of memtest86+ (latest version) for ram stability, including ram overclock
- 10 hours of Prime95 blend test for overall system stability (cpu + ram + northbridge chipset)
- 30 mins of furmark if also testing the graphics card

If the pc passes those tests then I know it is stable enough:).
 
I agree. This is what I do for stress testing:

- 50 passes of IBT for cpu overclock stress testing
- 10 passes of memtest86+ (latest version) for ram stability, including ram overclock
- 10 hours of Prime95 blend test for overall system stability (cpu + ram + northbridge chipset)
- 30 mins of furmark if also testing the graphics card

If the pc passes those tests then I know it is stable enough:).

I agree, no one test is enough to verify system stability. Yours is a pretty thorough test strategy, but I would use HCI MemTest rather than MemTest86+ as it's better at detecting subtle, intermittent memory errors that can be caused by bad BIOS parameters. I've had setups that could run MemTest86+ for 24 hours but would fail HCI within 2-3 hours.
 