Best stability test tool?

Please, please, please don't use Folding as a stress test. If you're sending off spurious results because of unstable hardware, it's a real headache for the scientists analysing the data.

LinX or IntelBurnTest on High or Maximum memory, for 20, 100 or 500 loops depending on your requirements, is the best.

I used 100 loops on High and consider that good enough for me.
 
I am confused about IBT and I think a lot of people are running it incorrectly.

So let's say you run 3.5GHz on a quad.

That's 16 x 3.5 = the GFlops you should be achieving? So around 56?

So you select 4 cores and Custom memory, and you need to change the memory setting until you get as close to 56 GFlops as possible? It's highly confusing.

I tend to get around 44 GFlops at 2000MB, so look at your GFlops results the next time you run Maximum; if it's low then the test is invalid, I think? Does anyone know exactly which settings we're meant to be choosing?

I see that most people think selecting Maximum is right, but I don't think it is; you must work out your own custom memory settings.
I think all the preset settings are inaccurate, especially High and Maximum.
 
It doesn't work like that with this application; you're meant to select Custom and enter the free memory you have available, nothing more and nothing less.

Maximum gives me invalid results: I get low-20s GFlops on Max and lower temps, whereas if I set my free memory I get 44 GFlops and hit 90°C.

WingZero has kind of explained it, but it still leaves me a bit confused, tbh.
Hi guys,

I just came across this article yesterday on how to run IntelBurnTest/Linpack properly. I don't know if you guys have read it, but it seems very interesting. Of course, re-reading the article several times, including the comments, will hopefully make it simpler to understand:

http://www.overclock.net/intel-cpus/645392-how-run-linpack-stress-test-linx.html

Apparently, from my understanding, the author suggests the following before running the program:

- Disable C1E and EIST in the BIOS
- Temporarily close any running programs such as web browsers, including antivirus
- In Windows Task Manager, disable as many background programs as you can
- Look at the free physical memory shown in Task Manager, not the available memory
- Choose 'Custom' stress testing in IBT and input the free physical memory (preferably slightly less) for accurate testing
- Watch for consistent GFlops (speed) values during stress testing, i.e. the speed at which the CPU is calculating the equations

E.g. theoretically, regardless of how much actual physical RAM you use:

Single core processor @ 3GHz: 4 x 3 = 12 GFlops
Dual core processor @ 3GHz: 8 x 3 = 24 GFlops
Quad core processor @ 3GHz: 16 x 3 = 48 GFlops
Six core processor @ 3GHz: 24 x 3 = 72 GFlops

Those figures above are the theoretical maximum speeds at which these CPUs can perform the stress-test calculations at 3GHz, and GFlops increase linearly as you increase CPU frequency.

So a quad core @ 4GHz: 16 x 4 = 64 GFlops

Here is Intel's info to back up those GFlops values:

http://www.intel.com/support/processors/sb/cs-023143.htm#3
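The arithmetic behind those figures is simply cores x flops-per-cycle x clock, with 4 double-precision flops per cycle per core for these SSE2-era chips (that's where the 4x/8x/16x/24x multipliers above come from). A minimal sketch of the calculation:

```python
# Theoretical Linpack peak, using the thread's rule of thumb of
# 4 double-precision flops per cycle per core (SSE2-era CPUs).
FLOPS_PER_CYCLE = 4

def peak_gflops(cores: int, ghz: float) -> float:
    """Theoretical maximum GFlops = cores x flops/cycle x GHz."""
    return cores * FLOPS_PER_CYCLE * ghz

print(peak_gflops(1, 3.0))   # 12.0
print(peak_gflops(4, 3.0))   # 48.0
print(peak_gflops(4, 4.0))   # 64.0
print(peak_gflops(4, 2.4))   # 38.4 (a Q6600 at stock)
```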

For example, my Q6600 @ 2.4GHz should give me 16 x 2.4 = 38.4 GFlops in the stress test. When I run IBT with the Custom setting and input the free physical memory, I get roughly 30.2 GFlops, which is still acceptable, though going by the article I should get closer to 34 GFlops. This is to be expected, as real test values can never reach the theoretical maximum due to L1/L2 cache, motherboard chipset, RAM, etc.

However, what the author seems to suggest is that if you use the 'Maximum' stress level, you will be using 'available RAM', which consists of both physical and virtual memory. This slows down your CPU and you may get 15 GFlops as opposed to 30+. With low GFlops values, your temps will be lower and your test will be invalid.

The author suggests running IBT for 30-50 minutes, or more than 10 passes :).
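A quick way to apply that sanity check is to compare the GFlops IBT reports against the theoretical peak. The percentage interpretations below are my own illustration, not figures from the article:

```python
def efficiency(measured_gflops: float, cores: int, ghz: float,
               flops_per_cycle: int = 4) -> float:
    """Fraction of the theoretical Linpack peak actually achieved in a run."""
    return measured_gflops / (cores * flops_per_cycle * ghz)

# The Q6600 @ 2.4GHz example above: 38.4 GFlops peak, 30.2 measured.
print(f"{efficiency(30.2, 4, 2.4):.0%}")  # 79% of peak: a healthy run
# A Maximum-preset run that spills into virtual memory, e.g. 15 GFlops:
print(f"{efficiency(15.0, 4, 2.4):.0%}")  # 39% of peak: likely an invalid test
```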

So basically your test is invalid until you get the highest GFlops that you should be getting?
 
You select Custom and put in the free RAM you have, not the available RAM.

It's where it says "free" and not "available" under the Performance tab in Task Manager.

I get lower temps selecting Maximum than I do selecting Custom and setting my free RAM amount.
 
One thing that a lot of people don't do with IBT is set the number of threads manually.

Yes, they soon realise they have to specify Standard/High/Maximum to get all the memory tested, but unless you specify the number of threads you won't get 100% load on your CPU.

Don't use "Auto" threads; you aren't stressing your CPU enough with that.

So if you have a 2500K you need a minimum of 4 threads; if you have a 2600K you need a minimum of 8 threads. Then set it to the highest amount of memory it will use (for some that may only be High or Very High, depending on your available memory), then run it for 5 iterations; if stable, run it for 10, then 20, then 50, and see how stable it is.

However, there are quite a few reports of Sandy Bridge being stressed more by Prime95 Blend than anything else, so if you are OK on IBT, try Prime Blend for 8 hours; again, make sure you use the right number of threads.
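If you're unsure how many threads to enter, the number to match is your logical CPU count (4 on a 2500K, 8 on a 2600K with Hyper-Threading). A one-liner to check it:

```python
import os

# Logical CPUs = the minimum thread count to enter in IBT/LinX
print(os.cpu_count())
```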
 
I am confused about IBT and I think a lot of people are running it incorrectly.

So let's say you run 3.5GHz on a quad.

That's 16 x 3.5 = the GFlops you should be achieving? So around 56?

So you select 4 cores and Custom memory, and you need to change the memory setting until you get as close to 56 GFlops as possible? It's highly confusing.

I tend to get around 44 GFlops at 2000MB, so look at your GFlops results the next time you run Maximum; if it's low then the test is invalid, I think? Does anyone know exactly which settings we're meant to be choosing?

I see that most people think selecting Maximum is right, but I don't think it is; you must work out your own custom memory settings.
I think all the preset settings are inaccurate, especially High and Maximum.

That's still excellent, C64. Getting 44 GFlops out of a maximum 56 GFlops is close to 80%, which is outstanding for a [email protected]. As you noted, I got more or less similar results for my [email protected]. In that article the author mentions it is impossible to achieve the maximum GFlop value for a given CPU speed, as you will be limited by the motherboard chipset, RAM, L1/L2 cache, etc. The best you can do is achieve the highest possible value you can by retesting several times for greater stability :).

You select Custom and put in the free RAM you have, not the available RAM.

It's where it says "free" and not "available" under the Performance tab in Task Manager.

I get lower temps selecting Maximum than I do selecting Custom and setting my free RAM amount.

Spot on. As you found out, the Maximum stress level sometimes gives low GFlops values, as it makes use of both physical and virtual RAM, as mentioned in the article. I ran the Standard test and even then I was getting about 28 GFlops for my [email protected], lol, and temps were lower.

Afaik virtual memory uses the hard disk for storing data and is much slower than actual physical RAM. Once the physical RAM is full, virtual RAM is used for storing data, and in the meantime the CPU is waiting for all the data to be stored and then passed to it for processing. This means that GFlops values will be lower and in turn temps will also be lower.

So with the Custom setting and choosing 'free' memory, you ensure that only actual physical RAM is used and that data is readily available to the CPU for processing, generating higher GFlops values and core temps :)
 
Well, I'm doing better now after putting my CPU multiplier on 45x and leaving everything else on Auto. I've been gaming for a few hours (and the only BSODs I saw were in games, so that's what I'm using to test) and all is OK.

Fingers crossed.
 
As above, just set it to the correct number of threads and as much memory as you care to use. You won't ever reach the peak GFlops of the chip in real life. The more runs the better; I've had chips fail after 80.
 
So it's a good tool for working out where adding MHz isn't adding performance as well? Let's say 4GHz = 55 GFlops and 4.3GHz = 56 GFlops; it's hardly worth the extra MHz? Finding the sweet spot, so to speak.
 
So it's a good tool for working out where adding MHz isn't adding performance as well? Let's say 4GHz = 55 GFlops and 4.3GHz = 56 GFlops; it's hardly worth the extra MHz? Finding the sweet spot, so to speak.

Yeah, I think this could be another way of looking at it :). It can give a good indication of whether any extra clock speed is delivering substantial returns. As you noted, both you and I get more or less the same GFlops value at [email protected] and 3.5GHz, so there is hardly any difference between the two speeds, though 3.5GHz will be slightly faster in benchmarks.

This is what I meant when I said that for gaming there won't be much difference between a [email protected] and 3.6GHz :). Maybe just 2-3 FPS more in game.
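One way to make that sweet-spot comparison concrete is to look at GFlops per GHz: when the ratio starts falling, extra clock speed is no longer paying off. A small sketch using the hypothetical numbers from the posts above:

```python
def gflops_per_ghz(gflops: float, ghz: float) -> float:
    """Throughput per unit of clock; a falling value means diminishing returns."""
    return gflops / ghz

print(round(gflops_per_ghz(55, 4.0), 2))  # 13.75
print(round(gflops_per_ghz(56, 4.3), 2))  # 13.02 -- less work per clock tick
```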
 
I'll have to try the free-RAM Custom method you've posted up for IBT, WingZero. Thanks for sharing. :)
 
I'm not sure about free RAM; I just think you have to fiddle with the amount of RAM until you achieve the maximum average GFlops you should be getting. So if you are seeing 20-30 on a well-clocked quad, you aren't using the right settings to get maximum stress.

Like with my Q6600: if I'm not seeing 43-44 GFlops, then I've entered the wrong RAM amount or settings.
 
No no no. The number of GFlops is completely irrelevant.

The LINPACK benchmarks are a measure of a system's floating-point computing power. Introduced by Jack Dongarra, they measure how fast a computer solves a dense N by N system of linear equations Ax = b, which is a common task in engineering. The solution is obtained by Gaussian elimination with partial pivoting, taking 2/3·N³ + 2·N² floating-point operations. The result is reported in millions of floating-point operations per second (MFLOPS).

The more memory you use, the larger N, so the more equations it solves simultaneously. IBT just lets you set the memory usage, but LinX specifies the problem size as well (1024MB = 11530 equations, 2048MB = 16331, ...).

The GFlops figure you see is specific to that task, and won't be the theoretical maximum of the chip, as the library isn't perfectly efficient. However, it is very sensitive to instability, which makes it an excellent overclocking tool (any tiny error in solving the system of equations is magnified).

The higher the memory, the longer it takes (as it's solving a bigger system per loop), but also the more sensitive it is to errors. I've found Standard to be fine, but I usually run High. If you have the time, run it at the maximum available memory.
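The memory-to-problem-size relationship quoted above is easy to reproduce: an N x N matrix of 8-byte doubles needs 8·N² bytes, so N ≈ √(memory/8). LinX reserves a little RAM for itself, which is why its reported sizes come out slightly below this upper bound:

```python
import math

def problem_size(mem_mb: int) -> int:
    """Largest N such that an N x N matrix of 8-byte doubles fits in mem_mb."""
    return math.isqrt(mem_mb * 1024 * 1024 // 8)

def linpack_flops(n: int) -> float:
    """Floating-point operations for one solve: 2/3*N^3 + 2*N^2."""
    return (2 / 3) * n**3 + 2 * n**2

print(problem_size(1024))  # 11585 (LinX reports 11530: it holds back some RAM)
print(problem_size(2048))  # 16384 (LinX reports 16331)
print(f"{linpack_flops(problem_size(1024)) / 1e9:.0f} GFlop per solve")  # 1037
```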
 