OCUK Official IC Diamond/ Perihelion Test Results

IMO you need the CPU to be stressed the same as in the original test: not more or fewer GFlops, but the same.

JCol, you are even stricter than I am lol :p:D. I think a range of 1-2 GFlops will be OK for practical purposes, as IBT tends to give slight variations in GFlops values even if your tests are highly accurate.

Psst I can't get exact GFlops values either lol:D
 
Just in the process of beginning my tests as well. Now where has that thermometer gone?

What other devices can I use to measure ambient/room temperature if I don't have a thermometer? Does Boots/Specsavers sell them cheap lol? :D
 
JCol, you are even stricter than I am lol :p:D. I think a range of 1-2 GFlops will be OK for practical purposes, as IBT tends to give slight variations in GFlops values even if your tests are highly accurate.

Psst I can't get exact GFlops values either lol:D

LOL, it's just that if I get them the same (GFlops) then I can say for sure whether this TIM is cooler.

I also reckon I didn't apply the TIM properly the first time.
Does this look OK?
02032011087.jpg


I have reapplied the TIM and am now just giving it time to cure.
 
Basically, when you have more or less similar GFlops values in IBT/LinX for both TIMs, you are ensuring that the CPU is processing/executing Gaussian elimination at a similar rate.
(Anyone who has done maths/engineering etc. will know what Gaussian elimination looks like: algebraic simultaneous equations solved by making use of matrices.)
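For illustration (my own toy example, not from LinPack itself, which solves systems thousands of equations deep), this is the kind of problem Gaussian elimination handles: write the simultaneous equations as a matrix, eliminate below the diagonal, then back-substitute.

```latex
% Two simultaneous equations, their matrix form, and one elimination step:
\begin{align}
  2x + 3y &= 8 \\
  4x + 9y &= 22
\end{align}
\begin{equation}
  \begin{pmatrix} 2 & 3 \\ 4 & 9 \end{pmatrix}
  \begin{pmatrix} x \\ y \end{pmatrix}
  =
  \begin{pmatrix} 8 \\ 22 \end{pmatrix}
  \quad\xrightarrow{R_2 \to R_2 - 2R_1}\quad
  \begin{pmatrix} 2 & 3 \\ 0 & 3 \end{pmatrix}
  \begin{pmatrix} x \\ y \end{pmatrix}
  =
  \begin{pmatrix} 8 \\ 6 \end{pmatrix}
  \;\Rightarrow\; y = 2,\ x = 1
\end{equation}
```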

Lower GFlops values mean the CPU is processing/executing at a slower rate, so the core temps will also be lower. Likewise, higher GFlops values mean the CPU is processing/executing at a faster rate, hence higher core temps. You can even ask fellow OcUKer C64 and he will confirm it. Whether the TIMs are good or not won't make much difference if you are getting different GFlops values in different tests: your TIM results will be inconclusive, because the dominant baseline factor is the CPU execution rate, which will overshadow the effectiveness of the TIMs.

So with similar GFlops in all tests you are ensuring the CPU is performing at a similar rate. Since that was the major variable and it has now been controlled, you can proceed to make a note of the temps. Then you can see which TIM is helping to reduce them.

For example:

Invalid Test

MX-3 or any other TIM: 33-35 GFlops in IBT/LinX
Average core temp = 58°C

IC Diamond: 40-42 GFlops in IBT/LinX
Average core temp = 66°C

Valid Test

MX-3 or any other TIM: 40-42 GFlops in IBT/LinX
Average core temp = 68°C

IC Diamond: 40-42 GFlops in IBT/LinX
Average core temp = 66°C

As you can see, in the invalid test the GFlops were different for the two TIMs and hence there was also a large difference between the core temp values. Anyone not familiar with the LinPack (IBT/LinX) tests would have assumed that MX-3 gave an average reduction of 8°C compared to IC Diamond, totally ignoring the GFlops values, which actually played the major role.

However, in the valid test, with a similar range of GFlops values, the results were much closer for the two TIMs and hence consistency was achieved. I'm not saying that IC Diamond is better than MX-3 or vice versa, as that is what we are trying to establish on this thread anyway :p, but you get the idea.
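A minimal sketch of that sanity check (my own illustration, using the 1-2 GFlops tolerance mentioned earlier; the helper name and the sample numbers are made up):

```python
# Only compare TIM temperature results when the GFlops achieved in both
# runs are close enough to call the CPU workload "the same".

def comparable_runs(gflops_a, gflops_b, tolerance=2.0):
    """True if the average GFlops of two runs differ by no more than
    `tolerance` (the 1-2 GFlops band suggested above)."""
    avg_a = sum(gflops_a) / len(gflops_a)
    avg_b = sum(gflops_b) / len(gflops_b)
    return abs(avg_a - avg_b) <= tolerance

# Example with numbers in the range of the "valid test" above:
mx3_run = [40.1, 41.5, 42.0]   # GFlops per IBT/LinX pass with MX-3
icd_run = [40.3, 41.0, 41.8]   # GFlops per IBT/LinX pass with IC Diamond

if comparable_runs(mx3_run, icd_run):
    print("Runs comparable: the temp difference reflects the TIM.")
else:
    print("Runs not comparable: rerun until the GFlops match.")
```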

It's sort of like Prime95: if you use IC Diamond for the Blend test and MX-3 etc. for the Small FFTs test and then compare the results, they will be invalid, since you are using two different tests for two different TIMs. If you use Small FFTs for both TIMs, then you can compare the temp results and the comparison will be valid.

The only difference is that the major variable in Prime95 is the nature of the test, whereas in Intel Burn Test/LinX it is the GFlops values, along with the amount of RAM used, which should also be the same for all tests to ensure the CPU is executing the same number of equations and hence the loading time is similar :).

Check my earlier post in this thread. The first compound looks like it caused thermal throttling, which produced the massive GFlops difference. The second test, with the same setup, produced a high sustained GFlops reading. Hence why I said the first must have been thermal throttling.
 
Check my earlier post in this thread. The first compound looks like it caused thermal throttling, which produced the massive GFlops difference. The second test, with the same setup, produced a high sustained GFlops reading. Hence why I said the first must have been thermal throttling.


Not necessarily throttling; Windows background tasks can cause large variances in the GFlops achieved.
The Windows Search and indexing services are a sod for this.
 
I'm not sold on the pea application method. The cross method seems to produce more consistent results because you can better gauge the amount of compound you're applying.
 
I'm not sold on the pea application method. The cross method seems to produce more consistent results because you can better gauge the amount of compound you're applying.

Exactly. IMO, if you haven't got excessive mounting force then I don't think the pea-sized blob and clamp-it-down approach works very well either.

The method I used was: a smallish blob in the middle, press the block/sink down with a circular motion until I felt the surfaces start to meet, then carefully clamp it down. This method yielded about 1°C better than the suggested method.
 
I didn't think thermal throttling on i7 (Bloomfield) CPUs occurred until after 100°C, to bring temps back into spec?
 
Check my earlier post in this thread. The first compound looks like it caused thermal throttling, which produced the massive GFlops difference. The second test, with the same setup, produced a high sustained GFlops reading. Hence why I said the first must have been thermal throttling.

Not necessarily throttling; Windows background tasks can cause large variances in the GFlops achieved.
The Windows Search and indexing services are a sod for this.

Mick is correct. Background processes can have an impact on the GFlops values, as they also require CPU processing power to keep running in the background. That's why it is recommended to disable as many background processes as you can, to free up more CPU computational power just for running LinPack (IBT/LinX).
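As one example of how that can be done (my own sketch, not a recommendation from this thread; run it as Administrator, and note that "WSearch" is the service name of Windows Search):

```python
# Stop the Windows Search service before a LinX/IBT run and restart it
# afterwards, using the standard `net stop` / `net start` commands.
import subprocess

def set_service(name, action):
    # action is "stop" or "start"; check=False so a failure just reports an error
    subprocess.run(["net", action, name], check=False)

set_service("WSearch", "stop")   # before testing
# ... run IBT/LinX here ...
set_service("WSearch", "start")  # after testing
```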

Thermal throttling is not caused by the TIM, nor is it a function of the TIM. The GFlops values are related to your CPU speed and are an indication of how fast or slowly your CPU processes/executes equations/instructions etc. compared to its theoretical maximum at a given speed.

For example

My CPU @ 3.4GHz: 16 x 3.4GHz = 54.4 GFlops theoretical maximum value

In IBT I usually get 43-45 GFlops, which is close to 80% of that maximum. This is another dimension to LinPack: to be considered IBT/LinX stable you have to hit at least 70-80% of the theoretical maximum, otherwise your CPU won't really be called IBT/LinX stable.
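A minimal sketch of that calculation (my own illustration; the factor of 16 in the figure above is assumed here to be 4 cores x 4 double-precision FLOPs per clock, which is what 16 x 3.4GHz = 54.4 implies):

```python
# Rough theoretical-peak and efficiency calculation for a LinPack run.
# Assumption: 4 FLOPs per cycle per core, matching the 16 x 3.4GHz figure above.

def theoretical_peak_gflops(cores, ghz, flops_per_cycle_per_core=4):
    return cores * flops_per_cycle_per_core * ghz

def efficiency(measured_gflops, cores, ghz):
    return measured_gflops / theoretical_peak_gflops(cores, ghz)

peak = theoretical_peak_gflops(cores=4, ghz=3.4)          # 54.4 GFlops
eff = efficiency(measured_gflops=44.0, cores=4, ghz=3.4)  # ~0.81
print(f"Peak: {peak:.1f} GFlops, efficiency: {eff:.0%}")
```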

You could run 100 passes of LinX and pass them all while getting low GFlops values of, say, 20-25 GFlops at 3.4GHz. However, if you ran it again after tweaking memory settings and achieved, say, 44 GFlops, which I usually get at this speed, you may find that LinX crashes after just one pass. That would mean your CPU requires more vcore to be stable at the faster execution rate :).

Although GFlops values aren't related to the amount of RAM you use, bear in mind that using the maximum amount of memory doesn't always equate to high GFlops values, as virtual memory can also play a part during your test and cause your CPU to slow down, hence lower GFlops. This can happen when cached data isn't released by Windows.

Since you have some free memory, some taken by cached data, some by the operating system, some by background processes and the rest by device drivers, if the amount of memory you request in the test is greater than the free amount and Windows preserves the cached data, the only alternative is for virtual memory to make up the difference, so your GFlops values may end up lower, which could explain the drop you attributed to thermal throttling.
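To put rough numbers on that (my own sketch, not from the thread): LinPack works on an N x N double-precision matrix, so a run needs roughly 8 x N^2 bytes, and you can size the problem to stay inside free physical RAM.

```python
# Rough mapping between LinPack (IBT/LinX) problem size and memory use.
import math

def linpack_memory_gib(n):
    """Approximate memory for an N x N double-precision LinPack matrix."""
    return 8 * n * n / (1024 ** 3)

def max_problem_size(free_gib):
    """Largest N whose matrix fits in `free_gib` of physical RAM."""
    return int(math.sqrt(free_gib * (1024 ** 3) / 8))

print(f"N = 25000 needs ~{linpack_memory_gib(25000):.1f} GiB")   # ~4.7 GiB
print(f"With 4 GiB free, max N is about {max_problem_size(4)}")  # ~23170
```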

I would suggest you read through some of my posts, especially posts 63, 64 and 71 on the last page of this thread, and also the actual article on LinPack by the original author Perspolis, which I also posted in the thread, to gain a better understanding of LinPack (IBT/LinX) :).

http://forums.overclockers.co.uk/showthread.php?p=18449267#post18449267

http://www.overclock.net/intel-cpus/645392-how-run-linpack-stress-test-linx.html


Edit: Thermal throttling actually occurs when the CPU core temps go past a safety limit AFAIK, as W3bbo mentioned, and is a hardware-based function :)
 
Check my earlier post in this thread. The first compound looks like it caused thermal throttling, which produced the massive GFlops difference. The second test, with the same setup, produced a high sustained GFlops reading. Hence why I said the first must have been thermal throttling.

May I ask what your GFlops values were in both tests?
 
Received mine today, thank you very much IC.

Will test tomorrow when I have time. I have a question though; I'm currently on stock cooler, so will that provide enough pressure for the Diamond or Perihelion to work optimally?
 
Q9450 @ 3.4GHz
Cooler: Contact 29
P35-DS3R
Antec 300

Windows 7 x64
Pre-IC7 compound: Thermaltake (supplied with cooler)
Ambient temp: 26°C
Idle temp: 40°C
Load temp: 66°C

IC Diamond
Ambient temp: 27°C
Idle temp: 39°C
Load temp: 63°C

Looking good :) Will try again tomorrow once it's had time to cure off, as this was taken straight after installation (well, 1/2 hour or so).
 
Your compound pattern is fine, as the centre of the CPU IHS is well covered. As IC Diamond mentioned, apply a bit more and it should cover the IHS even better :). I don't even get this pattern, as the base of my heatsink is convex.

Using too little is a concern, as air entrapment affects performance and reliability.

http://innovationcooling.com/applicationinstructions.htm


You cannot use too much paste; we have tested this and the excess just squishes out. You could even immerse the mounting setup in a bucket of compound and get the same result, as the clamping force far exceeds any local ambient pressure or fluid spreading drag or resistance.

Better to use a little more on the first mount to eliminate the potential of a failed mount, the TIM wasted on a rework and the time spent doing so.


How I determined the cure time on ICD was to take two glass slides and a lapped synthetic IHS, clamp them together @ 50 lbs, then measure the thickness with compound, unloaded and loaded, with a micrometer.

Then I timed the spread, i.e. the time for the joint to settle down to the average particle size at room temperature, which takes about 2 hours; hence we spec 2 hours in operation before taking a reading.

2 hrs @ 50 PSI will get you 95%; overnight, 98%, worth maybe at most another half a degree.

@ 70 PSI, probably 1/2 hour.

@ 20 PSI, maybe a month, or perhaps never.

Just speculating, but that might be one of the reasons AS5's recommended 200-hour cure time exists: to cover worst-case scenarios for reaching the optimum BLT (bond line thickness).
 
I'm not sold on the pea application method. The cross method seems to produce more consistent results because you can better gauge the amount of compound you're applying.

The X method works great, entrains no air, and the final results are the same as an applied bead. My preference for the bead-on-centre method is more about economy for the end user, as people tend to use more with X applications. But that might be my bias, as I have infinite amounts at my disposal and always slap on enough without worrying that I need to ration it.

I did not think it necessary as it seems straightforward to me, but we could put a window on the label and spec something like 0.15-0.2 ml per 30 x 30 mm.
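For a sense of scale (my own arithmetic, not an IC figure), spreading that dose evenly over a 30 x 30 mm IHS before clamping works out to a layer of roughly:

```latex
\frac{0.15\ \text{ml}}{30 \times 30\ \text{mm}^2}
  = \frac{150\ \text{mm}^3}{900\ \text{mm}^2} \approx 0.17\ \text{mm},
\qquad
\frac{0.2\ \text{ml}}{900\ \text{mm}^2} \approx 0.22\ \text{mm}
```

i.e. a pre-clamp layer of a couple of tenths of a millimetre, which the mounting pressure then squeezes down towards the particle-size bond line described above.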
 
Received mine today, thank you very much IC.

Will test tomorrow when I have time. I have a question though; I'm currently on stock cooler, so will that provide enough pressure for the Diamond or Perihelion to work optimally?

Some stock coolers are great. Give it a shot.
 
As long as we are into the nuances of mounting and application, this is something that came out of a tech support question in the last two weeks.

"with many mounting systems it is required that you lock down one side then the other side and when doing so causes the compound to flow from the closing gap to the unloaded side"

Some of the paste impressions I have looked at in the past I attributed to the sink being unbalanced on mounting, but after looking into this, some of those may have been down to the mounting procedure.


I had to test this with some slides, and if you are a few seconds quick with it the compound will backfill and give good coverage with the bead-on-centre method.

If you are slow in mounting, it backfills less and might leave as much as 20-30% of the initial load side bare.

A corner-to-corner "X" pattern, after a couple of tries, left some air in the mount.

A cross "+" pattern, side to side through the middle, worked great.

A "V" pattern with the apex on the initial load side worked great.

A solution I have not tried is locating the bead off-centre, towards the initial load side, to compensate. Probably the simplest would be to press down on the sink for a few moments to spread the compound a little before locking it down.

Not a major point, just one of those details that you encounter from time to time.
 