Overclocked 5GHz Sandy Bridge datacentre build-out...

Cheers - ended up ordering some Corsair Dominator GT 4GB (2x2GB) PC3-17066 (2133MHz) - 1T is fine.

An interesting project, considering my main interest is usually networks. Will let you know the outcome of it - as a test lab I've got the following:-

Intel Core i7-2600k Sandy Bridge processor
Asus Maximus IV Extreme Intel P67 motherboard
Corsair Dominator GT 4GB (2x2GB) PC3-17066 (2133MHz) RAM Kit
Corsair Gold AX850 High Performance 850W power supply
Noctua NH-D14 processor cooler
IC Diamond 24-Carat Thermal Compound

Let's see how fast/stable I can get it eh? :)
 
Sorry, forgive me for sounding a bit off here,

shouldn't you be looking at enterprise hardware for mission-critical stuff like this?
 
We'd love to, but to put it simply, an overclocked SB chip will run a single-threaded application faster than anything else at the moment, Xeon-based enterprise hardware included.
 
Not sure if this has already been highlighted, but:

1. You do not want overclocking in a server-based environment (particularly a service-industry environment), due to the increased likelihood of failure etc... Dealing with customers' data... Enough said. However, it's your company, so you could overclock if you wanted...
2. Watercooling + server farm = terrible idea. No server farm is going to let you rack when you're running watercooling.

From a cost point of view SB is cheap for clock speed, even at stock clocks... Also its IPC is better than pretty much all current CPUs... so you could run your poorly optimized software on lots of machines and actually save money.
e.g. (SB 2500 + P67 board) x 3 is probably the same price as one of your Xeon rigs...
 
Thanks lollyhayes... appreciate the points... the water-cooled idea was something we were considering, but only within the context of something like an H70 "hybrid" cooler - we'd definitely not be going full hydro as per your note.

Also, your first point is valid; however, throughout the rest of the thread there are plenty of references as to why we do want to run an overclocked environment...
 
Perhaps get your C++ devs to make use of the AVX instruction set in your apps as well (rough sketch at the end of this post)...
this would be the only reason that SB would be any better than any other enterprise platform.

Edit - since grabbing Win 7 SP1 (which added OS support for the AVX state) and running LinX, the GFlops have gone from 67 to 135 GFlops @ 4.6GHz - AVX-enabled Linpack doubles the floating-point throughput per cycle.

I think this would be applicable to Linux too; as far as I'm aware, provisions for AVX have existed since 2009? What distro are you running, by the way? Fedora?
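
To illustrate the AVX point - a minimal sketch using compiler intrinsics, where the function name and data layout are made up for illustration and n is assumed to be a multiple of 8 (build with -mavx on GCC/Clang):

#include <immintrin.h>

// Hypothetical hot loop: one multiply and one add over eight
// packed floats per instruction.
void scale_and_add(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);                   // load 8 floats
        __m256 vb = _mm256_loadu_ps(b + i);
        __m256 vr = _mm256_add_ps(_mm256_mul_ps(va, vb), vb); // a*b + b
        _mm256_storeu_ps(out + i, vr);                        // store 8 results
    }
}

The doubling of vector width over SSE is where that LinX GFlops jump comes from.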
 
I'm with Damo on this. Sometimes it's just not possible to utilise multithreading. Some apps just require pure grunt on a single thread. It's not that the coders are lazy. Some procedures just don't work well if split into parallel chunks.

A basic example, sketched in C++:

double readlivecsvdata(); // blocking read of the next value from a live feed

double z = 0;
for (int x = 0; x < 1000; ++x) {
    double y = x * 2;
    // each iteration has to wait on the live feed before it can continue,
    // so the loop is inherently serial
    z = y + readlivecsvdata();
}

Multithreading is only useful if you have many procedures all pulling / pushing data. If you want a procedure to have maximum possible throughput, single threading it is.

This is why I ran a Phenom with three cores disabled. Some said that was stupid, but it is the best way to get max clock for single-threaded apps.

Enterprise is only needed when you need total reliability, so that end users see little downtime. For research / number crunching you don't have to get ripped off. Knock up something that may, or may not, last a few months at maximum speed.

I'm going to try a 2500K too. I have gone for the H50, but replacing the stock Corsair fan with two others in push/pull.
 
I'm with Damo on this. Sometimes it's just not possible to utilise multithreading. Some apps just require pure grunt on a single thread. It's not that the coders are lazy. Some procedures just don't work well if split into parallel chunks.

Exactly the point. As mentioned, we measure reaction latency in nanoseconds - multithreading costs valuable clock cycles in synchronisation overhead, and is therefore useless to us.
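
(For anyone curious what measuring at that scale looks like, a rough sketch using std::chrono - do_reaction() is just a hypothetical stand-in for the hot path:)

#include <chrono>
#include <cstdio>

volatile int sink = 0;
void do_reaction() { ++sink; } // hypothetical stand-in for the hot path

int main()
{
    auto t0 = std::chrono::steady_clock::now();
    do_reaction();
    auto t1 = std::chrono::steady_clock::now();
    auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);
    std::printf("reaction took %lld ns\n", static_cast<long long>(ns.count()));
}

A single reading at this resolution is noisy, so in practice you'd sample many iterations and look at the distribution.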
 
I know this has already been done to death, but just for my own interest:

Is loss of reliability the only potential downside of overclocking here?

I thought that overclocking also increases the likelihood of the CPU returning wrong answers to computations, which would screw up the results of this processing. Is that right?

Also, let us know how this goes - I'll be really interested to see how well the clocking goes.
 
I can neither confirm nor deny. Care to mention a name?

So cryptic! I went to a careers fair and one of the companies I was talking to (the one I mentioned earlier, mainly financial services) was talking about how they measure response time as the most important factor etc... and it sounds very like what you're describing.

They said they mainly used C# though, I think - they're called Gloucester Research Ltd.
 
Enterprise is only needed when you need total reliability, so that end users see little downtime. For research / number crunching you don't have to get ripped off. Knock up something that may, or may not, last a few months at maximum speed.

I'm going to try a 2500K too. I have gone for the H50, but replacing the stock Corsair fan with two others in push/pull.

He said it was handling important financial data - one area where you really don't want the increased chance of data corruption and the numeric imprecision that can happen with overclocked consumer CPUs. Just because it passes Prime95 doesn't mean you've necessarily got a stable overclock, or that it will continue to be stable - there may still be some operations that the CPU will slightly miscalculate.

There's a reason why large corporations, especially those in financial areas, typically use high-end IBM solutions for critical data, and why this is a really terrible idea if the data he's crunching is in any way important.
 
He said it was handling important financial data - one area where you really don't want the increased chance of data corruption and the numeric imprecision that can happen with overclocked consumer CPUs. Just because it passes Prime95 doesn't mean you've necessarily got a stable overclock, or that it will continue to be stable - there may still be some operations that the CPU will slightly miscalculate.

There's a reason why large corporations, especially those in financial areas, typically use high-end IBM solutions for critical data, and why this is a really terrible idea if the data he's crunching is in any way important.

If it blows up, we lose the possibility of making money; we don't actually *lose* money.

Large corporations are exactly that: large, lumbering corporations. I'd love to divulge more, but I really can't - needless to say we do OK when it comes to making money out of our business model. ;)
 
I'm not sure what you're doing... but if a calculation returned, say, 1.0774 instead of 1.0775 because of the overclock - which can and does happen - it could skew whatever you're crunching. That's probably not so important if the data is only used to inform decisions, but if the data is directly financial it could have massive implications.
 
Let's just take it as read that Damo and the company he works for are well aware of the technical risks, and deal with the topic at hand.

Damo, if you really want to find a stable O/C in the 5GHz region you need to be looking at CPU batches and seeing if you can order matching ones from your supplier. Most of the mid/high-end P67 motherboards will handle high overclocks much the same; with Sandy Bridge it's really down to finding golden-sample batches for high GHz at low voltage.
 
I don't agree that something would return a different figure just because it was overclocked. If there was data corruption from an overclock there would be something more catastrophic than a wrong figure computed. I'm not saying it cannot happen, and I am sure someone will give an example, but if you run your car at 10,000rpm continually it is more likely that the cam belt will snap or the piston rings will blow than that the radio plays the wrong channel.

---

If I had to take a guess, the boxes are to be used for some kind of arbitrage or hedging. Bots scanning for price changes, previous data checked, an optimum buy / sell / back / lay price calculated, transaction requested on a third-party site ;)

This would not need enterprise. You need the fastest practical o/c'd box you can get.
 
Sounds like fun, Damo.

I run a lot of distributed computing apps and consider stability of critical importance. For me, 100 or so loops of a Linpack test (search "Intel Burn Test 2.5") at a high setting is as close to 100% certain as you can get.

Enjoy.
 
I don't agree that something would return a different figure just because it was overclocked. If there was data corruption from an overclock there would be something more catastrophic than a wrong figure computed.

Nope, it could literally be just one wrong operation. That's why Linpack is so sensitive to failure: it solves many thousands of simultaneous equations many times over, so a single miscalculation shows up in the results.
 
If I recall correctly, in Windows NT one could manually set an affinity between an application thread and a CPU core. Something Linux must be capable of?

Although your mention of a heavily modded *nix environment makes me doubt that :)
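
(For reference, Linux can do this through sched_setaffinity(2), or from the shell with the taskset utility, e.g. taskset -c 2 ./app. A rough sketch of pinning the calling thread to one core, assuming glibc/Linux - pin_to_core is just an illustrative name:)

#include <pthread.h>
#include <sched.h>

// Pin the calling thread to a single core so the scheduler never
// migrates it. Linux-specific; g++ defines _GNU_SOURCE by default,
// which these macros need.
void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &set);
}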

Regarding CPU - you'd be better off with the 2500K; the Hyper-Threading of the 2600K is wasted money, especially at a 50-unit volume.

Cooling. Air coolers are limited by the ambient temps of the room. If the room is kept cool, the Noctua would be the perfect choice. If not, the H70 may be better.

As many have mentioned, 4.5GHz stable is more likely; only a small percentage of chips reach 5GHz. From the Asus testing labs:

Results are representative of 100 D2 CPUs that were binned and tested for stability under load; these results will most likely represent retail CPUs.

1. Approximately 50% of CPUs can go up to 4.4~4.5 GHz
2. Approximately 40% of CPUs can go up to 4.6~4.7 GHz
3. Approximately 10% of CPUs can go up to 4.8~5 GHz (50+ multipliers are about 2% of this group)

Additionally, it is recommended to keep the C1E and EIST options enabled for the best overclock scaling. This is different from previous Intel overclocking expectations, where the best scaling came with power states or power-management options disabled.

Full official guide here. You may find it very useful.
 
Cooling. Air coolers are limited by the ambient temps of the room. If the room is kept cool, the Noctua would be the perfect choice. If not, the H70 may be better.

Oh yes, I was going to mention this too. It's very likely to be in a climate-controlled room, yes? In which case any decent air cooler will do a good job.
 