Overclocked 5GHz Sandy Bridge datacentre build-out...

ECC is designed to ensure errors don't creep into results, something that non-enterprise CPUs and memory don't offer (especially when overclocked). It's there for a very good reason!

Possibly there is an argument that, in order for this company to have a competitive advantage, they need to do things differently from everyone else. If they run a traditional enterprise setup, it will be too slow, in the sense that it won't separate them from more established, less agile operators in the same markets.

Lots of companies over the years have been very successful by taking risks and doing things that their rivals aren't. Take OcUK for example: they were one of the first UK retailers to sell guaranteed overclocked processors. Not a 'by the book' approach to the industry, but it gave them an edge, and thus I became a customer for over 10 years.
 
Read through the thread and am very interested to see what happens. As for Sandy Bridge: I had a 2500K and, no matter what, couldn't get it Prime stable for more than a couple of hours, even after raising the vcore by over 0.15V beyond what was LinX/IntelBurnTest stable. Got myself a 2600K and am working on 5GHz stable at 1.4V; it's going very well. It has been said previously that not all processors are stable over 4.6GHz, and I have read this in various places.

One thing I would be interested in trying, since you only need one core: keep three cores at the stock multiplier and overclock only the most stable core (this can be done on all of the Asus P67 range I think; it certainly works on my P8P67 Deluxe). That way you can find your best core, which will also run at the lowest voltage.
 
Good point. However, it seems we live in a no-risk world these days, and those who are prepared to take risks are sometimes seen as crazy, or even heretics to the religion of health and safety! Good luck to the OP though; with risk comes reward...
 
Excellent tip - will DEFINITELY try this. Many thanks! :)
 
Get some real C++ devs to sort out multithreading for you, or at least let the program scale. You could use VMs to utilise the other cores...

No watercooling in data rooms, imo. If clock speed really is so crucial, I think it's laughable to be overclocking 2600Ks for business purposes (for a 10,000-100,000% gross). Better off sticking with more 2600Ks/second-hand i7s in flat ATX cases, stacked up.

Or in a large skeleton rack with pull-out shelves. Order five RAM/CPU/mobo bundles to start with and add more as you gain capital.

Pretty simple, if you think it through for yourself.

You may want to check out this article. The company provides high performance servers for financial applications.

Also, I seem to remember reading about IBM rolling out overclocked watercooled systems; I found this article here. While that article details supercomputing applications, you may also want to check out this article, which details commercial applications from HP etc. You can read more about HP's solution here.

In any case, I would strongly avoid going with a custom solution, given that support contracts for custom builds are not geared up for the enterprise. Going with a proven solution backed by a reputable brand would be much better; or, as suggested above, work harder on making the program multi-threaded.
 
There are some specialist companies providing pre-overclocked servers precisely for the kind of thing you're doing. I read some articles about them on The Register recently.

Really sounds like fun. Seems a shame your colo host won't allow water. There are some specialist racks designed for water; IBM do one with an interesting twist, in that the water cooling is not for the server itself but cools the air leaving the server, reducing the air-conditioning bill for the datacentre.

Article here
 
4.7-4.9GHz on an i7 2600K should realistically be achievable at good thermal levels. +1 for air cooling and the Noctua NH-D14; those small closed-loop water coolers never work as well as you think they might, and are at best in line with the best air solutions.

Chipset temps are not going to be an issue either, so don't worry about any £300 boards lol.

If this whole deal is latency-dependent, don't you want lower-latency RAM than you have selected? Latency-wise I mean, i.e. CAS 6.
 
Hey there all :)

N00b here, so appreciate all pointers. Hoping you ninjas of overclocking might help. ;)

Basically, I work for a financial services company, and we have some pretty important business applications that make us cash and are, quite simply, limited by CPU speed. We presently have these deployed on a lot of Xeon servers running at 3.46GHz; however, we're very keen on implementing them on overclocked Sandy Bridge platforms in a data centre environment. Probably looking to deploy about 50 or so of them.

A brief outline: they run Linux, so no video or GPU processing matters, and we're looking to achieve as high a stable overclock as possible. That last point is the whole point: raw clock speed (and stability) is what's most important to us, and price isn't an issue. So what I'm hoping you might help with is combinations of hardware you've used that have given you a consistent and stable OC in the 5GHz range. Thus far I'm thinking something along the lines of:

Intel Core i7-2600k Sandy Bridge processor
Asus Maximus IV Extreme Intel P67 motherboard
Corsair Dominator 4GB (2x2GB) PC3-16000 (2000MHz) ***OR*** G.Skill RipJawsX 4GB DDR3 PC3-17000C9 2133MHz Dual Channel Kit

In this case I'm thinking cooling is going to be critical, and I haven't got much of an idea on this front, though I've been thinking the Corsair H70 might potentially be of use? Unfortunately the data centres we're looking at won't let us put a full hydro setup in there, so a self-contained hydro solution might be a good approach; then again, space constraints around the mobo (if I go with the Dominator GT RAM with the fans on top) might come into play. Also, data centres are controlled environments: is an air solution possibly better than a hydro CPU cooler?

Anyway, I'm getting further and further into it. Again, appreciate all comments - thanks in advance for your help!

Cheers,

Damo :)

Sorry, but are you nuts or just winding us up? You do NOT run business-critical money-making apps on an overclocked platform, unless you like being fired when anything at all goes wrong with it (regardless of whether it had anything to do with the overclock, you WILL get the blame). :eek:

Additionally, throwing processing power at **** code isn't cost-effective; if the application is business critical, get it optimised asap!
 
Andy, did you read any of the article I linked? Yes, they do run multiple copies of apps which are looking for tiny margins that may only exist for fractions of a second. The faster your computer, the more chance it has of reacting to that nanosecond-long margin and making a trade. This is not your normal "business-critical" application. There will be lots of these, each one looking at the market as a whole.

In the money markets, this is normal behaviour. Open your mind.
 
I fear this will be of no use to the OP, and no one else will bother to read it. Good luck to the OP; minimising latency is a hell of a lot harder than maximising throughput. Nevertheless:

All computers make mistakes, including server-grade hardware with error-correcting code. The vast majority of numbers cannot be stored exactly in memory, and are instead stored as two binary numbers of finite length: a mantissa and an exponent. Consequently, the vast majority of operations accumulate a rounding error which is difficult to avoid. Next up, the code we write doesn't reflect reality all that well; in my case that manifests as truncation errors from ignoring part of the Taylor series. In the OP's case, I suspect it reflects the difficulty of modelling finances, which makes for ill-conditioned equations with discontinuous boundary conditions.

Even if my computer is running perfectly, and by sheer luck the hard drive doesn't make an error, and neither does the RAM or the CPU, I still have to balance truncation errors against rounding errors (and against computation time). As I'm not using server-quality hardware, I get all the crap that comes with consumer grade as well. The hope is that the code I write succeeds in damping out spurious errors.

In summary, don't assume that computers are arithmetically perfect. They aren't. Even stock speed ones working as they're supposed to get things wrong.
 
Erm, no. 4.4GHz is still ridiculously slow for us. To give you an idea, we've noticed a change in our latency of around 5% just from the difference in clock speed between 4.8GHz and 4.9GHz; 4.4GHz doesn't even compare.

Erm, no: the 12MB of L3 cache alone will mean IPC has increased over the standard socket 1366 Core i7 processors (three times the L3 cache per core), and if you are overclocking (which seems daft considering the usage) you will be looking at a higher clock speed too. BTW, the highest-rated part is 4.66GHz, so it does seem you have not even read the article. You have also forgotten that the dual-core Xeons are made on the same 32nm process as the socket 1155 processors. It also means you can use the existing infrastructure.

BTW, how long do you intend to test the stability of the overclocks?
 
IPC is inter-process communication, which is completely irrelevant to us, as we're running a single-threaded app. Also, the 12MB of cache is of interest, but only in instances where we're utilising more than one core per instance of our application, i.e. 4 cores: 3 x our application, 1 core dedicated to the OS.

As for stability, I'm running Intel Burn Test/Linpack for at least 8 hours.
 