Post your hard drive benchmarks!

The stripe size I was using for the testing above was 128k, and both read and write caches were enabled (as far as I know :))

I have a Fujitsu 15k SAS drive myself and that was maxing out around 107-108MB/sec on my Adaptec controller. I did have an LSI model, but that wasn't so good..

Also trying to work out why the scaling beyond 5 drives (up to 8) isn't as good as it is up to 5. Going from 570MB/sec with 5 drives to only 615MB/sec with 6, something isn't adding up to me...
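As a rough sanity check, here's the back-of-the-envelope scaling maths in a quick Python sketch (the per-drive figure is only what the 5-drive result implies, not a measured number):

[code]
# Back-of-the-envelope linear-scaling check - illustrative only.
per_drive = 570 / 5  # ~114MB/sec per drive, implied by the 5-drive result

for drives in range(5, 9):
    expected = per_drive * drives
    print(f"{drives} drives: ~{expected:.0f}MB/sec if scaling stayed linear")

# 6 drives "should" land around 684MB/sec, so 615MB/sec points at a
# bottleneck somewhere other than the drives themselves.
[/code]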
 
Cheers, I'm thinking maybe my motherboard isn't up to the task (nForce 650i chipset). Could you be hitting a limit on the PCI Express slot that the Adaptec controller is connected to?
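For what it's worth, the theoretical PCIe 1.x numbers are easy to work out (rough Python sketch; real usable bandwidth is lower once protocol overhead comes off, and the ceiling depends on what link width the card actually negotiated):

[code]
# Rough PCIe 1.x figures (2.5GT/s with 8b/10b encoding) - illustrative only.
LANE_MB_S = 250  # ~250MB/sec per lane, per direction, before overhead

for lanes in (1, 4, 8):
    print(f"PCIe 1.x x{lanes}: ~{lanes * LANE_MB_S}MB/sec theoretical per direction")
[/code]

So an x8 card that has only negotiated an x4 (or x1) link would be worth checking; a full x8 link shouldn't be the limit at these speeds.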
 
Well, an adaptor card will make a heck of a difference, but it all depends on what you're using it for. I was told that for sheer throughput you should get an adaptor card with no processor or cache and stripe it in Windows (or whatever OS you're using). Apparently my controller is meant for RAID 6 and big-ass arrays :lol:
 
I'm using a Dell PERC 5/i at the moment (a rebadged LSI model) and I've seen most people get similar results to you, but mine look a bit low :rolleyes:. Maybe it would run better on an Intel chipset board (my current board is an nVidia 650i).
 
I don't know if the chipset as such makes much of a difference, but I know different RAID controllers can make a huge difference.

Between the LSI and the Adaptec model I have, there was a fair bit of difference in some benchies, and in others it was huge...
 
My 2x Samsung 64GB MLC SSDs on Intel RAID 0:

[benchmark screenshot]
 
Not sure why it's that high. The VM had plenty of headroom. Unless the software wasn't reading it correctly.

Andy

Seen this before on our VMs - it was caused by the shares and DRS moving the VM.

Up the CPU and memory shares to high and run it again; it will be fine.
Our VMs' disks are presented from an EVA8100 array - I'll do a benchy tomorrow.
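If you'd rather script the shares change than click through the vSphere client, something along these lines should do it with pyVmomi (just a sketch - it assumes you've already connected to vCenter and looked up the VM object, and set_shares_high is only an illustrative helper name):

[code]
from pyVmomi import vim

def set_shares_high(vm):
    """Bump CPU and memory shares to 'high' on an already-located VM object."""
    spec = vim.vm.ConfigSpec()
    spec.cpuAllocation = vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level='high'))
    spec.memoryAllocation = vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level='high'))
    return vm.ReconfigVM_Task(spec=spec)  # returns a task you can wait on
[/code]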
 
Good price/performance, but it can't touch the 1TB Seagate 7200.12 for read performance - though that's almost £20 more expensive.
 
The 7K1000.B 1TB drives actually use 375GB platters, so the 1TB model is a 3-platter drive.
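(That's just the capacity divided by the platter size, rounded up:)

[code]
import math
print(math.ceil(1000 / 375))  # 1000GB / 375GB per platter = 2.67, so 3 platters
[/code]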

Yeah, I read about a 2-platter drive, the 7200.12 1TB, but considering it's the cheapest drive per £/GB on the market, from the manufacturer who was first out with a 1TB drive, I'm still happy. It is, after all, still the fastest drive in my system.
 
Write caching is enabled - AFAIK you need an Intel controller to really get those silly-high burst rates.

EDIT: Tried with it disabled and the benchmark was actually marginally worse.
 