eSATA Storage Enclosure

A friend needed storage before me and asked me to build it, so I got a get-out-of-jail-free card (build his first and see how it goes).
He opted for a 10-bay Netstor, with 8x 2TB Samsung F4s and a HighPoint RocketRAID 622. £950 delivered.
I will feed back performance in RAID 0 and 5: read/write/rebuild.
Bay for bay, the 10-bay Netstor is 20% more than 2x StarTechs, but it is more portable, probably more energy efficient, possibly quieter, and cheaper in the long run if a 9th or 10th disk is required.
The 622 specs suggest it can support 10 disks (so it matches the Netstor better than the ~2.5 StarTechs it would take) and also that it supports online array capacity increase.
Two 622s cost less than one 644, so as long as you have two PCIe slots, aren't doing massive data transfers between the two cards, and don't need a 20-disk array, they are probably just as good.
 

Ok, so you are looking at the HighPoint 622 (two eSATA ports on PCIe 1x = 500MB/s, up to 10 drives) hooked up to a 10-bay dual-eSATA Netstor case. The case is just a 'dumb box' with hot-swap bays, and the two links mean you will be able to pull around 60MB/s from each drive if all are running simultaneously (SATA II = 300MB/s x 2 links / 10 drives). The PCIe connector will be the potential bottleneck, only allowing 500MB/s. As long as the unit is unlikely to house SSDs it should be fine, but it will not get top speed.
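
A quick back-of-the-envelope of those numbers in Python, for anyone wanting to check the arithmetic (nominal link rates only, not measured throughput):

# Per-drive bandwidth estimate for the RR622 + 10-bay Netstor layout above.
# Nominal link rates only; real-world throughput will be lower.
SATA_II_LINK_MBPS = 300   # MB/s per eSATA link (SATA II)
ESATA_LINKS = 2           # the 622 has two eSATA ports
PCIE_1X_MBPS = 500        # MB/s ceiling of the PCIe 1x connector
DRIVES = 10               # drives behind the port multipliers

per_drive_links = SATA_II_LINK_MBPS * ESATA_LINKS / DRIVES
per_drive_pcie = PCIE_1X_MBPS / DRIVES
print(f"Per-drive share of the two eSATA links: {per_drive_links:.0f} MB/s")  # 60
print(f"Per-drive share at the PCIe 1x cap:     {per_drive_pcie:.0f} MB/s")   # 50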

Connecting two 622s will negate the PCIe bottleneck and give you 4x eSATA ports, so two spare for future expansion, but, as you mention, it will use two PCIe 1x slots.

The 644 has four eSATA ports and uses PCIe 8x, so it doubles the PCIe bandwidth per eSATA port, which is why it is probably more than twice the price of the 622. It also only uses one PCIe slot, but you need an 8x slot free to maximise its potential.

It would be hard to build something for less, including the 8 drives, though :).

RB
 
Thanks

The case is just a 'dumb box'

My reading of this is that the dumber the box, the less likely there is to be an artificial ceiling on the supported disk size? Wishful thinking? The manufacturer states no maximum disk size in their specifications.

When I am playing around benchmarking rebuild speeds, comparing RAID 0 and RAID 5 (I haven't decided whether I want the parity or not yet), do I need data populated across the entire array to properly test the speed, or is the rebuild speed of an array independent of the data on it? As long as the RAID 5 read/write/rebuild isn't diabolical I will probably go RAID 5. What benchmark tools are best? I've seen ATTO mentioned most recently.

Given the constraints of the PCIe bus, the two eSATA cables and the application, I am happy to go cheap on the RAID card. For fun I am guesstimating average MB/s across 8 disks as RAID 0: 300 read, 250 write; RAID 5: 300 read, 125 write. I will be pleasantly surprised if I get anywhere near the 500MB/s limit of the bus or more impressive RAID 5 performance (and disappointed if I can't sustain the native speed of an equivalent single disk, i.e. 75MB/s r/w).

Any experience of using FlexRAID rather than a hardware controller?
 
My reading of this is that the dumber the box, the less likely there is to be an artificial ceiling on the supported disk size? Wishful thinking? The manufacturer states no maximum disk size in their specifications.

Yes, you are not being restricted by any RAID capabilities on the port multipliers and are at less risk of failure. Dumb boxes are not always bad boxes ;).

When I am playing around benchmarking rebuild speeds, comparing RAID 0 and RAID 5 (I haven't decided whether I want the parity or not yet), do I need data populated across the entire array to properly test the speed, or is the rebuild speed of an array independent of the data on it? As long as the RAID 5 read/write/rebuild isn't diabolical I will probably go RAID 5. What benchmark tools are best? I've seen ATTO mentioned most recently.

Your stripe size will dictate how much data you need to have written in order to spread across the whole array with RAID 0 or 5. Put a movie on it and you will be on all the disks easily. I was looking at alternatives for you based on various parts, including imported bits from the US, and one SATA bridge (1 eSATA -> 5 internal SATA) reported a RAID 5 rebuild speed of 200GB/hour. That should give you a ballpark figure.
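
To put a number on that rebuild rate (a minimal sketch, assuming the 200GB/hour figure scales to a 2TB member drive):

# Ball-park rebuild window from the 200GB/hour figure quoted above,
# assuming the rebuild walks the full capacity of one 2TB member drive.
REBUILD_RATE_GB_PER_HOUR = 200
DRIVE_SIZE_GB = 2000

hours = DRIVE_SIZE_GB / REBUILD_RATE_GB_PER_HOUR
print(f"Estimated rebuild window: {hours:.0f} hours")  # ~10 hours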

Given the constraints of the PCIe bus, the two eSATA cables and the application, I am happy to go cheap on the RAID card. For fun I am guesstimating average MB/s across 8 disks as RAID 0: 300 read, 250 write; RAID 5: 300 read, 125 write. I will be pleasantly surprised if I get anywhere near the 500MB/s limit of the bus or more impressive RAID 5 performance (and disappointed if I can't sustain the native speed of an equivalent single disk, i.e. 75MB/s r/w).

Any experience of using FlexRAID rather than a hardware controller?

Two thirds to three quarters of the theoretical maximum is usually a good figure for RAID 0 from what I have seen. It depends on usage and data, of course. It also depends on where you are copying the data to. If the destination is a single 7.2k drive then you will not be able to run the array at full speed. This makes benchmarking the arrays tricky for real-world usage, as the results depend on both ends of the data chain.
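
As a rough sanity check on those guesstimates, a minimal sketch: the 75MB/s per-disk figure and the 2/3-3/4 efficiency range come from this thread, everything else is nominal, so treat the output as illustrative only.

# Rough RAID 0 streaming estimate: per-disk speed x disk count x efficiency,
# capped by the two eSATA links and the PCIe 1x slot. Illustrative only.
PER_DISK_MBPS = 75
DISKS = 8
EFFICIENCY = 0.7          # middle of the 2/3 - 3/4 range mentioned above
LINK_CAP_MBPS = 2 * 300   # two SATA II eSATA links
PCIE_CAP_MBPS = 500       # PCIe 1x

raw_estimate = PER_DISK_MBPS * DISKS * EFFICIENCY
capped = min(raw_estimate, LINK_CAP_MBPS, PCIE_CAP_MBPS)
print(f"Striped estimate: {raw_estimate:.0f} MB/s, after link/PCIe caps: {capped:.0f} MB/s")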

I have used Windows software RAID on and off since Win2k. I have never had a problem with it that was not a hardware issue. Of course, RAID 5 means your processor is doing the parity calculations.

I just did a quick costing, and for around the same price you could get an 8-bay case (non-hotswap) with the 8 drives you mentioned, SAS cabling and an IBM M1015 SAS controller (modelled on the LSI 9240), with cables and external connectors. It needs a bit of work and luck with what is available, and postage costs may need to be added, but it should not be too much more. Advantages: drive-to-case bandwidth up to 2.4GB/s, a PCIe 2.0 8x card giving 4GB/s of bandwidth, and the card does RAID 5 with the activation dongle (included in the price). Disadvantages: no hot swap and two fewer drive bays. Of course, if I built it then it would be a bit more as I am building as a resale company, but that is more or less the price you could probably do it for by yourself.

RB
 
8 drives in RAID5 is risky enough.
8 large 7.2k RPM drives in RAID5 is pointless. You will get corruption during the rebuild window (24 hours for 1TB, 48 hours for 2TB as a rule of thumb).

Precipitated by the risk of a 2nd HDD failure, or is there a more common corruption in the normal course of storage?

I've only ever had one HDD fail on me, and that was a beat-up external drive. They don't fail that often?
 
Risk of a 2nd HDD failure more than anything else, especially if the HDDs are from the same batch (sequential serial numbers).
RAID6 is a small price to pay for avoiding such a problem.

Agreed, RAID 6 is a better idea, but it is usually fairly limited to higher-end cards, as they need to calculate twice the parity info that RAID 5 requires.

It is all about bang for the buck and the cost of losing access to, and then recovering, the lost data. 16TB is a rather large amount of storage. What about multiple smaller arrays, ginjaninja?

In this case though, as it is for a 'client', if that is what the client wants and is paying for, and they know the risks, then so be it.
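
For the capacity side of the single-array versus smaller-arrays question, a minimal sketch (assumes 8x 2TB drives and ignores formatting overhead):

# Usable capacity for the layouts discussed above, 8 x 2TB drives.
DRIVE_TB = 2

layouts = {
    "RAID 0, 8 drives": 8 * DRIVE_TB,
    "RAID 5, 8 drives": (8 - 1) * DRIVE_TB,
    "RAID 6, 8 drives": (8 - 2) * DRIVE_TB,
    "2 x RAID 5, 4 drives each": 2 * (4 - 1) * DRIVE_TB,
}

for name, usable_tb in layouts.items():
    print(f"{name:<26} {usable_tb} TB usable")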
 
Thanks for the advice.
The RAID 5 parity is a 'near-line', peace-of-mind level of redundancy. I will have other, more drastic options if I ever lose the entire array.
The 622 supports a hot spare, so the risk window will be minimised to c. 10 hours at 200GB/hr rebuild. The risk of two HDD failures is too small, and the cost to mitigate it too large, compared to the effort of rebuilding, so I can live with that.
Smaller arrays soak up additional parity disks, hot spares and drive bays, so again, given the risk/cost balance, I think I will go with one. I won't come crying to you when it goes wrong :-) I have been warned.
 
Any recommendations on formatting/disk/array settings for a RAID array on a 622?
I don't know all the options, but I recall there being choices in the OS (sector size, MBR or GPT) and in the RAID BIOS settings (stripe size).

I'm thinking the maximum stripe size supported by the controller, as 99% (by MB stored) of the stored data will be in files > 5GB. Not sure about the cluster size of the partition in the OS.
 
I won't come crying to you when it goes wrong :-) I have been warned.

No, please do. Don't worry, I am sure there will just be some discreet tutting in the background ;).

I might have to go with two arrays: 5 of my disks are Samsung F4 203s and 3 would be Samsung F4 204s. I don't want to mix HDD models, do I?

Better to match the drives with like drives of the same make, model and firmware for the best performance and reliability of the array.

Any recommendations on formatting/disk/array settings for a RAID array on a 622?
I don't know all the options, but I recall there being choices in the OS (sector size, MBR or GPT) and in the RAID BIOS settings (stripe size).

I'm thinking the maximum stripe size supported by the controller, as 99% (by MB stored) of the stored data will be in files > 5GB. Not sure about the cluster size of the partition in the OS.

Usually you just need to set the stripe size, if anything at all. For large files a larger stripe size is fine, as you will not be losing lots of space by putting little files in big sectors. Of course, with a 64k stripe size and 10k jpg movie thumbnails you will be losing 54k per file, as 64k is the smallest block of data you can allocate. It depends how many thumbnails, coversheets and the like you also have on the drive.
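
To put numbers on the slack-space point, a minimal sketch: the file sizes are made up, and the same arithmetic applies whether you think of the 64k unit as the stripe block or the filesystem cluster.

import math

# Space wasted per file when the smallest allocatable unit is 64 KB.
ALLOC_UNIT_KB = 64
file_sizes_kb = [10, 10, 250, 5 * 1024 * 1024]  # thumbnails, a coversheet, a 5GB movie

for size in file_sizes_kb:
    used = math.ceil(size / ALLOC_UNIT_KB) * ALLOC_UNIT_KB
    print(f"{size:>9} KB file uses {used:>9} KB on disk ({used - size} KB slack)")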

An oldish but still relevant article from Tom's Hardware here (link to the stripe size section). Of course, the conclusion is to experiment :D.

RB
 
I sense an 'I told you so' moment.
Turns out the Netstor literature about it being a quiet device is rubbish.
The device is a little too noisy for a living room. There are no other products that come close to its price point other than the StarTechs, as far as I can see.
The price from the sole UK distributor has jumped by £48.
Looks like I will have to build my own.
The 622 seems good enough: I can copy at 75MB/s while the array is initialising and watch a film without stutter. Hoping the performance will increase once the array has initialised... it says it will take 24hrs to initialise the 16TB array!
 

I did wonder with all those tiny fans :).

Maybe take a look at some cases like the "CF-1091 Black Steel" (9x 5 1/4" bays) and Google for SATA bridge boards to find internal SATA to eSATA converters.

RB
 
Maybe take a look at some cases like the "CF-1091 Black Steel" (9x 5 1/4" bays)

I see this one has an 80mm fan too.

Apparently the fans on the Netstor are very accessible, with a standard 2-pin connector. Are there adapters that could run them at a lower RPM? Or perhaps it's just cheaper/easier to replace them with quieter fans... any suggestions?
 
Personally I would get the Dremel out, cut a couple of holes and put 120mm fans in ;).

Failing that, you can look out for quiet 80mm fans (I have a couple in my server) or get a fan speed controller from overclocking places (possibly OCUK, but I have not checked) and then find somewhere to mount it.

With a little work, the Dremel would probably get the best results. I re-cut my server fan wall to use quiet 120mm fans rather than the 80mm fans that came with it, and it is so much quieter and was not too difficult to do.

RB
 
Netstor chassis - 8x Samsung F4 HD204UI - RocketRAID 622 (64KB block size, 4KB sector on the 622, 4KB cluster / GPT in the OS): read and write are between 140 and 160 MB/s according to HD Tune across the entire 14TB RAID 5, with a 16ms access time. So not as fast a read as I was expecting, but faster than a single disk. Since the read and write values are so close, it is perhaps limited by the controller or the chassis port multipliers.

The heat is non-existent (disks are at 82°F) when the disks are running at full tilt, so I will try disconnecting some of the hot-swap fans before replacing them for quietness.
 
That was a bust: the noise is coming from the PSU and the air draw it produces. I removed all the hot-swap case fans and there was no change to the noise, so the case fans are quiet, it would seem. The PSU is not. It looks like a 25mm fan on the PSU; I need to find a quiet 300W IPC form-factor PSU!
The room is too cramped for larger case fans, but this doesn't seem to be the issue.
 
Couldn't find a quiet PSU for the Netstor, so I decided to change tack for my build.
All prices inc. VAT:
SFF-8087 to 4x SATA cable QTY 2 = £24.96
RocketRAID RR2720SGL 8-port SAS/SATA III RAID controller QTY 1 = £125.98
Be Quiet! Dark Power Pro P9 750W (BN174) QTY 1 = £137.57
Fractal Design Define XL full tower case QTY 1 = £104.99
Total = £393.50


The controller only has 8 ports (but controllers start getting a lot more expensive beyond 8 ports).
The case doesn't support hot swap or LED indicators (not really required as long as I label up).
I am pretty sure the case will be silent.

Phase 1 - run the HTPC and the storage case at the same time, feeding 1m SAS/SATA cables from the controller to the storage case.

Phase 2 - use the new storage case/PSU for a new build I was planning anyway, saving £250.

All I need now is a good-value, quiet 5.25" bay to 3.5" HDD converter to support a bit more HDD storage in the Fractal Design when required, given that a future motherboard will support a 4-6 disk RAID 5 array as well and the Fractal Design only has 10 bays.

Any thoughts please, will these parts work together?
Any other good-quality cases with 10+ HDD bays?
 