My server/NAS build

Permabanned · Joined 15 Nov 2011 · Posts 1,156
Here it is...

WD20EARX x6
Asus P8H77-I
Intel i5 2400
Crucial Ballistix Sport 8GB 1600MHz
USB internal header to USB female
4GB USB
Fractal Design Array R2
500W semi modular PSU
Intel PCIE 1x quad network card

Yes, 6x 2TB drives. RAID 5, NAS4Free, 2 gigabit links for up and 2 for down.
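For what it's worth, the usable space works out like this (a quick sketch; RAID 5 spends one drive's worth of capacity on parity):

```python
# Rough usable-capacity figure for the 6 x 2 TB RAID 5 array described above.
# Drive count and size come from the build list; the overhead formula is
# standard RAID 5 (one drive's worth of distributed parity).

def raid5_usable_tb(drives: int, size_tb: float) -> float:
    """RAID 5 keeps (n - 1) drives' worth of data; one drive's capacity
    is consumed by parity."""
    if drives < 3:
        raise ValueError("RAID 5 needs at least 3 drives")
    return (drives - 1) * size_tb

print(raid5_usable_tb(6, 2.0))   # -> 10.0 (TB usable from 12 TB raw)
```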

Please note this is using up old parts. At least there will be plenty of power, as I plan to play with the server for learning, VMs, etc.

My only worry is that the Intel quad NIC is not supported in NAS4Free. Using it would free up the PCIe 16x slot for a RAID card or SSD expansion.

I'm just gathering the parts from a clear-out and will post some pics as I build.

Any advice?
 
2 links for up, 2 for down? How are you achieving that?

If I use the Intel quad NIC, I have a Dell PowerConnect 2724 gigabit L3 managed switch. If the NAS software allows it, I can use the quad NIC and set the switch for the ports and IPs I want.

So I'm told; this is all new to me.
 
The best you can hope for is an LACP trunk. While this will give you an aggregate 4Gbit/s duplex connection, you won't actually get that from a single client/server pair - you'll be limited to 1Gbit/s.

The real advantage comes when you have a lot of machines accessing that server: the more clients there are, the better the load balancing across the links.

In any case, I doubt you'll actually need that bandwidth anyway.
 
As DRZ says, link aggregation does not combine all 4 links into a single 4GbE link; it allows all 4 connections to sit behind a single IP address, and the switch/server will use one link at any one time to communicate between the NAS and another machine.

You cannot allocate one link to be inbound only and another to be outbound only.
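Since NAS4Free is FreeBSD-based, an LACP trunk there is a lagg(4) interface. A minimal sketch of the rc.conf entries, assuming hypothetical em0-em3 interface names and an example address (the switch ports must also be put into an LACP group for this to come up):

```
# /etc/rc.conf - hypothetical 4-port LACP trunk; interface names and the
# address are illustrative examples, not taken from the build above
ifconfig_em0="up"
ifconfig_em1="up"
ifconfig_em2="up"
ifconfig_em3="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 laggport em2 laggport em3 192.168.1.10/24"
```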

What is the model of the Intel NIC? To my knowledge they do not do any PCIe x1 quad-port cards; they are usually PCIe x4 or x8. My Intel quad ET is PCIe x4, for example. The Intel CT is PCIe x1, but it has only one port.

If you have not already got the i5, then take a look at the E3-1225 v2 or E3-1245 v2 (there is no 1235 v2), as these are Xeons but around the price of the i5. The H77 board should support them, but confirm by checking the supported CPU list.

I personally do not like WD Greens in a storage environment, and especially not in a RAID 5 setup. Go for WD Reds or Seagate Barracudas if you can stretch to it.

The rest looks reasonable.

What virtualisation software are you looking at using?

RB
 
Software, no idea; open to advice here.

The WDs I have already. I've seen they can be flashed or modded to behave better.

Again, the i5 2400 was left over so I might as well use it, as was the Asus H77.

Sorry, NIC-wise I stand corrected. I think it's x4.

my-0yt674-12402-8cg-00wl is the only code on there.

Look exactly like this - http://i.ebayimg.com/t/Dell-Intel-P...Q=/$T2eC16JHJG8E9nyfmIDKBQWzFqPeRQ~~60_57.JPG

OK, so the NIC is an Intel VT Quad, probably the GbE version, which is generally OEM only.

Good luck with the Greens. I am sure some people are using them without issue, but I had 5 fail on me in a RAID 5 within 3 months. They were an early model, though.

Also have a Google and read up on the 'RAID 5 write hole'.
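For anyone following along, a toy sketch of why the write hole matters: RAID 5 parity is the XOR of the data blocks in a stripe, so a crash between the data write and the parity write leaves the stripe inconsistent, and a later rebuild reconstructs wrong data. The block values here are made up purely for illustration:

```python
# Toy illustration of the RAID 5 'write hole'. Parity is the XOR of the
# data blocks; if power is lost after a data block is updated but before
# the parity block is rewritten, the stripe no longer checks out.

def parity(blocks):
    p = 0
    for b in blocks:
        p ^= b
    return p

stripe = [0b1010, 0b0110, 0b1100]   # data blocks on three drives
p = parity(stripe)                  # parity block on the fourth drive

# Interrupted write: data block 0 is updated, the parity update never lands.
stripe[0] = 0b0001

# A later rebuild of drive 1 from the stale parity gives the wrong data:
rebuilt = p ^ stripe[0] ^ stripe[2]
print(rebuilt == 0b0110)   # False - the reconstructed block is corrupt
```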

RB
 
NAS4Free has comprehensive NIC support; I've currently got a quad Sun NIC and a single-port Broadcom Sun NIC working. For Wintel OSes I've been unable to find drivers at all for either. Neither works with ESX 3.5; I haven't tried ESX 5.0 yet.

A trick to get around the LACP limitation is to connect multiple iSCSI drives from NAS4Free through multiple iSCSI paths and then stripe the disks at the OS level.
 
OK, so the NIC is an Intel VT Quad, probably the GbE version, which is generally OEM only.

Good luck with the Greens. I am sure some people are using them without issue, but I had 5 fail on me in a RAID 5 within 3 months. They were an early model, though.

Also have a Google and read up on the 'RAID 5 write hole'.

RB

Sounds like the NIC. It was £30, so a bargain compared to what I see them listed for on eBay.

I'll have a Google now.
 
OK, so the NIC is an Intel VT Quad, probably the GbE version, which is generally OEM only.

Good luck with the Greens. I am sure some people are using them without issue, but I had 5 fail on me in a RAID 5 within 3 months. They were an early model, though.

Also have a Google and read up on the 'RAID 5 write hole'.

RB

The thing with the WD Greens is their spin-down timer, which means in a NAS/server they park and wake way too often (something like every 8 seconds by default), and this cycling can kill them off pretty quickly. It is possible to reset this timeout via a DOS tool. I have several 2TB WD Greens which have been happily running in NASes for quite some time with this fix done on them (although using RAID 1, not 5, and I'm not stupid enough to not have a separate backup as well).
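Some back-of-envelope numbers on why that 8-second timer hurts (the 300,000 load/unload cycle rating used here is a typical spec figure, assumed for illustration, not taken from this thread):

```python
# How fast head load/unload cycles accumulate with an 8-second idle timer,
# versus an assumed 300,000-cycle drive rating.

IDLE_TIMER_S = 8
RATED_CYCLES = 300_000   # typical load/unload rating (assumption)

# Worst case: a light access pattern that parks and wakes the heads once
# per idle-timer interval, around the clock.
cycles_per_day = 24 * 60 * 60 // IDLE_TIMER_S
days_to_rating = RATED_CYCLES / cycles_per_day

print(cycles_per_day)          # -> 10800 cycles per day
print(round(days_to_rating))   # -> 28 days to reach the rating
```

Real workloads won't hit the worst case, but it shows why raising the timer (or disabling spin-down) matters.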

That being said, if I were to buy them again now I'd get WD Reds (which didn't exist at the time) ...

What I am a bit confused about is that the OP says he wants to look at virtualisation but then goes on about NAS software. Surely he'll need to run ESX (or equivalent) on the hardware, so compatibility with things like NICs matters against that, rather than against the NAS software, which he would be running in a VM under it ...
 
I'm new to it all, so any advice will help, and this is why everyone might be a little confused.

I have 6x 2TB drives. Look at the price of replacing them and you'll see why I'm not going to...

I might well carry out the spin-down mod.
 
The thing with the WD Greens is their spin-down timer, which means in a NAS/server they park and wake way too often (something like every 8 seconds by default), and this cycling can kill them off pretty quickly. It is possible to reset this timeout via a DOS tool. I have several 2TB WD Greens which have been happily running in NASes for quite some time with this fix done on them (although using RAID 1, not 5, and I'm not stupid enough to not have a separate backup as well).

Yep, I am aware, although it was not publicly well known at the time I got my drives. Mine all failed with bad sectors, and I have had two others fail which were never in an array.

That being said, if I were to buy them again now I'd get WD Reds (which didn't exist at the time) ...

Yep, although from what I understand, the Reds are Blues (stable drives in their own right) with the TLER settings tweaked.

I have 8 Seagate Barracudas in two RAID 5 arrays, but they are hanging off an HP P812 with 1GB FBWC and they are not pushed hard at all, as it is for home and VM lab storage.

RB
 
A trick to get around the LACP limitation is to connect multiple iSCSI drives from NAS4Free through multiple iSCSI paths and then stripe the disks at the OS level.

Interesting idea, although you introduce more risk by combining multiple drives on the consuming system. You also need to make sure the consumer has the bandwidth to make use of the provider's LACP setup.

Having 4x 1GbE LACP on a server and a desktop with 1x GbE is going to be limited to 1GbE. Having a desktop with a 4x 1GbE setup and LACP would still only give 1GbE, as the workstation is not aggregating the links into one fat pipe.

The only way I can see of making this work reasonably well is by putting each port on the server and workstation on a separate subnet (physical or VLAN), so there is only one route to each iSCSI target via a single port on the desktop. That adds quite a bit more complexity, although you could just plug one port from the server directly into one port on the desktop, negating the need for a switch; but then that takes away the shared storage concept and you may as well connect directly with InfiniBand or FC.

RB
 