How to transfer ~400MB/sec between 2 computers?

Associate · Joined 8 Mar 2004 · Posts 409 · Location London, UK
Hi there,

What are my options if I want to connect two computers with a pipe capable of about 400MBytes/sec? Are my only two options 10-gigabit Ethernet and 4Gb Fibre Channel? Is it possible to get PCI-Express 10-gigabit Ethernet NICs in the UK? If so, how much are they?! (I can only find PCI-X NICs which start at about £600).

(The reason I'm asking is that I'm researching the possibility of building a system for editing uncompressed high-definition video, and I'd like the storage to be in a dedicated Linux box connected to my WinXP workstation. The dedicated Linux box would run a software RAID-6 array and also perform regular backups.)

Thanks,
Jack
 
Hi,

Thanks all for your very swift replies.

sniper007 said:
Are you rich?

No! Which is kind of why I'm considering this "Linux RAID box" plan. Here's my thinking...

I require a RAID array capable of 400MBytes/sec, and I'd like to use RAID-6 because RAID-5 isn't resilient enough and RAID-10 uses too many disks. To get enough speed, I probably need about 12 disks.
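A quick sanity check on that disk count (the ~40MB/sec sustained per-disk figure is my assumption for a typical SATA drive of this era; your drives may differ):

```python
TARGET_MB_S = 400
PER_DISK_MB_S = 40          # assumption: sustained sequential read per SATA disk
RAID6_PARITY_DISKS = 2      # RAID-6 dedicates two disks' worth of space to parity

def disks_needed(target, per_disk, parity=RAID6_PARITY_DISKS):
    """Data disks must cover the target rate; parity disks come on top."""
    data_disks = -(-target // per_disk)   # ceiling division
    return data_disks + parity

print(disks_needed(TARGET_MB_S, PER_DISK_MB_S))  # → 12
```

So 10 data disks to hit the rate, plus 2 for RAID-6 parity, gives the 12 mentioned above.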

Why build a dedicated Linux box to run my RAID? Several reasons:

1) I've heard that if you dedicate a computer to the task of running a RAID array then you can easily get away with "software RAID", i.e. I don't need to spend hundreds of pounds on an expensive dedicated RAID controller. In fact, I've heard it said that a Linux software RAID solution can actually be faster than a dedicated RAID controller, because modern CPUs are considerably faster than the processors found on most dedicated RAID controllers.

2) I've got enough spare parts kicking around to mean that I can build a decent Linux box without having to spend very much money.

3) 12 disks are quite a lot to house. I could stick a RAID card in my WinXP workstation and have those 12 disks in an external disk enclosure but that starts to get expensive. I'd prefer to have a large (maybe 19" rack-mount) computer case with a mobo running Linux and a bunch of disks.

4) With a dedicated Linux box, I can also use that box for other tasks like tape backup and I can also teach Linux to do clever things like e-mail me whenever a disk kicks out a SMART error.
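For what it's worth, here's a rough sketch of how points 1 and 4 might look on the Linux box (device names, the 12-disk layout, and the e-mail address are all placeholders; check the mdadm and smartd.conf man pages before running anything):

```shell
# Sketch only: /dev/sd[b-m] and the options below are assumptions.
# Create a 12-disk software RAID-6 array with mdadm:
mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]

# Have smartd e-mail on SMART trouble: in /etc/smartd.conf, a line like
#   DEVICESCAN -a -m jack@example.com
# monitors all disks and mails the given address (a placeholder here).
```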

Thanks,
Jack
 
growse said:
The final option is to buy lots of Gbit ethernet cards and bond them all together. You might be able to get 2 4-port Intel Gbit cards, run 4 bits of cable between the boxes and bond them together giving you 4Gbit, but having never played with bonding I don't know if it works like that.

That sounds like an interesting idea - thanks. Are there any open source / free / cheap bonding drivers for WinXP (or, even better, can it cope with bonding out-of-the-box)?

Also, I had a look at ATA-over-Ethernet but I couldn't find any WinXP software for doing ATA-over-Ethernet. Do any WinXP drivers exist?

Oh, I forgot to mention another big reason for looking at SAN: one day I might have more than one workstation. For example, I might have three computers running in my office: 1 linux RAID box, 1 WinXP workstation and 1 MacOS workstation. I'd like to let both workstations see the files on the Linux RAID box. Hence the thinking that ATA-over-Ethernet may be the way to go for me.

But maybe things are getting too complicated and I should just go for a RAID controller card for my WinXP workstation and forget all this ATA-over-Ethernet stuff!!

Many thanks,
Jack

Edit: here's a Wikipedia page on bonding: http://en.wikipedia.org/wiki/Link_aggregation
 
Hi guys,

Thanks loads for your replies. I must admit that I'm getting confused!

growse said:
ATA-Over-Ethernet is actually SCSI over ethernet and is called iSCSI. This allows you to do block-level disk access over a network.

As I understand it (and I certainly might be wrong), there are two SAN technologies: SCSI-over-Ethernet (iSCSI) and ATA-over-Ethernet (AoE).
http://en.wikipedia.org/wiki/ATA_over_Ethernet

I presume it would be inefficient to use an iSCSI connection if the disks are SATA disks, because you'd want to use the same protocol end-to-end; hence AoE would be better than iSCSI if you're using ATA disks.

growse said:
The (iSCSI) client software is free (from microsoft)

Ah, cool - that's very interesting, thanks. Does MS do a free AoE client too?

growse said:
However, if you don't need a SAN specifically, a NAS might be better as iSCSI has a few more overheads compared to just plain NAS (windows file sharing + samba).

Again, I'm probably wrong, but I thought the whole point of iSCSI or AoE was that it's more efficient (and hence faster) than a NAS? A NAS requires several layers including the IP stack and the OS filesystem at each end, etc. AoE does away with IP entirely and, at the target end, the OS doesn't have to do much thinking at all.

growse said:
To me, it sounds like a SAN won't work, unless you go fibre-channel, and won't work if you want more than one person to access the data. I'd investigate the ethernet bonding route and see if you can get a 4GBit link using a pair of 4-port Gigabit cards.

Why would a fibre-channel SAN work whereas an Ethernet SAN won't? Is it because FC can do up to 4Gbps on a single link?

Tui said:
Bonding won't do what is wanted. Multiple channels are used to load share based on source, destination or both (MAC or IP address) and this determines which channel is used. Traffic between the two same points will always go down the same channel so the maximum throughput will be the speed of that channel.

Oh, bother. But now I really am confused... what you've just said sounds different to what the Wikipedia entry on Link Aggregation says:

"Link aggregation, or IEEE 802.3ad, is a computer networking term which describes using multiple Ethernet network cables/ports in parallel to increase the link speed beyond the limits of any one single cable or port, and to increase the redundancy for higher availability... Network interface cards (NICs) can also sometimes be trunked together to form network links beyond the speed of any one single NIC. For example, this allows a central file server to establish a 2-gigabit connection using two 1-gigabit NICs trunked together."

So, to take the Wikipedia example of a server with 2 x 1Gbps NICs... will the server only hit 2Gbps if there are at least 2 clients pulling data off the server?
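If I've understood Tui right, it can be sketched like this: a common 802.3ad layer-2 transmit hash XORs the source and destination MAC addresses and takes the result modulo the number of links, so one pair of hosts always lands on the same physical link. (The hash below is just the common layer-2 policy; real bonding drivers may use other fields.)

```python
def pick_link(src_mac: str, dst_mac: str, n_links: int) -> int:
    """Common layer-2 transmit hash: XOR of the last MAC bytes, mod link count."""
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % n_links

# One workstation talking to one server: every frame hashes identically...
print(pick_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4))  # → 2
print(pick_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4))  # → 2, same link
# ...so a single point-to-point transfer never exceeds one link's speed.
# A second client with a different MAC may land on a different link:
print(pick_link("00:11:22:33:44:56", "66:77:88:99:aa:bb", 4))  # → 1
```

Which would mean yes: the 2Gbps aggregate in the Wikipedia example only shows up with at least two clients hashing to different links.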

Thanks loads for all your help,
Sorry for questioning the replies - I'm just trying to get a complete understanding of this technology.
Jack
 
Hi!

Thanks loads for the quick reply - that's cleared up a lot of my questions, thank you.

Cool - I will look deeper into building a Linux NAS box rather than trying to do some sort of Ethernet SAN.

What sort of speeds do you get on your Gigabit NAS?
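Back-of-envelope, the ceiling for a single gigabit link looks like this (the ~5% protocol overhead is an assumption; real NAS throughput with SMB on top is usually lower still):

```python
# Theoretical ceiling of one gigabit Ethernet link.
LINE_RATE_BITS = 1_000_000_000
raw_mb_s = LINE_RATE_BITS / 8 / 1_000_000       # 125 MB/s on the wire
# Ethernet + IP + TCP headers eat a few percent (~5% assumed here):
usable_mb_s = raw_mb_s * 0.95
print(f"{usable_mb_s:.0f} MB/s")                # → 119 MB/s, well short of 400
```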

Thanks,
Jack
 
OK, thanks. But isn't load balancing something separate from channel bonding? I'm pretty sure I've heard of people bonding multiple Ethernet connections to gain more speed, even for a point-to-point connection like the one I need.
 
Hi Matja,

Thanks loads for the reply. What OS were you using? I guess Linux? Were you connecting two computers together with your 4xgigabit pipe? i.e. were you getting 300MB/sec between two machines? Or were you connecting a server to a bunch of clients?

Thanks,
Jack
 