Storage question

I don't see the point of presenting your various NFS volumes on separate IP addresses and different interfaces.

I'm guessing 10.20.2.x is one NetApp controller (I'll call it 01) and 10.20.3.x is the other (I'll call it 01b). You said these are in an HA pair, which is great.

I'd present all volumes on a controller on a single IP address, and I'd put both controllers on the same subnet for simplicity.

I'd create two four-port trunks on your switch stack. Trunk 1 would be two ports on stack member 1 and two ports on member 2. Trunk 2 would be the same, on different ports obviously.

NetApp01 goes in trunk 1, 01b in trunk 2.

This config gives you maximum bandwidth, is easily managed, and makes your ESXi config easier because you've only one trunk to your storage to set up. Adding another datastore for VMware is just a matter of creating the LUN in OnCommand, exporting it and adding it in vSphere. You retain both switch redundancy and controller redundancy.
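
Roughly, something like this on the switch and filer side - just a sketch, where the port numbers, interface names and addresses are made up, and I'm assuming a Catalyst-style stack and 7-Mode vifs:

! two static cross-stack EtherChannels, each with two ports on each stack member
interface range GigabitEthernet1/0/1 - 2 , GigabitEthernet2/0/1 - 2
 channel-group 1 mode on
interface range GigabitEthernet1/0/3 - 4 , GigabitEthernet2/0/3 - 4
 channel-group 2 mode on

# netapp01 - one multi-mode (static) vif with a single IP for all of its volumes
vif create multi vif0 -b ip e0a e0b e0c e0d
ifconfig vif0 10.20.2.10 netmask 255.255.255.0
# netapp01b gets the same on its own ports, with a second IP on the same subnet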

Your setup is more complicated than it needs to be because, unless I'm missing something, you've got no more resilience, less bandwidth, and more admin overhead than the much simpler config I've suggested.
 
I don't see the point of presenting your various NFS volumes on separate IP addresses and different interfaces.

I'm guessing 10.20.2.x is one NetApp controller (I'll call it 01) and 10.20.3.x is the other (I'll call it 01b). You said these are in an HA pair, which is great.

I'd present all volumes on a controller on a single IP address, and I'd put both controllers on the same subnet for simplicity.

Simplicity, maybe. Less bandwidth in this environment, though, because of the way EtherChannel load-balances with so few sources and destinations.

I'd create two four-port trunks on your switch stack. Trunk 1 would be two ports on stack member 1 and two ports on member 2. Trunk 2 would be the same, on different ports obviously.

NetApp01 goes in trunk 1, 01b in trunk 2.

This config gives you maximum bandwidth, is easily managed, and makes your ESXi config easier because you've only one trunk to your storage to set up. Adding another datastore for VMware is just a matter of creating the LUN in OnCommand, exporting it and adding it in vSphere. You retain both switch redundancy and controller redundancy.

And it deprives him of 50% of the CPU and cache of the system he has purchased, while giving him less overall bandwidth in his particular environment. He's not using LUNs either - there is no iSCSI here, so MPIO is out - he's using NFS. Using iSCSI would probably be a reasonably good idea in some regards with a setup like this, but NFS is quite nice in its own way.

Your setup is more complicated than it needs to be because, unless I'm missing something, you've got no more resilience, less bandwidth, and more admin overhead than the much simpler config I've suggested.

You've missed quite a bit IMO :)
 
I'm not familiar with Cisco EtherChannel, but I don't see anything that needs load balancing. There's one source and one destination. Unless EtherChannel does something magical, I don't see how adding some extra IP addresses will add bandwidth. It might increase throughput, but I'd guess it's marginal.

I don't understand how my config deprives him of 50% of the CPU and cache. He's got his disks split between the two controllers, so both are live but will fail over if one dies.

I meant volume on the NetApp when I said LUN - we use our NetApp for both, and we've slipped into calling volumes LUNs at work.
 
So you would create one vif per controller, combining four interfaces into one? It seemed to me that the disks were faster than the NFS protocol. When I ran benchmarks using iSCSI MPIO I would get double the disk performance, but when running NFS it seemed capped. So to me it made sense to use more than one NFS connection per controller, if nothing else to improve latency and throughput for that traffic. I can't really foresee any more volumes needing to be created as it's a small site, so 4 volumes and 4 interfaces seems better than 4 volumes and 2 interfaces. ESX does not support LACP, so if you combine four interfaces into one you are only going to get 1 gigabit, as ESX is limited to 1 gigabit per connection.
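
To spell out what I was aiming for - a rough sketch only, and the addresses, volume names and datastore labels below are made up - each volume is exported on its own filer IP and mounted from that IP on the host, so the traffic for the different datastores at least has a chance of landing on different physical links:

# on the ESX host - one NFS datastore per filer IP
esxcfg-nas -a -o 10.20.2.11 -s /vol/vm_vol1 vm_ds1
esxcfg-nas -a -o 10.20.2.12 -s /vol/vm_vol2 vm_ds2
esxcfg-nas -a -o 10.20.3.11 -s /vol/vm_vol3 vm_ds3
esxcfg-nas -a -o 10.20.3.12 -s /vol/vm_vol4 vm_ds4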
 
Sorry, what I said was wrong for your configuration.

Using multiple IP addresses will give you a performance improvement because NFS won't be able to use all of the capacity of the trunk to a single host using a single IP.
 
Sorry, what I said was wrong for your configuration.

Using multiple IP addresses will give you a performance improvement because NFS won't be able to use all of the capacity of the trunk to a single host using a single IP.

Between this post and your last post I think you got there.

It isn't specifically the use of multiple IP addresses (although that helps quite a lot); having two port channels means you always have at least 2x the slowest link, rather than just one link of a 4-link bundle.

You can play around with the hashing algorithms on a Cisco switch (but not so much in VMware), which changes this slightly, but with so few sources and destinations you're on the back foot from the get-go.
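
For reference, the hash method is set globally on most Catalysts with something like the below (just a sketch, and the exact options vary by platform). The point is that one source IP talking to one destination IP always hashes onto the same member link, so that flow only ever gets a single 1Gb link no matter how many links are in the bundle:

! choose what the per-flow hash is calculated from
port-channel load-balance src-dst-ip
! check what the switch is currently using
show etherchannel load-balance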

When you get up to 10GbE and start using protocols like FCoE you are then bound by best practices and protocol limitations to using just two links in any case.

As for the 50% comment, I definitely read that as you having all the volumes on one controller and having the other just for failover. Reading it back, you could have just meant all the volumes on that particular controller - which I would agree with if it wasn't for the caveats I have already spoken about above :)
 