NetApp NFS benchmark

OK, so I've read the best practice guide and spoken to some VMware guys.

One guy suggested that I should set up the switches so that it's one subnet per switch, and remove the uplink cable between the switches.

So it would be like this on the NetApp side (rough config sketch below the layout):

Controller 1:
2x 1 GbE NICs -> 1 VIF, VLAN 202, subnet 1 -> into switch 1 -> Exchange vol 1

2x 1 GbE NICs -> 1 VIF, VLAN 202, subnet 2 -> into switch 2 -> OS store

Controller 2:
2x 1 GbE NICs -> 1 VIF, VLAN 202, subnet 1 -> into switch 1 -> Exchange vol 2

2x 1 GbE NICs -> 1 VIF, VLAN 202, subnet 2 -> into switch 2 -> DMS file store and misc volumes
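In 7-Mode CLI terms that's roughly the following per controller. This is only a sketch: the interface names and addresses are made up, the use of LACP VIFs is my assumption, and each VIF needs a matching port-channel configured on its switch.

  vif create lacp vif_sub1 -b ip e0a e0b
  vif create lacp vif_sub2 -b ip e0c e0d
  vlan create vif_sub1 202
  vlan create vif_sub2 202
  ifconfig vif_sub1-202 192.168.1.10 netmask 255.255.255.0 partner vif_sub1-202
  ifconfig vif_sub2-202 192.168.2.10 netmask 255.255.255.0 partner vif_sub2-202

The partner option is what lets each address fail over to the same-named VIF on the other controller during takeover.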

Then in ESX (rough commands below):

vmkernel (vSwitch 1), NFS, subnet 2: 3 or 4x physical NICs -> switch 2 only

vmkernel (vSwitch 2), NFS-exchangeonly, subnet 1: 2x physical NICs -> switch 1 only
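On the ESX side, something like this per host. Again only a sketch: the vmnic numbers, port-group names and addresses are placeholders, and it assumes VLAN 202 is tagged down to the host.

  # general NFS vSwitch, subnet 2, uplinks cabled to switch 2
  esxcfg-vswitch -a vSwitch1
  esxcfg-vswitch -L vmnic2 vSwitch1
  esxcfg-vswitch -L vmnic3 vSwitch1
  esxcfg-vswitch -A NFS vSwitch1
  esxcfg-vswitch -v 202 -p NFS vSwitch1
  esxcfg-vmknic -a -i 192.168.2.21 -n 255.255.255.0 NFS

  # Exchange-only NFS vSwitch, subnet 1, uplinks cabled to switch 1
  esxcfg-vswitch -a vSwitch2
  esxcfg-vswitch -L vmnic4 vSwitch2
  esxcfg-vswitch -L vmnic5 vSwitch2
  esxcfg-vswitch -A NFS-exchangeonly vSwitch2
  esxcfg-vswitch -v 202 -p NFS-exchangeonly vSwitch2
  esxcfg-vmknic -a -i 192.168.1.21 -n 255.255.255.0 NFS-exchangeonly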


The downside is that we would lose switch redundancy; the positive is that we are not sending storage traffic through the crappy uplink.



At the moment I have both subnets going over both switches, like this. I still might set up the second subnet on another VLAN, but I'm not sure whether that is correct or not.

Controller 1:
2x 1 GbE NICs -> 1 VIF, VLAN 202, subnet 1 -> into switches 1 & 2 -> Exchange vol 1

2x 1 GbE NICs -> 1 VIF, VLAN 202, subnet 2 -> into switches 1 & 2 -> OS store

Controller 2:
2x 1 GbE NICs -> 1 VIF, VLAN 202, subnet 1 -> into switches 1 & 2 -> Exchange vol 2

2x 1 GbE NICs -> 1 VIF, VLAN 202, subnet 2 -> into switches 1 & 2 -> DMS file store and misc volumes

Then in ESX (NFS mount sketch below):

vmkernel (vSwitch 1), NFS, subnet 2: 3 or 4x physical NICs -> switches 1 & 2

vmkernel (vSwitch 2), NFS-exchangeonly, subnet 1: 2x physical NICs -> switches 1 & 2
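Mounting the datastores is then just a case of pointing each one at the controller address on the right subnet, something like this (the addresses, volume paths and labels are made up for illustration):

  esxcfg-nas -a -o 192.168.1.10 -s /vol/exchange_vol1 exchange_vol1    # controller 1, subnet 1
  esxcfg-nas -a -o 192.168.2.10 -s /vol/os_store os_store              # controller 1, subnet 2
  esxcfg-nas -a -o 192.168.1.11 -s /vol/exchange_vol2 exchange_vol2    # controller 2, subnet 1
  esxcfg-nas -a -o 192.168.2.11 -s /vol/dms_filestore dms_filestore    # controller 2, subnet 2

The vmkernel that sits on the same subnet as the target address is the one that carries the traffic, which is what keeps the two traffic classes on their own NICs.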


Question:
Is there any way we could stack the two switches together? I know that stacking modules cost £500 per switch, but is there a minimum of three switches required for stacking? Is it even possible to stack just two switches?

Any comments on this set up?
 
Stacking the switches is almost a no-brainer, because it gets you the switch redundancy you want to achieve.

By the way, LACP and plain old EtherChannel are precisely no different in terms of bandwidth. LACP isn't some magical load-balancing layer on top; it just stops some misconfiguration bloopers from costing you your job. The load balancing is still per source/destination pair, not per packet or any other metric.
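To illustrate, on a Cisco IOS switch the two flavours are only one keyword apart (the port and channel-group numbers are made up; check your own platform's syntax):

  ! static EtherChannel - no negotiation protocol at all
  interface range GigabitEthernet1/0/1 - 2
   channel-group 1 mode on

  ! LACP - same bandwidth, but the bundle only comes up if both ends agree,
  ! which is what saves you from the cabling/config bloopers
  interface range GigabitEthernet1/0/3 - 4
   channel-group 2 mode active

  ! either way the hash is per flow, e.g. per source/destination IP pair
  port-channel load-balance src-dst-ip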
 
Update on this project.

I got the stack modules and stacked the switches. They are now fully redundant: I can do a NetApp takeover and turn off one switch, and all four datastores remain operational.
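For anyone following along, the takeover part of that test is just the standard 7-Mode commands from the controller console (nothing exotic):

  cf status     # check the HA pair is healthy before you start
  cf takeover   # this controller takes over its partner's identity and volumes
  cf giveback   # hand everything back once the switch is powered up again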

I set it up so that each ESX host has 2x vSwitch. Each vSwitch has 2x physical NICs. Each vSwitch accesses one datastore per controller. Each controller has two datastores (at the moment). All the datastores are on the same VLAN except two of them (one per controller), which I put on a separate VLAN; this is to allow for the NetApp NIC partnering. Then I set up vMotion on its own vSwitch with two physical NICs on no VLAN, going into each switch. I had to create a separate vmkernel for each NIC and set one NIC active and the other standby.
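The active/standby part looks roughly like this on ESXi 5.x (the vmnic numbers, port-group names and addresses are placeholders; on older 4.x hosts the failover order has to be set in the vSphere Client instead):

  # dedicated vMotion vSwitch with both uplinks
  esxcfg-vswitch -a vSwitch3
  esxcfg-vswitch -L vmnic6 vSwitch3
  esxcfg-vswitch -L vmnic7 vSwitch3

  # one port group and one vmkernel per NIC
  esxcfg-vswitch -A vMotion-1 vSwitch3
  esxcfg-vswitch -A vMotion-2 vSwitch3
  esxcfg-vmknic -a -i 10.0.3.21 -n 255.255.255.0 vMotion-1
  esxcfg-vmknic -a -i 10.0.3.22 -n 255.255.255.0 vMotion-2

  # opposite active/standby order on each port group
  esxcli network vswitch standard portgroup policy failover set -p vMotion-1 -a vmnic6 -s vmnic7
  esxcli network vswitch standard portgroup policy failover set -p vMotion-2 -a vmnic7 -s vmnic6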

I put the Exchange datastores on their own subnet, and the VMware guest OS store and DMS file store on the other subnet.


Would there be any benefit to adding additional NICs to the vSwitch that handles the OS store and DMS datastore traffic?

Any other feedback?
 
You're unlikely to see any benefit from adding more NICs to the vSwitches unless you are seriously, seriously paranoid.

How many NICs per controller? Presumably, as it's a 2240, you just have the four base 1GbE interfaces per controller? One ifgrp of all four NICs into the switch stack with two VLAN sub-interfaces, or two ifgrps, one per NFS VLAN?
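For what it's worth, the single-ifgrp-with-two-VLANs option would look roughly like this per controller in 7-Mode (the interface names, VLAN IDs and addresses are placeholders, and the four ports need one matching port-channel on the stack):

  ifgrp create lacp ifgrp0 -b ip e0a e0b e0c e0d
  vlan create ifgrp0 202 203
  ifconfig ifgrp0-202 192.168.2.10 netmask 255.255.255.0 partner ifgrp0-202
  ifconfig ifgrp0-203 192.168.1.10 netmask 255.255.255.0 partner ifgrp0-203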

From the NetApp side, if you have two aggregates then what you have described makes some sense. If you only have one aggregate then your multiple VLANs are pretty much just theatrical and won't be gaining you anything at all as far as I can see.
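If you're not sure which layout you have, it only takes a minute to check from the controller console (standard 7-Mode commands; the volume name is just an example):

  aggr status -v        # lists each aggregate and the volumes it contains
  vol status os_store   # shows the containing aggregate for a given volume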

With those two questions set aside, you've probably got things about as good as you're going to get without spending more cash.
 