Storage question

What would you do when you need to present volumes to Windows guest virtual machines, for use by Exchange and as a document file server?

a) Connect the ESXi host to the FAS with NFS, create a VMDK disk on the datastore and add it as a drive to the virtual machine.

b) Install the NFS client in the Windows virtual machine and connect directly to the SAN through the storage switches?

NetApp FAS2240-2, with a pair of stacked Cisco 2960s as storage switches.
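
To be clear, option (a) on the ESXi side is just mounting the export as a datastore and then adding a new virtual disk to the VM. Rough sketch only; the controller IP and volume name are made up and I'm assuming ESXi 5.x for the esxcli syntax:

    # on each ESXi host: mount the NFS export as a datastore (the label is arbitrary)
    esxcli storage nfs add --host=10.20.2.10 --share=/vol/docstore --volume-name=docstore
    esxcli storage nfs list

After that the VMDK gets created on the docstore datastore through the vSphere client and attached to the Windows VM as a normal disk.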

They both have positives and negatives. The data is backed up to tape every day via Windows agents, and the tapes go off site. We will also be replicating the NetApp off site with SnapMirror, and Exchange will be replicated with a DAG to a second Exchange server at the offsite location.
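
For the SnapMirror part, on 7-mode that ends up as one schedule line per volume in /etc/snapmirror.conf on the destination filer. Sketch only; the filer and volume names are invented and I'm assuming volume SnapMirror on Data ONTAP 7-mode:

    # /etc/snapmirror.conf on the DR filer: replicate each volume nightly at 23:00
    # format: source:vol  destination:vol  arguments  minute hour day-of-month day-of-week
    netapp01:exchange_db1  dr-netapp01:exchange_db1_mirror  -  0 23 * *
    netapp01:docstore      dr-netapp01:docstore_mirror      -  0 23 * *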

At the moment the NetApp is configured with NFS to the three ESXi hosts, but the question is whether I should add a virtual machine port group and then use Windows NFS to connect to the NetApp, or whether I should add a drive via ESXi NFS.
 
The main reason is that the NetApp is currently not connected to the main switch stacks; the management port is, but the actual data interfaces only go through the storage switches and then to the ESXi hosts.

I could add another VM port group and then go to the SAN through the storage switches, i.e. add a virtual NIC to the virtual machine that connects to the storage switches and then access the volume through NFS.

Having the NetApp as a standalone file server would mean I would have to add the NetApp data ports to the main switch stacks, as users have to be able to access the data; the other way, they would access it through the network interface of the server I am attaching the volume to.
 
I currently have Exchange on NFS datastores that connect to the NetApp via the ESXi hosts. There are dual controllers, and the way I have it set up is fully redundant, so we essentially only have 2 gigabit per controller to work with, as the other two interfaces are for redundancy. I have two Exchange databases, and each database sits on its own volume on its own controller, which allows for 1 gigabit of NFS per database. I know it doesn't have to be set up this way, but Exchange is their primary application, so I have dedicated 2x 1 gigabit to Exchange alone. Then I have 1 gigabit on one controller for the VMware OS datastore, and the other gigabit connection I will use for the document management store.

I have just gone ahead with adding the drive to a Windows server on an NFS datastore connected to the NetApp through the storage switches. This way the storage traffic is isolated from the main stacks and the network traffic goes through different interfaces, which as far as I can tell should maximise performance.

My biggest issue with doing it this way was that I don't like the idea of putting millions of files and 800 GB inside a VMDK disk. I spoke to some VMware experts on freenode who I talk to a lot, and one of them strongly suggested putting it directly on the NetApp for various reasons, which turned out not to be directly relevant to my environment, but are still valid nonetheless.
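
For what it's worth, the exports themselves are nothing special. A sketch of roughly what they look like on the 7-mode CLI, with invented volume names and subnets, one Exchange volume per controller:

    # controller 1
    exportfs -p rw=10.20.2.0/24,root=10.20.2.0/24 /vol/exchange_db1
    exportfs -p rw=10.20.2.0/24,root=10.20.2.0/24 /vol/vm_os
    # controller 2
    exportfs -p rw=10.20.3.0/24,root=10.20.3.0/24 /vol/exchange_db2
    exportfs -p rw=10.20.3.0/24,root=10.20.3.0/24 /vol/docstore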

It's also easier this way as I don't have to reconfigure the storage switches, and when we do a DR we can just bring up the two datastores and start the VM and it's ready to go. No need to try and get a connection working at the OS level.

But are you saying that I couldn't even use the NFS client in Windows to connect to the NetApp directly? I thought that was possible?

The other option is to use a CIFS share for the file store, and I think that is what the VMware guy was suggesting. But then I would have to connect the main switch stacks to the NetApp and force the OS store and one Exchange database through the same 1 gigabit NFS interface, as that controller has both an Exchange datastore and a VM OS datastore. In their old configuration they only had 1 gigabit of NFS for Exchange and the VMware OS store, and when you uploaded something like an ISO to the datastore it would freeze the Exchange mailboxes. This is what I wanted to get away from. Another factor is that the document management store does not really need 2x 1 gigabit, as it's just used for retrieving documents. The way the dual controllers work for HA, you have to match up the interfaces on both controllers, so I would have to have 2x gigabit from each controller connected to the main switch stacks if I want redundancy.
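
If I did go the CIFS route, the share itself is only a couple of commands on 7-mode once CIFS is licensed and joined to the domain; the share name, path and group below are placeholders:

    cifs shares -add docstore /vol/docstore -comment "Document management store"
    cifs access docstore "DOMAIN\Domain Users" "Full Control"

The networking is the sticking point, not the share.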
 
This way a storage switch can fail and a controller can fail and all the datastores will remain active.

What I could have done is not use interface redundancy; then I would have had 4x 1 gigabit per controller. At the moment 2x 1 gigabit per controller is not in use.

I am not sure what you mean; I do see each interface on the controller as extra bandwidth. That is why I set it up so that Exchange uses 2 gigabit of NFS, 1 gigabit per controller. Exchange is super fast now, compared to what it used to be when it ran over just 1 gigabit of NFS shared with the VMware OS store.
 
The storage switches are two switches stacked and isolated from the main stacks.

The storage switches are 2x 2960 and the main stacks are 6x 3750, 5x 3750 and 2x 3750 (3 stacks).

They had to be stacked, otherwise the NFS heartbeat would not work when using redundancy. On the NetApp I combine two interfaces into one to create a virtual interface, and then send one NIC to each switch. On each ESXi host I have one port group for nfs-general that has two NICs, one going into each switch, and another port group called nfs-exchange that also has two NICs, one going into each switch. This allows for switch and controller redundancy, but the downside is that one of the NFS paths sits unused until the event of a failure.
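
For anyone following along, on the NetApp side (7-mode) that is roughly the following; the interface and VIF names are examples rather than my exact config:

    # one single-mode (active/standby) VIF per datastore network,
    # with one member NIC cabled to each storage switch
    ifgrp create single ifgrp_nfs_general e0a e0b
    ifgrp create single ifgrp_nfs_exchange e0c e0d
    ifconfig ifgrp_nfs_general 10.20.2.10 netmask 255.255.255.0 partner ifgrp_nfs_general
    ifconfig ifgrp_nfs_exchange 10.20.2.11 netmask 255.255.255.0 partner ifgrp_nfs_exchange

On the ESXi side the nfs-general and nfs-exchange port groups each get two uplinks, one to each storage switch, with the default port-ID teaming.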
 
It's already in place and not overly complicated; it's just difficult to explain.

That is essentially what I have done, except we have [1a 1b] [1c 1d], [2a 2b] [2c 2d].

[ ] indicates a VIF. 1a and 1c go to switch 1; 1b and 1d go to switch 2.

Then the interfaces on controller 1 have to match up with controller 2 so that the controllers have HA support, which means we are left with 4 interfaces on the NetApp across both controllers. Each interface has two paths.

We had to stack the switches or it would not work, because NFS requires a heartbeat over the non-active paths.

On the NetApp traffic monitoring, two of the interfaces per controller are in use, with say 1 MByte/sec of constant traffic, and the other two interfaces on the controller show 0.1-0.3 MByte/sec, which is just heartbeat traffic.
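
You can see the same thing from the controller shell as well as from the graphs; these are standard 7-mode commands, quoted from memory:

    ifstat -a      # cumulative per-interface counters
    sysstat -x 1   # rolling per-second view of CPU, NFS ops and network throughput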

On the ESXi datastore side, each volume is presented on its own VIF. If we wanted more volumes, I would just connect to them with the VIFs already in use. The reason I don't just use one VIF for Exchange is that the volumes are on different controllers.

This is just one of many ways to configure the NetApp. The other way is to have one controller in passive mode and have one big aggregate instead of splitting the disks per controller. In hindsight I probably would have gone with that configuration, because it allows spares to be allocated across all the disks, but this way we don't have a controller doing nothing.
 
The main reason we were forced to go down the storage switch route is that the existing switch stacks are 100 Mbit except for one switch in each stack, which at the time of starting the project was completely full. Now that we have decommissioned the old NetApp, ESX and Exchange 2003 environments, the gigabit switches have freed up.

OK, thanks for the NFS info. In that case the only options are CIFS or the existing configuration.
 
I don't see the point of presenting your various NFS volumes on separate IP addresses and different interfaces.

I'm guessing 10.20.2.x is one NetApp controller (I'll call it 01) and 10.20.3.x is the other (I'll call it 01b). You said these are in an HA pair, which is great.

I'd present all volumes on a controller on a single IP address. I'd also have both controllers on the same subnet for simplicity too.

I'd create two four-port trunks on your switch stack. Trunk 1 would be two ports on member 1 and two ports on member 2; trunk 2 would be the same, different ports obviously.

Netapp01 goes in trunk 1, 01b in trunk 2.

This config gives you maximum bandwidth, is easily managed, and makes your ESXi config easier because you've only one trunk to your storage to set up; adding another datastore for VMware is just a matter of creating the volume in OnCommand, exporting it and adding it in vSphere. You retain switch redundancy and controller redundancy.
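
To illustrate, with made-up port numbers, VLAN and names, each trunk would look something like this on the Cisco side, with a matching dynamic multimode (LACP) ifgrp on the controller:

    ! Cisco stack: 4-port cross-stack LACP channel for netapp01 (repeat with different ports for 01b)
    interface range GigabitEthernet1/0/1 - 2 , GigabitEthernet2/0/1 - 2
     switchport mode access
     switchport access vlan 20
     channel-group 10 mode active

    # NetApp 7-mode: one LACP ifgrp per controller, all volumes presented on its single IP
    ifgrp create lacp ifgrp_data -b ip e0a e0b e0c e0d
    ifconfig ifgrp_data 10.20.2.10 netmask 255.255.255.0 partner ifgrp_data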

Your setup is more complicated than it needs to be because unless I'm missing something, you've got no more resilience, less bandwidth, and more admin overhead than the much simpler config I've suggested.

So you would create one VIF per controller, combining 4 interfaces into one? It seemed to me that the disks were faster than the NFS protocol: when I ran benchmarks over iSCSI with MPIO I would get double the disk performance, but over NFS it seemed capped. So to me it made sense to use more than one NFS connection per controller, if nothing else to improve latency and throughput for that traffic. I can't really foresee any more volumes needing to be created, as it's a small site, so 4 volumes on 4 interfaces seems better than 4 volumes on 2 interfaces. ESX does not support LACP, so if you combine 4 interfaces into one you are still only going to get 1 gigabit, as the ESX side is limited to 1 gigabit.
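
To expand on that last point: on a standard vSwitch the closest thing is 'route based on IP hash' teaming with a static EtherChannel on the switch side (mode on, not LACP; LACP needs the distributed switch), and even then a single datastore connection hashes to one uplink, so any one NFS datastore still tops out at roughly 1 gigabit. Sketch of checking or changing the policy on ESXi 5.x, vSwitch name assumed:

    esxcli network vswitch standard policy failover get -v vSwitch1
    esxcli network vswitch standard policy failover set -v vSwitch1 --load-balancing iphash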
 