Storage question

What would you do when you need to present volumes to Windows guest virtual machines, for use by Exchange and as a document file server?

a) Connect the ESXi host to the FSA with NFS, create a VMDK disk on the datastore, and add it as a drive to the virtual machine.

b) Install an NFS client in the Windows virtual machine and connect directly to the SAN through the storage switches?
 
NetApp FAS2240-2, with a pair of stacked Cisco 2960s as storage switches.

Both have positives and negatives. The data is backed up to tape via Windows agents every day and taken off site. We will also be replicating the NetApp off site with SnapMirror, and Exchange will be replicated with a DAG to a second Exchange server at the off-site location.

At the moment the NetApp is connected via NFS to the three ESXi hosts, but the question is whether I should add a virtual machine port group and use Windows NFS to connect to the NetApp, or add a drive via an ESXi NFS datastore.
 
I'm guessing FSA means FAS means NetApp.

In which case why would you want a file server when you already have a box that can domain join and share out volumes in a Windows-friendly way?
 
The main reason is that the NetApp is currently not connected to the main switch stacks. The management port is, but the actual data interfaces only go through the storage switches and then to the ESXi hosts.

I could add another VM port group and go to the SAN through the storage switches, i.e. add a virtual NIC to the virtual machine that connects to the storage switches and then access the volume through NFS.

Having the NetApp serve files stand-alone would mean adding the NetApp data ports into the main switch stack, as users will have to be able to access the data; at the moment they would do that through the network of the server I am attaching it to.
 
It's just a bit of a strange way to set it up really; it sort of negates the point of having a NAS unit if you're just going to use it as a SAN. You've got 4 GbE ports per controller, assuming you haven't got the 10GbE mezzanine cards, so why not have two go to your production network and two sit in your VM storage network?
 
I think the decision is whether to do things as a virtual disk or as a direct connection to a share; I ignored the NFS part for the reasons you gave. The answer depends on what backup software you're using and whether the application software you want to use is happy to put its data store on an SMB share.

I wouldn't be shoehorning NFS into Windows.
 
An NFS volume on the NetApp presented to ESXi as a datastore would be the way I'd go based on what you've said.

Is your FAS fitted with the HA dual controllers?

Forget b, and I wouldn't be using SMB/CIFS for Exchange. We use CIFS for user drives, team drives and our EDMS system, which holds about 4 million files, and it's fine for that, but CIFS performance tends to drop off quickly under heavy load. You ideally want CIFS on its own controller: we use NFS and iSCSI on one controller and CIFS on the other, but they're HA paired, so if one dies the other will take over.

You could also look at raw device mappings in ESXi using iSCSI from the NetApp; this may be preferable if you've got, or are looking at, SnapManager for Exchange.
 
I currently have Exchange on NFS datastores that connect to the NetApp via the ESXi hosts. There are dual controllers, and the way I have it set up is fully redundant, so we essentially only have 2 Gbit per controller to work with, as the other two ports are for redundancy.

I have two Exchange databases, and each database sits on its own volume on its own controller, which allows 1 Gbit of NFS per database. I know it doesn't have to be set up this way, but Exchange is their primary application, so I have dedicated 2x 1 Gbit for Exchange alone. Then I have 1 Gbit on one controller for the VMware OS datastore, and the other gigabit connection I will use for the document management store.

I have just gone ahead with adding the drive to a Windows server on an NFS datastore connected to the NetApp through the storage switches. This way the storage traffic is isolated from the main stacks and the network traffic goes through different interfaces, which should, as far as I can tell, maximise performance. My biggest issue with doing it this way was that I don't like the idea of putting millions of files and 800GB inside a VMDK disk. I spoke to some VMware experts on Freenode who I talk to a lot, and one strongly suggested putting it directly on the NetApp for various reasons, which turned out not to be directly relevant to my environment, but are still valid nonetheless.
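To make the split clearer, here's a rough Python sketch of how the datastores map onto the controllers and links. The controller, link and datastore names are just illustrative, not the real ones from the config:

# Rough sketch of the layout described above; controller, link and
# datastore names are illustrative only, not the real configuration.
layout = {
    "controller1": {
        "nfs-link-1": "exchange-db1 datastore",
        "nfs-link-2": "vmware-os datastore",
    },
    "controller2": {
        "nfs-link-1": "exchange-db2 datastore",
        "nfs-link-2": "document-store datastore (holds the VMDK)",
    },
}

# Each Exchange database gets a dedicated 1 Gbit path on its own controller,
# so copying an ISO to the OS datastore no longer competes with mailbox I/O.
for controller, links in layout.items():
    for link, datastore in links.items():
        print(f"{controller} / {link} -> {datastore}")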

It's also easier this way, as I don't have to reconfigure the storage switches, and when we do a DR we can just bring up the two datastores, start up the VM and it's ready to go. No need to try to get a connection working at the OS level.

But are you saying that I couldn't even use an NFS client from Windows to connect to the NetApp directly? I thought that was possible?

The other option is to use a CIFS share for the file store, and I think that is what the VMware guy was suggesting. But then I would have to connect the main switch stacks to the NetApp and force the OS store and one Exchange database through the same 1 Gbit NFS interface, as that controller has both an Exchange datastore and a VM OS datastore. In their old configuration they only had 1 Gbit of NFS for Exchange and the VMware OS store, and when you uploaded something like an ISO to the datastore it would freeze the Exchange mailboxes. That is what I wanted to get away from.

Another factor is that the document management store does not really need 2x 1 Gbit, as it's just used for retrieving documents. The way the dual controllers work for HA, you have to match up the interfaces on both controllers, so I would have to have 2x 1 Gbit from each controller connected to the main switch stacks if I want redundancy.
 
I wouldn't look at redundancy in that way at all; there's no reason not to see multiple paths to a controller as extra bandwidth.
 
This way we can have a storage switch fail and a controller fail and all the datastores remain active.

What I could have done is not use interface redundancy, in which case I would have had 4x 1 Gbit per controller. Because at the moment 2x 1 Gbit per controller is not in use.

I am not sure what you mean; I do see each interface on the controller as extra bandwidth. That is why I set it up so that Exchange uses 2 Gbit of NFS, 1 Gbit per controller. Exchange is super fast now compared to what it used to be on just 1 Gbit of NFS shared with the VMware OS store.
 
Storage switches are two switches stacked and isolated from the main stacks.

Storage switches are 2x2960 and the main stacks are 6x3750, 5x3750 and 2x3750 (3 stacks)

They had to be stacked, otherwise the NFS heartbeat would not work when using redundancy. On the NetApp I combine two interfaces into one and create a virtual interface out of them, then send one NIC to each switch. On each ESXi host I have one port group called nfs-general with two NICs, one going into each switch, and another port group called nfs-exchange that also has two NICs, one into each switch. This allows for switch and controller redundancy, but the downside is that one of the NFS paths sits unused until the event of a failure.
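If it helps, this is roughly how I think of the active/passive VIF behaviour, as a toy Python sketch. The port names are made up, not from the actual config:

# Toy model of an active/passive (single-mode) VIF: two physical ports back
# one logical interface, only one carries traffic, the other just sits there
# answering heartbeats until a failover. Port names are made up.
class SingleModeVif:
    def __init__(self, name, ports):
        self.name = name
        self.ports = list(ports)
        self.active = self.ports[0]   # only one active path at a time

    def fail_port(self, port):
        """Lose a port (or its switch); traffic moves to the standby port."""
        self.ports.remove(port)
        if port == self.active:
            self.active = self.ports[0] if self.ports else None

vif = SingleModeVif("nfs-exchange", ["e0a", "e0b"])
assert vif.active == "e0a"
vif.fail_port("e0a")
assert vif.active == "e0b"   # datastore stays up, but while both ports were
                             # healthy the standby link's bandwidth went unused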
 
I've never heard of redundant switches running as a stack before. How I've always done things is to take, for example, a dual-controller (1 & 2) system with two NICs (A & B) on each controller, so the ports are 1A, 1B, 2A and 2B.

One switch has 1A and 2A connected to it; the other switch has 1B and 2B. Each host has one connection into each switch. This lets you lose one switch and one controller before you lose access to anything (albeit at degraded performance).
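As a quick sanity check of that claim, here's a little Python sketch that enumerates the single-switch-plus-single-controller failure cases. The port, controller and switch names are just for illustration:

# Sketch of the failure maths for the cabling above: one NIC from each
# controller into each switch. Names are illustrative only.
from itertools import product

links = {                     # port -> (controller, switch) it connects
    "1A": ("ctrl1", "sw1"),
    "1B": ("ctrl1", "sw2"),
    "2A": ("ctrl2", "sw1"),
    "2B": ("ctrl2", "sw2"),
}

def reachable(dead_switch=None, dead_ctrl=None):
    """Controllers still reachable after the given failures."""
    return {ctrl for port, (ctrl, sw) in links.items()
            if sw != dead_switch and ctrl != dead_ctrl}

# Lose any one switch and any one controller: the surviving controller
# (which takes over its partner's storage) is still reachable.
for sw, ctrl in product(["sw1", "sw2"], ["ctrl1", "ctrl2"]):
    survivor = ({"ctrl1", "ctrl2"} - {ctrl}).pop()
    assert survivor in reachable(dead_switch=sw, dead_ctrl=ctrl)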

But as this is a NAS and not a SAN I think you're making things complicated for yourself.
 
It's already in place and not overly complicated; it's just difficult to explain.

That is essentially what I have done, except we have [1a 1b] [1c 1d] and [2a 2b] [2c 2d].

[ ] indicates a VIF. 1a goes to switch one and 1c goes to switch one; 1b goes to switch two and 1d goes to switch two.

Then the interfaces on controller 1 have to match up with controller 2 so that the controllers have HA support, which means we are left with four interfaces on the NetApp across both controllers. Each interface has two paths.

We had to stack the switches or it would not work, because NFS requires a heartbeat over the non-active paths.

On the NetApp traffic monitoring, two of the interfaces per controller are in use, with say 1 MB/sec of constant traffic, and the other two interfaces on the controller carry 0.1-0.3 MB/sec, which is just heartbeat traffic.

On the ESXi datastore side, each volume is presented on its own VIF. If we wanted more volumes, I would just connect to them with the VIFs already in use. The reason I don't just use one VIF for Exchange is that the volumes are on different controllers.

This is just one of many ways to configure the NetApp. The other way is to have one controller in passive mode and have one big aggregate instead of splitting the disks per controller. In hindsight I probably would have done that configuration, because it allows spares to be allocated across all the disks, but this way we don't have a controller doing nothing.
 
There is no way that running NFS inside a Windows VM is a good idea or supported.

Given your two choices I would mount up a new VMDK.

@Caged, using a stack like this is actually quite a nice solution to the original problem. He gets to EtherChannel down to the NetApp, gets pretty reasonable switch redundancy (because a stack member can fail without nuking the whole stack, although he can't reboot the stack etc) and has a single point of management.

He only has one link's worth of bandwidth because the load balancing is based on source/destination and he only has one source/destination pair per ESX host, so there is nothing to load balance...
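Roughly speaking, the channel picks a member link by hashing the source/destination pair, so the same pair always lands on the same link. A little illustrative Python (not Cisco's actual hash, and the host/VIF names are made up):

# Illustrative only (not Cisco's real algorithm): EtherChannel-style balancing
# hashes the source/destination pair and always maps the same pair to the
# same member link, so one ESX host talking to one VIF uses one link only.
import zlib

members = ["link-a", "link-b"]          # two physical links in the channel

def pick_link(src: str, dst: str) -> str:
    return members[zlib.crc32(f"{src}->{dst}".encode()) % len(members)]

print(pick_link("esx-host-1", "netapp-vif"))   # always the same member link
print(pick_link("esx-host-2", "netapp-vif"))   # a second host may hash to the other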

I personally think it is ludicrous to have a dedicated storage switch for this setup, but you could easily add a VLAN trunk to the DMS VIF, put the CIFS shares on that interface and then trunk that VLAN over to the LAN switches.
 
The main reason we were forced to go down the storage switch route is that the existing switch stacks are 100 Mbit, except for one switch in each stack, which at the time of starting the project was completely full. Now that we have decommissioned the old NetApp, ESX and Exchange 2003 environments, the gigabit switches have freed up.

Ok, thanks for the NFS info. In that case the only options are CIFS or the existing configuration.
 