The NetApp 2240 and ESXi 5.1 hosts are all set up.
The redundancy is fully working. I can turn off a storage switch and do a NetApp controller takeover and everything continues to work OK.
There is still one outstanding issue that we can't find any help on. I'm probably going to call NetApp about it and see what they say, but I'm wondering if anyone has any insight.
The old Exchange server on the old infrastructure uses iSCSI through Windows to connect to the old NetApp, so we have to replicate this setup on the new kit. I have set up iSCSI with two NICs on a vSwitch; each NIC goes into its own storage switch. Each NetApp controller has two gigabit NICs dedicated solely to iSCSI, one going into each switch. iSCSI is on its own VLAN and jumbo frames are enabled. I set up MPIO in Windows to work with multiple paths and it works perfectly. As I said, I can turn off switches or do a takeover and the iSCSI storage stays connected without any issues.
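Since jumbo frames are enabled, one thing worth double-checking is that the 9000 MTU actually holds end to end (guest vNIC, vSwitch, the 2960 ports and the NetApp interfaces). A rough sketch of that check from a Windows guest, in Python just for illustration; the 10.0.0.10 target is a placeholder, not one of the real iSCSI addresses:

```python
# Jumbo-frame sanity check from a Windows guest (sketch only).
# 8972 = 9000-byte MTU minus 20 bytes of IP header and 8 bytes of ICMP header.
# TARGET is a placeholder; substitute one of the filer's iSCSI IPs.
import subprocess

TARGET = "10.0.0.10"
PAYLOAD = 8972

# Windows ping: -f sets Don't Fragment, -l sets the payload size, -n is the count.
result = subprocess.run(
    ["ping", "-f", "-l", str(PAYLOAD), "-n", "4", TARGET],
    capture_output=True, text=True,
)
print(result.stdout)
if "needs to be fragmented" in result.stdout:
    print("Jumbo frames are NOT passing end to end on this path.")
```

If that ping fails with a fragmentation error while a normal-sized ping works, one hop on the path isn't really set to 9000.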
The issue is latency. When we run ICMP pings from the Windows guests to the iSCSI IPs on the NetApp, we get 1 ms most of the time, but we also get a lot of high latency spikes of 30-70 ms. The old NetApp was a solid 1 or 2 ms the whole time. This has us very concerned. I'm not sure what could be causing it, as we have short cables going from the ESX hosts to the storage switches (Cisco 2960) and from there into the NetApp, and it's completely isolated.
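To put numbers on the spikes rather than eyeballing a ping window, something along these lines could be left running against each iSCSI IP for a few minutes to count how often the RTT goes over a threshold. This is only a sketch; the target IP, sample count, and 10 ms threshold are placeholders rather than values from this setup:

```python
# Sample ICMP RTT from a Windows guest to an iSCSI target and count spikes (sketch).
# TARGET, SAMPLES, and SPIKE_MS are placeholders, not values from the setup above.
import re
import subprocess
import time

TARGET = "10.0.0.10"
SAMPLES = 300          # about five minutes at one ping per second
SPIKE_MS = 10.0

rtts = []
for _ in range(SAMPLES):
    out = subprocess.run(
        ["ping", "-n", "1", "-w", "1000", TARGET],
        capture_output=True, text=True,
    ).stdout
    # Windows prints "time=34ms" for measurable RTTs and "time<1ms" below a millisecond.
    match = re.search(r"time[=<](\d+)ms", out)
    if match:
        rtts.append(float(match.group(1)))
    time.sleep(1)

if rtts:
    spikes = [r for r in rtts if r > SPIKE_MS]
    print(f"replies: {len(rtts)}/{SAMPLES}  max: {max(rtts):.0f} ms  "
          f"spikes over {SPIKE_MS:.0f} ms: {len(spikes)}")
else:
    print("no ICMP replies received")
```

Correlating when the spikes happen with what the switches and the filer are doing at that moment tends to be more telling than the averages.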
Any ideas what we could try?
I did an ATTO benchmark on a disk added through NFS.
We also ran Jetstress on the iSCSI volumes and are doing other tests. Any comments on the ATTO benchmark? It almost looks like it's being limited to 1 gigabit, but it has 2 gigabit for NFS, so I'm not sure why it's stopping at 1. Probably because it's multipath and not a 2-gigabit LACP aggregate?
For NFS we have 2x gigabit NICs on each controller combined into a VIF and on its own VLAN. Then we have one port group going to two NICs; each NIC goes to its own switch.
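On the 1-gigabit ceiling: as far as I understand it, a single NFS datastore session is one TCP connection, and both the VIF on the filer and the teaming on the vSwitch hash a given connection onto one physical NIC, so the 2x1GbE only adds up across multiple sessions, not within a single benchmark stream. A quick back-of-envelope check (the overhead figure is an assumption, not something measured on this kit):

```python
# Does the ATTO plateau line up with a single 1 Gbit link? (rough sketch)
# The ~7% overhead for Ethernet/IP/TCP/NFS framing is an assumed figure.
LINK_GBIT = 1.0
RAW_MB_S = LINK_GBIT * 1000 / 8      # 125 MB/s raw line rate for one link
OVERHEAD = 0.93                      # assume roughly 7% lost to protocol framing
print(f"Expected single-link ceiling: ~{RAW_MB_S * OVERHEAD:.0f} MB/s")
# -> roughly 115 MB/s, about where a benchmark capped at 1 Gbit tops out.
```

If the ATTO numbers flatten out around that figure, it's the single-stream limit of one link rather than anything wrong with the aggregate.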