It's already in place and not overly complicated. It's just difficult to explain.
That is essentially what I have done, except we have [1a 1b] [1c 1d] and [2a 2b] [2c 2d], where [ ] indicates a VIF. 1a and 1c go to switch 1, and 1b and 1d go to switch 2.
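For reference, on a 7-Mode filer that layout would be created with something along these lines (the port names e0a-e0d, VIF names, and addresses are made up for illustration, your names will differ):

    # Controller 1: two multi-mode VIFs, each with one leg on each switch
    vif create multi vif_a -b ip e0a e0b    # e0a -> switch 1, e0b -> switch 2
    vif create multi vif_b -b ip e0c e0d    # e0c -> switch 1, e0d -> switch 2
    ifconfig vif_a 10.0.1.10 netmask 255.255.255.0
    ifconfig vif_b 10.0.2.10 netmask 255.255.255.0

Controller 2 gets the mirror image with its own addresses.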
The interfaces on controller 1 then have to match up with those on controller 2 so that the controllers have HA support. That leaves us with four logical interfaces (VIFs) on the NetApp across both controllers, and each of them has two paths.
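The matching is done with the usual 7-Mode partner statement in /etc/rc, so on takeover each VIF's address comes up on its counterpart. A minimal sketch, assuming the hypothetical names above and the same VIF names on both heads:

    # Controller 1's /etc/rc (controller 2 mirrors this)
    ifconfig vif_a partner vif_a
    ifconfig vif_b partner vif_b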
We had to stack the switches or it would not work, because NFS requires heartbeat traffic over the non-active paths.
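On the switch side, that means each VIF's two ports have to look like one logical switch, which a stack gives you. Assuming stacked Cisco switches (our setup may differ from yours), it would be a cross-stack EtherChannel roughly like this, with interface and channel numbers purely illustrative:

    ! static cross-stack channel to match a static multi-mode VIF
    interface range GigabitEthernet1/0/1, GigabitEthernet2/0/1
     channel-group 11 mode on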
In the NetApp traffic monitoring, two of the interfaces per controller are in use, carrying say 1 MB/s of constant traffic, while the other two interfaces on the controller show 0.1-0.3 MB/s, which is just heartbeat traffic.
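You can see that split directly on the filer; in 7-Mode something like:

    ifstat -a           # cumulative counters per physical port
    vif stat vif_a 1    # per-link packet rates for one VIF, 1-second interval

shows the two busy links against the two that only carry heartbeat (vif_a again being the hypothetical name from above).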
On the ESXi datastore side, each volume is presented on its own VIF. If we wanted more volumes, I would just connect to them over the VIFs already in use. The reason I don't just use one VIF for Exchange is that the volumes are on different controllers.
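On the ESXi host, that just means pointing each NFS datastore at the address of the VIF that owns the volume. With the classic CLI, the hypothetical addresses from above, and made-up volume names, it would be something like:

    esxcfg-nas -a -o 10.0.1.10 -s /vol/exchange_db  exchange_db
    esxcfg-nas -a -o 10.0.2.10 -s /vol/exchange_log exchange_log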
This is just one of many ways to configure the NetApp. Another way is to run one controller in passive mode with one big aggregate, instead of splitting the disks per controller. In hindsight I probably would have gone with that configuration, because it allows spares to be allocated across all the disks. But this way we don't have a controller doing nothing.
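For comparison, that active/passive alternative would be roughly this on the active head (aggregate name and disk count hypothetical):

    aggr create aggr1 -t raid_dp 44    # one big aggregate owning nearly all data disks
    aggr status -s                     # confirm what is left over as hot spares

Since a spare can only cover disks owned by the same controller, pooling everything on one head is what makes the shared spare allocation possible.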