Storage Spaces Direct Networking question

Quick question for those who know this technology

2-node cluster, each node with dual 100Gb Mellanox network cards (RDMA capable) directly connected to each other: Server1 NIC1 > Server2 NIC1 and Server1 NIC2 > Server2 NIC2.

Is it good practice to connect both NICs to a virtual SET switch and carry both storage and live migration traffic over it?

I think it is, but I'm wondering if there's anything I haven't thought about.

There will also be a second network card (a quad-port Intel 10Gb connected to a physical switch), so my plan for that is a second virtual switch for LAN access and cluster heartbeat.
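Roughly what I have in mind for that second switch (the adapter names and the heartbeat VLAN are placeholders for now):

New-VMSwitch -Name LANSwitch -AllowManagementOS $True -NetAdapterName 10G-1,10G-2,10G-3,10G-4 -EnableEmbeddedTeaming $True
Add-VMNetworkAdapter -ManagementOS -SwitchName LANSwitch -Name Heartbeat
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName Heartbeat -Access -VlanId 100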
 
Using a SET switch is the way to go.

Create the SET switch and then add host vNICs to it for each VLAN afterwards.
Then you can just define which VLAN each VM is on via the VM settings.
You can also define QoS settings for the SAN traffic.
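For example, to drop a VM onto VLAN 11 once the switch is up (the VM name here is just an example):

Connect-VMNetworkAdapter -VMName VM01 -SwitchName SETSwitch
Set-VMNetworkAdapterVlan -VMName VM01 -Access -VlanId 11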

Here is the rough PowerShell I use for a 2-node S2D cluster:

# Create the SET switch across both ports (the management OS keeps vNICs on it)
New-VMSwitch -Name SETSwitch -AllowManagementOS $True -NetAdapterName 25G-1,25G-2 -EnableEmbeddedTeaming $True

# Host vNICs in the management OS, one per traffic class
Add-VMNetworkAdapter -ManagementOS -SwitchName SETSwitch -Name VLAN-Native
Add-VMNetworkAdapter -ManagementOS -SwitchName SETSwitch -Name VLAN-11
Add-VMNetworkAdapter -ManagementOS -SwitchName SETSwitch -Name VLAN-12
Add-VMNetworkAdapter -ManagementOS -SwitchName SETSwitch -Name LiveMigration-201
Add-VMNetworkAdapter -ManagementOS -SwitchName SETSwitch -Name SANFabric1-202
Add-VMNetworkAdapter -ManagementOS -SwitchName SETSwitch -Name SANFabric2-202

# Tag each host vNIC with its VLAN (VLAN-Native stays untagged on VLAN 0)
$Nic = Get-VMNetworkAdapter -Name VLAN-Native -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 0

$Nic = Get-VMNetworkAdapter -Name VLAN-11 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 11

$Nic = Get-VMNetworkAdapter -Name VLAN-12 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 12

$Nic = Get-VMNetworkAdapter -Name LiveMigration-201 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 201

$Nic = Get-VMNetworkAdapter -Name SANFabric1-202 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 202

$Nic = Get-VMNetworkAdapter -Name SANFabric2-202 -ManagementOS
Set-VMNetworkAdapterVlan -VMNetworkAdapter $Nic -Access -VlanId 202


# Enable RDMA on the storage and live migration host vNICs
Get-NetAdapterRDMA -Name *SANFabric* | Enable-NetAdapterRDMA
Get-NetAdapterRDMA -Name *LiveMigration* | Enable-NetAdapterRDMA

# DCB: classify SMB Direct (NetworkDirect port 445) as priority 3, make only priority 3 lossless, reserve 50% bandwidth via ETS
Install-WindowsFeature Data-Center-Bridging
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
Enable-NetAdapterQos -InterfaceAlias "25G-1"
Enable-NetAdapterQos -InterfaceAlias "25G-2"
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Pin each SMB fabric vNIC to one physical port so the two fabrics stay on separate links
Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName SANFabric1-202 -ManagementOS -PhysicalNetAdapterName 25G-1
Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName SANFabric2-202 -ManagementOS -PhysicalNetAdapterName 25G-2

New-NetIPAddress -InterfaceAlias "vEthernet (VLAN-Native)" -IPAddress 172.16.X.X -PrefixLength 24 -Type Unicast
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (VLAN-Native)" -ServerAddresses 172.16.X.X

New-NetIPAddress -InterfaceAlias "vEthernet (VLAN-11)" -IPAddress 172.16.11.X -PrefixLength 24 -Type Unicast
New-NetIPAddress -InterfaceAlias "vEthernet (VLAN-12)" -IPAddress 172.16.12.X -PrefixLength 24 -Type Unicast
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration-201)" -IPAddress 10.0.0.10 -PrefixLength 24 -Type Unicast
New-NetIPAddress -InterfaceAlias "vEthernet (SANFabric1-202)" -IPAddress 10.0.10.10 -PrefixLength 24 -Type Unicast
New-NetIPAddress -InterfaceAlias "vEthernet (SANFabric2-202)" -IPAddress 10.0.10.11 -PrefixLength 24 -Type Unicast

hope this helps
 
Thanks!!

It suddenly dawned on me that since my storage is going over the directly attached Mellanox cards (no physical switch involved), I don't need to do anything on the virtual side for it. The only other traffic I'm mixing in is live migration, but I'm fine with that to be honest as I'll have 2 x 100Gb connections, and I'll still have RDMA, which is important to me.
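The one thing I'll still set on each node is live migration over SMB, so it actually uses RDMA on those direct links. Roughly this (the concurrent migration limit is just a guess for now):

Enable-VMMigration
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
Set-VMHost -MaximumVirtualMachineMigrations 2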

Just need one SET switch for the other stuff and I'm sorted
 