Stupid question time

Got my shiny new Nimble CS215 today and I'm LOVING IT!

Anyway, on to the stupid question:
I've got 2 x Cisco SG500XG-8F8T switches in a stack for my iSCSI traffic over 10GbE, and one 1Gb link back to the router (8206zl) from the top switch of the stack.

The stupid question:
Do I need to connect a cable from the 1Gb port of each switch back to the 8206? I'm assuming yes, as otherwise there'd be no redundancy in terms of management if one switch failed, but I've never used a stack before, so I don't want to go creating any silly loops.
 
Run one management VLAN cable from each switch in the stack, and run them in a port-channel if possible. No idea if your router can cope with two interfaces in the same VLAN, though?
 
This is pretty much my first time using any Cisco kit; I've always previously used HP ProCurves, so when you say a port-channel, do you mean what I would understand to be an LACP trunk? (Trying to make sense of what Google's giving me on that front.)
 
Yes, essentially a Cisco "EtherChannel" is an LACP (or PAgP, depending on the hardware) trunk.
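
Something along these lines, though I've not got an SG500 in front of me, so treat the syntax as approximate (the small-business CLI is close to, but not identical to, classic IOS), and the port numbers and VLAN 10 are just made-up examples:

Cisco side (IOS-style):
interface range gi1/0/1-2
 channel-group 1 mode active
! "active" = LACP; "desirable" would be PAgP on kit that supports it
interface Port-channel1
 switchport mode access
 switchport access vlan 10

ProCurve side (8206zl), assuming the uplinks land on ports A1 and A2:
trunk A1-A2 trk1 lacp
vlan 10 untagged trk1

On the ProCurve, trk1 then behaves as a single logical port for VLAN membership, which is roughly what the Port-channel interface is on the Cisco side.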

Am I right in thinking that the 8206 is a single point of failure, however? Seems odd having your iSCSI switching plugged straight into your router, as opposed to some core layer (or distribution layer) switching (which would also be stacked, or at least redundant).

Back to your first question though, you don't technically need to do anything clever with the two management links, assuming you've not disabled spanning tree.
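
If you do just plug both links in and let spanning tree block one, it's worth a quick sanity check from each CLI; "show spanning-tree" exists on both platforms, although the output layout differs:

On the SG500 stack:
show spanning-tree
(one of the two 1Gb uplinks should be listed as Alternate/Blocking)

On the 8206zl:
show spanning-tree
(check which switch has ended up as the root bridge; if you want the 8206zl to stay root you can drop its bridge priority, e.g. "spanning-tree priority 1" on the ProCurve, though double-check the exact syntax for your firmware)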
 
Thanks, I set them up as an LACP trunk earlier, worked a treat.

The 8206 is indeed a single point of failure, but all the VM hosts and iSCSI storage are also connected to those same Cisco switches. The iSCSI network has redundancy from host NIC through to storage array, and that's my main point of concern really. Unfortunately it's not economically viable (aka we can't afford it!) to have redundancy to the 8206zl, though it would be nice.

The 8206 is the only single component that could fail and take down the whole network, as far as I'm aware. It has redundancy in terms of PSUs, and I do plan on putting a second management module in next year (no money to do it this year) to give it as much redundancy for minimal cost as I can. Management are aware that if this failed there would be ~24hrs of downtime, and they're willing to accept the risk. It's a school after all, so there's no major financial damage like a business would suffer, and I take regular config backups to be safe :)
(...I'm not willing to accept the risk in terms of damage it can do if the iSCSI goes down with the VMs still running though as you can imagine :p)
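
For anyone curious about the backups, a one-liner in each CLI will push the running config off to a TFTP server. The address and filenames below are placeholders, and the SG500's small-business CLI may word it slightly differently from classic IOS:

ProCurve 8206zl:
copy running-config tftp 192.168.1.50 8206zl-backup.cfg

Cisco (IOS-style):
copy running-config tftp://192.168.1.50/sg500-stack.cfg

Running that after every change keeps a known-good copy to restore from if a switch ever has to be swapped.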
 
If it's just management, then I would leave one cable. I don't even run a management cable to the storage switches where I work, as I consider them isolated. It would probably make management easier, but I rarely make changes to the storage switches; at the moment I have to go down and run a cable from a laptop to make changes. The way it's configured means I never need to touch it unless we get a new NetApp shelf or add another ESXi host. The switches are at the back of the rack and the main stacks are the other side, near the patch panels.

Unless your storage requires access to the main stack, it wouldn't be management-only any more; in that case it would be better to run a fibre between the switch stacks, including the storage switch stack in that. But I am no network expert, so wait for more input. Unless your storage is serving data directly to the main stacks, I would leave the storage switches isolated.
 
If it's just management, then I would leave one cable. I don't even run a management cable to the storage switches where I work, as I consider them isolated.

We do, but via console cable from the two clustered management hypervisors (the management hypervisors have local storage only; they run the "out of System Center" Domain Controller and the System Center VMs). At least it means I can remote in to the hypervisors and access the switches should I need to (I never have since commissioning).

No Ethernet between iSCSI and the rest of the network here, and certainly no need for redirected access through the core network (although, having said that, assuming one of the hosts maintains iSCSI access, Hyper-V/Windows Failover Clustering can redirect over the core, through the "up" hypervisor, and access storage that way). It would be highly unlikely for all 24 NICs (4 hosts, 6 NICs each), 24 cables, 3 PSUs (3 switches) and the Cisco RPS to drop at the same time :p
 
I've never thought of removing them, to be honest... although I've only just installed these new ones, so I'm still fine-tuning things... and learning some Cisco terminology as I go :p

Once everything is set up by the end of this month, I should be able to *fingers crossed* run everything for the next 5 years with minimal need to change anything, so maybe I'll get rid of the management side of things then and have them run solo.
 