iSCSI & Network Traffic on Cisco 3750s?

Associate · Joined 30 May 2008 · 74 posts · London, UK
Hi Guys,

What are the pros and cons of running iSCSI & Network Traffic on the same switches?

We are planning a new build with 2 x 2 switch 3750 stacks in each site (2 sites). One stack is for network traffic and the other for iSCSI.

We would like to find out whether it would be possible to run both networks off a single stack at each site; this would drop the total number of switches we need from 8 to 4, quite a cost saving!

We won't have a lot of devices; at first it will be 2 ESX servers and an EqualLogic SAN in each site, so the switches will be under-utilised.

So if we set the MTU of the switches to 9000 and run both iSCSI and network traffic on separate VLANs, would this have a negative impact on performance?
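For reference, here is a minimal sketch of what that would look like on a 3750 (the VLAN numbers and port are just examples; note that `system mtu jumbo` on this platform is global and only takes effect after a reload):

```
! Jumbo frames are set globally on the 3750 and require a reload
system mtu jumbo 9000

! Separate VLANs for LAN and iSCSI traffic
vlan 10
 name DATA
vlan 20
 name ISCSI

! Example access port for an ESX iSCSI uplink
interface GigabitEthernet1/0/1
 switchport mode access
 switchport access vlan 20
 spanning-tree portfast
```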

Thanks in advance!
 
I hope you meant 3750-X switches.
Make sure you are getting the correct level of IOS for the functionality you need.
The top-level IOS feature sets are expensive compared to Extreme's and Juniper's similar offerings.

Ideally you want redundant network paths to your network storage.
This entails having two connections across two logically separate networks and two physically separate switches.
So one host will connect to the storage network twice (into each switch).
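In picture form, one way to lay that out (two standalone iSCSI switches, each host and the array dual-homed; port names are illustrative):

```
ESX host --- NIC1 ---> iSCSI switch A ---> SAN controller port 0
ESX host --- NIC2 ---> iSCSI switch B ---> SAN controller port 1
```

Either switch (or either NIC) can then fail without losing the storage path.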

I can't honestly say I've ever implemented what you're suggesting as there tends to be a sufficient budget for extra switching capacity as part of the project.

Aside from functionality and scalability (and potentially interoperability with older equipment that doesn't like jumbo frames), is the price of the switches really so high that you would compromise on speed, risk storage corruption, and weaken security?
 
We have both iSCSI and core traffic going across a single stack of three 3750s at the moment, and the network overheads are ridiculous. It also results in quite erratic ping times.

We're in the process of moving this to two stacks of 3750-X, one stack for core, one for iSCSI.
 
Thanks for the input guys.

We are looking at using 3750E-24TD switches with the standard IP Base image; we won't be using the switches for any advanced L3.

I am trying to convince the client to run two separate stacks, but ultimately it is up to them. (We originally recommended FC, but they want to stick with iSCSI)

Out of curiosity, would you rather run each iSCSI switch as a standalone and physically connect each server and the storage to each switch, or would you prefer to run the iSCSI switches as a stack and utilise LACP between the SAN and servers (each port in the same port-channel, but on a separate physical switch)?

Thanks again.
 
We have some clients that use a unified core of 3750s for their iSCSI on another VLAN. This works fine at small volumes, but I wouldn't recommend it. Internally (i.e. at my company) any iSCSI traffic goes over a separate fabric.

- GP
 
volkan said:
I am trying to convince the client to run two separate stacks, but ultimately it is up to them. (We originally recommended FC, but they want to stick with iSCSI)
It's quite simple. It won't work properly doing it the other way, and they'll hate it as a result. There's not much to discuss. :)

volkan said:
Out of curiosity would you rather run each iSCSI switch as a standalone and physically connect each server and storage to each switch, or would you prefer to run the iSCSI switches as a stack and utilise LACP between the SAN and Servers (each port in same port-channel, but separate physical switch).

Thanks again.
LACP does not improve a single data flow; MPIO does. With LACP you'll be maxing out one of your gig connections while the other does nothing.
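To expand on that: with MPIO each gig NIC gets its own vmkernel port and its own path to the array, and a round-robin path policy spreads I/O across both. Roughly, on ESX 4.x software iSCSI it looks like this (the vmk, vmhba, and naa names are placeholders for your own):

```
# Bind each iSCSI vmkernel port to the software iSCSI initiator
# (each vmkernel port has a single active uplink, one per switch)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Use round-robin so I/O is spread across both paths
esxcli nmp device setpolicy -d naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```

Without the port binding, ESX only establishes a single session to the target and the second NIC sits idle, which is exactly the behaviour described above.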
 