Storage Switch Suggestion

OK, I'll give more information.

Currently, one client has 8x WS-C3750-24PS-S in two stacks. These are EOL, with a replacement part number of WS-C3750V2-24PS-S. The current switches have an 18-gigabit switching fabric across each stack, while the new one has a 32-gigabit switching fabric across the stack as well as full gigabit ports. Prices range from £2300-3000 per 24-port switch. I understand that we can't replace all the switches at this point, but we have to make a purchase with the current switches and future upgrades in mind. The future switches will have to have PoE for the VoIP phones, so we will need to find out how many watts are required for all the phones when we do replace the switching.
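As a rough way of sizing that PoE budget, I'm assuming we can just read the current draw off the existing 3750-24PS stack, since the phones are already powered from it. Something like this (interface numbering is only an example):

Code:
! Sketch only - assumes the phones are on the existing WS-C3750-24PS stack;
! interface numbers are examples.
! Total and per-port PoE draw for the switch/stack:
show power inline
! Detail for a single phone port:
show power inline fastethernet1/0/1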

If we just purchase some Cisco gigabit switches, we will have to consider how they are going to connect to the current stack. Ideally we would replace the entire stack and plug the NetApp and ESXi hosts into two switches on the stack (with nothing else in them) to take advantage of the stacked switch fabric bandwidth. But I think for storage switches we could just use 2x Cisco Catalyst 2960S-24TS-L (£1100 inc VAT), create a new stack and connect it to the current setup. I am not sure how well that would perform, though, or whether, when we finally get around to upgrading all the switches, we would be stuck with 2960s that don't have PoE and don't stack with a 3750 due to the different stacking technology.

It might be better, then, to just buy two WS-C3750V2-24PS-S for the new NetApp 2240 and the HP Gen8 ESXi hosts using iSCSI? Then, when we do get a budget for upgrading all the switches, the ESXi switches will stack with the rest of them.

Any comments?
 
I've not tried it myself, but someone here has tried running iSCSI over 3750s and hit switchport buffer issues. Are you running iSCSI or using NFS? I'd not run iSCSI at all myself tbh but I can see your logic in what you're trying to do with the switches.

I could type a load of things down here but I'll start by asking what your budget is?
 
We currently use a combination of iSCSI and NFS on our NetApp 2050 (I think it's a 2050), and that goes into the current 3750 stacks. Over the years, previous IT admins have plugged other network cables into what were once dedicated storage switches in the stack, thinking they were just empty ports. The budget, I would say, is probably £1000-2000 per storage switch. But if we have to replace all the switches, then we will have to go with 3750s so that we don't run into upgrade problems in the future; the budget would then increase to £3000.

Basically my boss has asked me to spec out some storage switches for a NetApp 2240 and 3x HP Gen8 hosts. He didn't want to go with FC because he wants to keep the cost down.

The reason we use a combination is that the ESX hosts are still on v3.5 with VMFS3, and Exchange is on NFS because back in 2008 it was not recommended to run Exchange on VMFS3.
 
I've not tried it myself, but someone here has tried running iSCSI over 3750s and hit switchport buffer issues. Are you running iSCSI or using NFS? I'd not run iSCSI at all myself tbh but I can see your logic in what you're trying to do with the switches.

I could type a load of things down here but I'll start by asking what your budget is?

That's me.

We have noticed this on pretty much all the 3750 switches we have used now. It can be resolved with about 5/6 lines of config which effectively reallocate the buffers. However, the simpler way to fix it is either to buy a 3850 switch, as they have a larger buffer so iSCSI traffic is handled a lot better, or to look at Force10 switches, which I would highly recommend reviewing as they have much better buffers and are designed for storage use.
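For reference, the sort of thing I mean is below. This is only an illustrative sketch of the usual 3750 egress buffer/threshold tuning, not our exact lines, and the values should be checked against your IOS release and your storage vendor's switch guidance:

Code:
! Illustrative sketch only - not our exact config; verify the values
! against your IOS release and your storage vendor's guidance.
mls qos
! Uneven buffer split for queue-set 1 (10/10/60/20% across queues 1-4),
! so the queue your iSCSI traffic lands in gets the bulk of the buffering
! (which queue that is depends on your CoS/DSCP-to-queue maps).
mls qos queue-set output 1 buffers 10 10 60 20
! Let queue 3 borrow heavily from the common pool instead of tail-dropping:
mls qos queue-set output 1 threshold 3 3200 3200 100 3200
!
interface GigabitEthernet1/0/1
 ! Honour pause frames from the filer/hosts on the iSCSI-facing ports:
 flowcontrol receive desired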

Also, the NetApp 2050 is, I'm pretty sure, EOL soon (or already), so support costs on that will go through the roof!

Also, if you are speccing out a new filer plus nodes, it makes sense to go the FC route, as it's a complete solution and it will save a lot of hassle later on. It's also cheaper to buy the NetApp fully loaded new than to add licences later on, as they can make your eyes water at the price.
 
[RXP]Andy said:
Also, the NetApp 2050 is, I'm pretty sure, EOL soon (or already), so support costs on that will go through the roof!

Also, if you are speccing out a new filer plus nodes, it makes sense to go the FC route, as it's a complete solution and it will save a lot of hassle later on. It's also cheaper to buy the NetApp fully loaded new than to add licences later on, as they can make your eyes water at the price.

That is correct; NetApp quoted £14k, I believe, and it's up for renewal soon, so we are going with a NetApp 2240 instead and doing a buy-back. Considering the switch situation and what you have just said about FC, I think going FC will be the best route even though it might cost more.
 
That is correct; NetApp quoted £14k, I believe, and it's up for renewal soon, so we are going with a NetApp 2240 instead and doing a buy-back. Considering the switch situation and what you have just said about FC, I think going FC will be the best route even though it might cost more.

Does that come with PSE?

TBH, when you have worked out the costs of going for an FC fabric layer vs a correctly set up iSCSI network, there isn't a huge amount in it. By a correct iSCSI setup I mean using the right switches and HBAs, NOT software initiators etc.

Also, one other thing people seem to forget about is the engineering cost to the company. Yes, the iSCSI solution may seem cheap on paper from the outset. However, when something isn't working correctly and needs to be investigated, the cost saving on iSCSI quickly gets lost in engineering time. I speak from painful experience on this subject.

FC is a protocol designed for storage from the ground up; iSCSI piggybacks on TCP/IP, which means overhead and, depending on the equipment used, a performance impact.

What I will say is that a well-designed and thought-out storage network will always be reliable!
 
Not sure what to say here, as there are lots of comments being thrown around which aren't strictly accurate.
Let me just throw out there that I am a storage consultant who has worked for several storage vendors for quite a while now.
iSCSI/FC/CIFS/NFS/IB are all just protocols, and ultimately it comes down to what you want to achieve in your environment. iSCSI is fine for 99.9% of environments; its only "weakness" is that it's not as secure as FC. Statements like "iSCSI performs worse than FC", "is more complicated", "requires very high-end switches" etc. simply aren't true. You don't need an iSCSI HBA and software initiators are fine; in fact software initiators work better than HBAs (always use Intel NICs, as their drivers are mature).
To answer your original question, I have found that 3750s are very good iSCSI switches. They may drop a couple of packets, but only when you are doing very large data transfers, so if that's what you are doing then look at higher-end switches, though of course that means more cost. Based on your NetApp model, these aren't high end, so I guess it's not a high-performance environment. Storage networks do thousands of small transfers, and that's what makes storage fast, not large sequential streams of data, unless you are a video hosting company, in which case you want to look at Dot Hill.
If you want screaming-fast storage performance, you should be looking at the new generation of hybrid storage vendors instead of the usual NetApp/EMC configs. They are faster, cheaper and you get better support (with the right vendor). Check out the Gartner Magic Quadrant to see who they are.
Conclusion: don't skimp on iSCSI switches, but don't be fooled into thinking you need to go FC or buy very expensive switches. Remember that with FC you need dual HBAs per host, HA/failover software, FC switches and FC skills, which all cost more than iSCSI solutions.

Hope this helps
 
Conclusion: don't skimp on iSCSI switches, but don't be fooled into thinking you need to go FC or buy very expensive switches. Remember that with FC you need dual HBAs per host, HA/failover software, FC switches and FC skills, which all cost more than iSCSI solutions.

Hope this helps

You don't need dual HBAs with FC any more than you need dual NICs with iSCSI. What a totally bizarre thing to suggest. The underlying hardware architecture is the same regardless of whether the switches are Ethernet or something else. FC skills are different to iSCSI skills, but not amazingly so: initiator groups vs zoning and so on.

As for iSCSI's only weakness being security? How about the fact that classical Ethernet is inherently lossy (as is TCP/IP) while SCSI depends on an inherently lossless transport mechanism?

There is at least one poster in this thread describing real-world experience with iSCSI over Cisco 3750s and having issues. I can believe that, because it is an edge switch and not a storage switch!

Switch decisions aside, I would suggest that you try to keep the client traffic on different switches from the storage traffic. If nothing else, it will help stop outages when people come to patch/repatch client PCs and get it wrong. If they know the server stuff is all on a different switch, they shouldn't touch it.
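Even before you get to physically separate switches, dedicating and clearly describing the storage ports helps stop the repatching problem mentioned earlier. A rough sketch only; the VLAN number, names and port range are made up:

Code:
! Sketch only - VLAN number, names and interface range are made up.
vlan 100
 name STORAGE-ISCSI
!
interface range GigabitEthernet1/0/1 - 4
 description NetApp / ESXi storage - do not repatch
 switchport mode access
 switchport access vlan 100
 spanning-tree portfast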

Obviously this is quite a small-scale thing, and most of us are posting up experiences and best practice from much larger-scale operations, but the basic rules still apply if you don't want poor performance and lots of troubleshooting.
 
It was not my intent to mislead re dual HBAs/NICs, but I assumed the OP would be building a redundant design, and that always calls for dual NICs or HBAs. If only a single path is required then absolutely, dual cards are not needed. I did mention it, though, to illustrate the cost difference between FC HBAs and NICs for iSCSI.

Not sure I agree on the point re lossy/lossless connectivity. iSCSI as a standard sorts this out through the protocol, so that should packet loss occur, the initiator or target retransmits the packet. In a terrible network where packet loss is happening all the time this would cause a huge impact, but I've yet to come across one. Usually people separate storage from standard LAN traffic and keep the SAN isolated. To further prove the point that this isn't an issue, I'm sure you will be aware that all the major FC vendors are pushing CN (converged networking) products, and guess what they all run over? Ethernet. FCoE etc.
Ethernet is getting faster and faster all the time, and it won't be long before 40 Gigabit or possibly 100 Gigabit Ethernet is commonplace. IB could be an option, but it's horrendously expensive and the cables are insane, so I think Ethernet will win!
From a simplicity point of view, LUN masking/zoning vs initiator groups isn't that complex, but it's when you want to expand things that it matters: people are inherently more familiar with Ethernet, so when it comes to things like VLANs, ISLs, stacking etc. the knowledge is usually there internally. That's not so much the case with FC unless it's a medium-sized+ organisation.

I really think the OP would be wasting money going FC in their environment when they haven't experienced any issues with iSCSI. It's also not my intent to have an FC vs iSCSI argument at all; each has its role to play in storage networking.

Please feel free to point me at the forum post where someone was having issues with iSCSI, as I'd be happy to help troubleshoot what's going on.

Cheers
 
Bit disingenuous to suggest that FCoE runs over classical Ethernet, which of course it does not. It uses DCBX to add in pause frames to ensure end-to-end delivery of the FC frames (as required by the FC specification). Lossless, unlike Ethernet. That is absolutely not the same as using TCP for retransmits: packet loss is present, by design, in every single network that allows it.

The fact that Ethernet is getting faster and faster all the time is actually an argument against iSCSI: the faster the link gets, the bigger the TCP window size, and thus the more data has to be retransmitted every time an ACK is missed. Decreasing the window size hammers the transfer rates, giving you wasted investment.
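To put rough numbers on the window-size point (assuming, purely for illustration, a 1 ms round trip on the storage network), the amount of in-flight data scales with the link speed:

$$ \text{in-flight data} \approx \text{bandwidth}\times\text{RTT}:\qquad \frac{1\,\text{Gb/s}\times 1\,\text{ms}}{8\ \text{bits/byte}} \approx 125\,\text{KB},\qquad \frac{10\,\text{Gb/s}\times 1\,\text{ms}}{8\ \text{bits/byte}} \approx 1.25\,\text{MB} $$

So a single missed ACK at 10 Gb/s puts roughly ten times as much in-flight data at risk of being resent as it does at 1 Gb/s.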

I run a few FCoE converged networks globally and there's not a thing in this world right now that would get me to use iSCSI in its place. iSCSI won't even displace FC4 let alone FC8.

As for the OP, I agree that an investment in FC is probably not going to happen (but it is what he should do). Failing that, he should consider the posts in here indicating that the switches he is looking at are probably not suitable for the job.
 
OK, I'll give more information.

Currently, one client has 8x WS-C3750-24PS-S in two stacks. These are EOL, with a replacement part number of WS-C3750V2-24PS-S. The current switches have an 18-gigabit switching fabric across each stack, while the new one has a 32-gigabit switching fabric across the stack as well as full gigabit ports. Prices range from £2300-3000 per 24-port switch. I understand that we can't replace all the switches at this point, but we have to make a purchase with the current switches and future upgrades in mind. The future switches will have to have PoE for the VoIP phones, so we will need to find out how many watts are required for all the phones when we do replace the switching.

If we just purchase some Cisco gigabit switches, we will have to consider how they are going to connect to the current stack. Ideally we would replace the entire stack and plug the NetApp and ESXi hosts into two switches on the stack (with nothing else in them) to take advantage of the stacked switch fabric bandwidth. But I think for storage switches we could just use 2x Cisco Catalyst 2960S-24TS-L (£1100 inc VAT), create a new stack and connect it to the current setup. I am not sure how well that would perform, though, or whether, when we finally get around to upgrading all the switches, we would be stuck with 2960s that don't have PoE and don't stack with a 3750 due to the different stacking technology.

It might be better, then, to just buy two WS-C3750V2-24PS-S for the new NetApp 2240 and the HP Gen8 ESXi hosts using iSCSI? Then, when we do get a budget for upgrading all the switches, the ESXi switches will stack with the rest of them.

Any comments?

These prices you have been quoted are very high.
 
Statements like "iSCSI performs worse than FC", "is more complicated", "requires very high-end switches" etc. simply aren't true. You don't need an iSCSI HBA and software initiators are fine; in fact software initiators work better than HBAs (always use Intel NICs, as their drivers are mature).

When performing testing, iSCSI always seems, in my experience, to perform slower than FC. However, what I define as acceptable may be different from what someone else, or the client, is trying to achieve. Software initiators will always perform slower due to the way in which they work; again, that's from my experience.

To answer your original question, I have found that 3750s are very good iSCSI switches. They may drop a couple of packets, but only when you are doing very large data transfers, so if that's what you are doing then look at higher-end switches, though of course that means more cost.

That's not what I have experienced using 3750X series switches: I have seen packet drops on the SAN when the I/O is as low as 400 IOPS. If you look at the switch CLI, you can see the packets being dropped. To correct this problem you enable QoS and reallocate the buffers; this fix applies to the 3750 series switches only.
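If anyone wants to check their own switches, this is roughly where those drops show up; the interface name is just an example:

Code:
! Example only - substitute your own interface names.
! Total output drops on a port:
show interfaces GigabitEthernet1/0/1 | include output drops
! Per-queue enqueue/drop counters (once mls qos is enabled):
show mls qos interface GigabitEthernet1/0/1 statistics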
 
Hi Andy,
Thanks for the feedback. I think the kind of performance you achieve depends heavily on the iSCSI storage platform, but with some of the newer platforms (hybrid arrays) customers see a large difference in the performance of their core apps and DBs.
I have many customers that have dropped FC and gone iSCSI because of this. At the end of the day, though, your true performance is determined by how many spindles you use, unless it's a completely different storage architecture being used.

Re dropped packets, I have only seen this with large block sizes (128K plus), as that seems to be too much for the port buffers to handle, which is understandable. Smaller block sizes (4K-64K) haven't dropped packets in testing on 3750 switches with the arrays I have been testing. I have seen 15,000+ IOPS at smaller block sizes for both random and sequential read/write, and that's down to the architecture of these new systems. When testing large block sizes we can saturate multiple 1-gig NICs, but we do still see dropped packets and the switches getting stressed.
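Just to put rough numbers on why the big blocks hurt (ignoring protocol overhead, and using round figures rather than our exact test results):

$$ 1000\ \text{IOPS}\times 128\,\text{KB} = 128\,\text{MB/s} \approx 1\,\text{Gb/s},\qquad 15000\ \text{IOPS}\times 4\,\text{KB} = 60\,\text{MB/s} \approx 0.5\,\text{Gb/s} $$

So even modest IOPS at 128K fills a single gigabit port (about 120 MB/s usable), and any burst beyond that has nowhere to sit except the 3750's small per-port buffers, whereas the small blocks leave plenty of headroom.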

What block size have you been using on your tests?
Cheers
 
What block size have you been using on your tests?

4K, 64K and 128K+ in terms of block sizes. However, in terms of network saturation, when I was testing I could quite happily saturate 4x 1Gb links. As I recall, this unit peaked at around 25,000 IOPS.

I normally see this behaviour from VMware/SQL servers.

However, since the switch configuration was changed and the following was added:

Code:
Cisco Stuff ;)

It seems to have cleared up most of the dropped packets.
 