SAN question

This is where SSDs come into the scene. A lot of the newer SSDs, although their capacity isn't as great, are doing massively more IOPS per disk than mechanical drives. I don't think they're proven tech yet, but where you need IOPS over throughput and capacity they might take off!
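To put rough numbers on "massively more IOPS per disk", here's a quick sketch; every figure below is a ballpark assumption for illustration, not a vendor spec:

```python
# Rough sketch: how many devices you'd need to hit a random-IOPS target.
# All per-device figures are ballpark assumptions, not vendor specs.

TARGET_IOPS = 20_000  # hypothetical random-I/O workload target

iops_per_device = {
    "15k RPM disk": 180,     # common rule-of-thumb figure
    "10k RPM disk": 130,
    "early SLC SSD": 5_000,  # conservative random-read estimate
}

for device, iops in iops_per_device.items():
    devices_needed = -(-TARGET_IOPS // iops)  # ceiling division
    print(f"{device:>13}: ~{iops:>5} IOPS each -> {devices_needed:>4} devices")
```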

Oh, SSDs are great for SANs and would have solved that problem very nicely. They're still hugely expensive though, and as far as I know some of the major SAN vendors still aren't offering them (NetApp weren't last time I checked, but that was a while back).
 
I strongly disagree with this; you're thinking about it in too limited a way. Bear in mind that Ethernet will basically drop packets whenever it feels the need, and that's a problem for storage traffic. FC will only ever drop traffic as a last resort, and if you're running storage beyond a few local racks that's an issue (I appreciate it's an enterprise concern, but it's still relevant).

Right back at you in that you're viewing Ethernet as the transport method for iSCSI. :p
Ethernet is only half of it; you have a whole IP layer on top, as well as some application-layer error checking and performance enhancements.

Dropped packets shouldn't happen if the network is set up and planned right, with proper QoS and decent hardware carrying plenty of buffer memory.
In theory a frame should only be dropped if it's corrupted in some way, and FC would do the same.
But the performance difference between FC and iSCSI in a large SAN is often barely 5%. By far the biggest factor is the size, type and striping of the disks; getting that right can lend boosts of 30-40% and more. So, in perspective, all you need to be thinking about is whether the extra cost is worth that 5%, or whether the money would be better spent on something else that'd have a greater effect.
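To put a rough figure on the striping point, here's a quick sketch using the usual RAID write-penalty rules of thumb; the spindle count, per-disk IOPS and read/write mix are made-up examples:

```python
# Host-visible IOPS from the same spindles under different RAID layouts.
# Write penalties are the standard rules of thumb (RAID 10 = 2 backend
# writes per host write, RAID 5 = 4); all other figures are illustrative.

spindles = 24
iops_per_spindle = 180             # assumed 15k RPM figure
read_ratio, write_ratio = 0.7, 0.3

raw_iops = spindles * iops_per_spindle

def host_iops(write_penalty: int) -> float:
    """Host-visible IOPS once backend write amplification is included."""
    return raw_iops / (read_ratio + write_ratio * write_penalty)

print(f"RAID 10: ~{host_iops(2):.0f} host IOPS")
print(f"RAID 5:  ~{host_iops(4):.0f} host IOPS")
# With these assumptions, RAID 10 delivers roughly 45% more host IOPS
# than RAID 5 from exactly the same disks.
```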
 
Well then we disagree, but I can say we've seen substantial improvements moving to Fibre Channel over iSCSI, and it's not our network at fault; iSCSI and NAS traffic is prioritised as much as possible (which I don't like, to be honest, since I need video and voice to be top priority, followed by app traffic; subverting our QoS policy to prioritise storage traffic isn't good design practice).

I'm sure it would work better if I gave the iSCSI traffic a dedicated wavelength with 10Gig Ethernet, but if I'm giving it a dedicated wavelength then I may as well use FC, which just works without the fuss. And when you're giving storage a dedicated wavelength, the small up-front capex of FC over iSCSI disappears in the grand scheme of things.

iSCSI will be better when Datacenter Ethernet turns up (and yet again I'll say: DCE wouldn't be in the pipeline, and enterprises wouldn't use FC in preference to iSCSI, if iSCSI could do the job over converged networks - we don't enjoy getting massive capex signed off!).
 
iSCSI for SMEs yes, but for large enterprise no chance until 10Gbit Ethernet is EVERYWHERE.

It just isn't gonna happen for big companies. I've heard a lot about NetApp filers running NFS for VMware but never utilised them personally.

Being a SAN admin I think FC is still here to stay for a while...

Skidilliplop: Have you deployed iSCSI for a business critical Production FTSE100 / Banking / Finance company infrastructure?
 

Yes, pretty much all our systems use an iSCSI SAN as their back-end storage, including the financial and critical ones. We generally use a dedicated network for it, i.e. for smaller sites a dedicated pair of switches in the storage rack; for larger sites, where the capex on dedicated infrastructure is bigger, we use existing network hardware as well, but it's always contained within its own dedicated VLAN, and we never trunk that VLAN over a shared link - we always use a dedicated uplink for it, or routing policies to achieve the same effect.

If you ever mix production data with SAN data you will get issues. I don't, because the traffic patterns differ too much to manage both effectively. As I said before, setting the network up to suit the implementation is important: if you have real-time data in your production network, keep it separate from your SAN traffic.
When I referred to QoS earlier I meant it within the SAN itself, as a separate entity from the production network - in the sense of prioritising financial and high-transaction-rate systems over stuff like web server back ends and file server storage requests, to ensure the right systems get the right storage performance.
QoS falls down when you have two equally real-time traffic types, so you shouldn't mix them over the same shared links.
 
The main problem at that company was its love for BlackBerrys, and BES consumes IOPS: a heavy Outlook user will need about 1 IOPS, while a BlackBerry user could need 3 IOPS, so suddenly your solution needs to do the equivalent of supporting triple the number of users.
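To show how quickly that compounds into a sizing problem, a small sketch; the per-user IOPS figures are the rough ones quoted above, while the user count and BES uptake are invented purely for illustration:

```python
# Mailbox IOPS sizing once a chunk of users pick up BlackBerrys.
# Per-user figures are the rough rules of thumb quoted above; the user
# count and BES uptake are invented purely for illustration.

users = 5_000
bes_fraction = 0.4               # hypothetical share of users on BES

outlook_iops_per_user = 1
bes_iops_per_user = 3            # roughly triple the per-user load

baseline = users * outlook_iops_per_user
with_bes = (users * (1 - bes_fraction) * outlook_iops_per_user
            + users * bes_fraction * bes_iops_per_user)

print(f"Outlook only: ~{baseline:,.0f} IOPS")
print(f"40% on BES:   ~{with_bes:,.0f} IOPS ({with_bes / baseline:.1f}x)")
```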

Which version of Exchange are you using?

I've heard good things about 2007 being less intensive...
 

It was 2003 we were using. I don't know if 2007 will be better; that depends on RIM rewriting the BES software I guess, but I would hope it will have (or has) got better.
 
I've only had experience with one FC installation, but plenty with iSCSI. I haven't used FCoE.

The two most important factors for using FC are:
No out-of-order packets. This is vitally important for large clusters needing vanishingly low-latency SQL transactions, and for any kind of large dataset manipulation. No way around it, FC is superior for this.
In the past the I/O of mechanical disks was an issue, but if you're going for a proper FC installation, you can rightly look at SSDs these days.

The efficiency of the protocol is also much higher. A properly configured 2Gb/s FC HBA can easily push through 380MB/s sustained (full duplex), whereas I've handled iSCSI SANs on gigabit networks that struggled to push out 230MB/s, simply because of poor TCP/IP offload.
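A rough sanity check on those numbers; the encoding and header overheads are standard figures, but the rest is a ballpark sketch, not a benchmark:

```python
# Back-of-the-envelope line rates: 2Gb FC vs iSCSI over gigabit Ethernet.
# Encoding and header overheads are standard figures; the 7% iSCSI/TCP/IP
# overhead is a ballpark assumption for a 1500-byte MTU.

# 2Gb FC: 2.125 Gbaud line rate with 8b/10b encoding, so ~212 MB/s of
# payload per direction before frame overhead (usually quoted ~200 MB/s).
fc_per_dir = 2.125e9 * (8 / 10) / 8 / 1e6
print(f"2Gb FC full duplex:     ~{2 * fc_per_dir:.0f} MB/s theoretical")

# Gigabit Ethernet iSCSI: 125 MB/s raw per direction, minus Ethernet,
# IP, TCP and iSCSI header overhead (assumed ~7% without jumbo frames).
ge_per_dir = 1e9 / 8 / 1e6
overhead = 0.07
print(f"GigE iSCSI full duplex: ~{2 * ge_per_dir * (1 - overhead):.0f} MB/s theoretical")

# Poor TCP offload and interrupt load can pull the real iSCSI figure
# well below that ceiling, which matches the ~230 MB/s quoted above.
```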

The reliability of FC is also designed to be 100%; it's secure top to bottom. With TCP/IP you're always fighting a battle with random bits of incorrect data, which can mostly be repaired, but the rate of undetected corruption is still significantly higher than with FC.
That's not good if you're dealing with high-value transactions: a 0 turning into a 9 is a disaster.
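One concrete way to see the gap: the TCP/IP checksum is a 16-bit ones'-complement sum, so whole classes of corruption slip past it, whereas the CRC-32 style checks used by FC frames (and by the optional iSCSI digests) catch them. A small sketch with an invented payload:

```python
# The 16-bit Internet checksum (as used by TCP/UDP/IP) cannot detect
# reordered 16-bit words, while a CRC-32 can. FC frames and the optional
# iSCSI header/data digests use CRC-32 variants. Payload is invented.
import zlib

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit big-endian words."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += int.from_bytes(data[i:i + 2], "big")
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

original = b"PAY 00000009 TO ACCT 1234"
corrupted = original[2:4] + original[0:2] + original[4:]  # swap two words

print(internet_checksum(original) == internet_checksum(corrupted))  # True  - corruption missed
print(zlib.crc32(original) == zlib.crc32(corrupted))                # False - corruption caught
```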

When you're staring down the cost of 10Gbit Ethernet HBAs, Fibre Channel HBAs are around the same price, and the same goes for the per-port switch costs.

People seek out FCoE for convergence between their Ethernet and FC solutions, but both definitely have a role to play.
To say iSCSI is better than FC is naive - if someone with more experience with FC comes in and tells me otherwise, I'll happily shut up :)
 

iSCSI won't necessarily take hold in the enterprise with 10GbE; it will all be FCoE, as it provides a lossless converged fabric as well as some funky SR-IOV options.

You are quite right that FC is going nowhere though.
 
It does in fact happen for big companies. I personally worked with NetApp, Cisco and our clients fine-tuning the NFS stack in ONTAP and the VMware NFS implementation; in fact the NetApp storage Best Practices has a lot of my work in it. Admittedly there is a big difference between iSCSI and NFS, but ultimately it's traversing the same medium and switchgear. I just wouldn't be able to justify, to myself or to Finance, shelling out for expensive HBAs and fibre switches when the same reliability and performance can be had for a fraction of the cost. There just isn't a place for fibre now, especially in new installs; FCoE will eventually take over once it matures. Try to duplicate or mirror fibre-only traffic over a large WAN or between two datacentres / SANs and you'd have a heart attack from seeing the costs involved.

As much as I'd love to say you're right - and the costs are, as you say, very high - I've recently started working at a new firm and they've just deployed 6x 4Gbit fibre links to our backup datacentre just for SAN traffic, not to mention multiple gig links for Ethernet, so I think those who want things done properly still do it the expensive way...

Sounds like fun working with VMware and NetApp - what sector was the client in?
 