Poor write performance on iSCSI

Just having my first play with iSCSI, using the MS 3.3 target software on Server 2008 R2, my Win 7 x64 workstation and the RAID0 array in my server (4x250GB). I'm only on a 1Gb link at the moment (using a basic 5 port Netgear switch), but here's my ATTO bench after updating the NIC drivers on my workstation (Marvell Yukon PCI-E onboard) and the server (Broadcom NetXtreme II onboard):

[ATTO bench over iSCSI from the workstation: Pjsgj.png]


The reads hit the 1Gb limit nicely but the writes are pretty poor. An ATTO bench on the server locally sees reads and writes topping out around 230MB/s, so there is clearly a bottleneck somewhere in the iSCSI setup.
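
(For reference, 1Gb/s works out to 125MB/s raw, and after Ethernet/TCP/iSCSI overheads roughly 110-115MB/s is about the realistic ceiling, so the reads really are sitting at the wire limit.)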

Has anyone got any suggestions? How much of this is going to be down to the Marvell adapter and/or the basic switch? Anything I can tweak on the iSCSI target? Jumbo frames maybe?
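
If I do end up trying jumbo frames, the way I'd verify they're actually in effect end-to-end is something like this (the address is just a placeholder for the server; 8972 = 9000 minus the 28 bytes of IP/ICMP header, and the ping only gets through unfragmented if both NICs and the switch all support it):

    ping -f -l 8972 192.168.1.10
    netsh interface ipv4 show subinterfaces

The netsh line shows the MTU Windows thinks each adapter is actually using.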

Thoughts welcome :)
 
Thanks for the suggestion - I tried disabling all the offloading settings one at a time on my workstation, which didn't make a difference. Then did the same on the server and still no change. So I don't think it's that.
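
For anyone else trying this: the per-adapter checksum/LSO options live under Device Manager > adapter > Properties > Advanced, and the global TCP offload settings can be checked and toggled from an elevated prompt, roughly like so:

    netsh int tcp show global
    netsh int tcp set global chimney=disabled
    netsh int tcp set global rss=disabled

(autotuninglevel is another setting that gets suggested for this sort of problem, e.g. netsh int tcp set global autotuninglevel=normal.)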

As part of a related query I posted elsewhere, I mounted the VHD on the server and benched against that:

[ATTO bench of the VHD mounted locally on the server: F56Pk.png]


The writes are even worse! Here's the bare metal for comparison:

[ATTO bench on the bare metal RAID0 array: bUocE.png]


So it might be something to do with the VHD image itself...

Also, RAID0 because I'm testing this idea out at the moment. I want to learn what potential bottlenecks are going to come up before I buy some Infiniband cards, faster disks and a PERC6 to run RAID10.
 
It's fixed size as far as I can tell (there aren't any options). It's allocated 500GB to the file on the bare metal partition.
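
As a sanity check of raw fixed-VHD performance, independent of the iSCSI target, a fixed VHD can be created and attached locally with diskpart (the path and size here are only examples; a fixed VHD is zero-filled at creation so a big one takes a while):

    diskpart
    create vdisk file="D:\iscsi\test.vhd" maximum=10240 type=fixed
    select vdisk file="D:\iscsi\test.vhd"
    attach vdisk

Then initialise and format it in Disk Management and run ATTO against the new volume.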

This thread suggests that I can't use a pass-through disk directly with the iSCSI target. I might try StarWind and see if that's any better.
 
I've just uninstalled the MS iSCSI Target software and installed the free version of StarWind and am already seeing a vast improvement:

[ATTO bench over iSCSI with the StarWind target: JaBFj.png]


Taken from my Win7 box with the built-in MS iSCSI Initiator. I think the write speed drop-off is going to be down to the cache settings (I went with the defaults) and/or the fact that the virtual disk is in 'thin provisioning mode', so it's growing rather than fixed size and each new write has to extend the file first. Still, it's much closer to the performance of the bare metal before the 1GigE limit kicks in.

Progress :D
 
The server's got a Dell SAS 6/iR controller at the moment (note: not a PERC), which only supports RAID0 and RAID1. From what I've read a lot of people flash them with LSI firmware to turn them into basic HBAs, which suggests to me that the RAID functionality in the Dell version is possibly offloaded to the CPU. I'm going to upgrade to a PERC 6/i soon.

Anyway, according to the MegaRAID console the RAID0 array has:

Stripe size: 64KB
Disk Cache Policy: Enable
Read Policy: No Read Ahead
Current Write Policy: Write Through
Default Write Policy: Write Through
IO Policy: Direct IO

I can't configure any of those options through the BIOS, and in the console all I can do is enable/disable the Disk Cache Policy.
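
For what it's worth, on a proper MegaRAID-based card (like the PERC 6/i) the read/write cache policies can also be flipped from the command line with MegaCli, something along these lines:

    MegaCli -LDGetProp Cache -LAll -aAll
    MegaCli -LDSetProp WB -LAll -aAll
    MegaCli -LDSetProp RA -LAll -aAll

(WB = write-back, RA = read-ahead.) The SAS 6/iR has no onboard cache at all, which is presumably why the console doesn't offer those options in the first place.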
 
Yeah, StarWind is using RAM for cache. The basic SAS cards have virtually no cache on them (something like 64KB I seem to remember) whereas the PERC cards start at 256MB. Going to upgrade and revisit this subject :)
 