Hi,
I'm testing a new Windows-based software RAID application on Server 2012 R2, and I'm trying to use it as a test datastore for my vSphere lab. Using StarWind vSAN and the iSCSI protocol from the server I can get a consistent 400 Mbps copy from the cluster to the server, but using the built-in NFS v3 server I'm getting about 12 Mbps (yes, that's right, 12), which equates to around 1.5 MB/s from the cluster to the server.
I tried this again on a non-RAID 5 array (straight to a SATA disk) and got the same speeds.
The storage server is an SM X8DTE-F with 8 GB RAM and a Xeon L5630; the vSphere cluster is a combined SM X8DTL and Dell R410. All ports are connected to a 24-port 1 GbE switch.
If I copy from my Windows 10 machine to the storage server over SMB, I saturate the HDDs thanks to SMB Multichannel... but for whatever reason I cannot get NFS performance out of this system.
Does anyone have any pointers for tuning, or know why it might be performing this badly?
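For anyone wanting to reproduce the setup, it boils down to mounting the NFS v3 export as a datastore on the ESXi hosts and then timing a copy. A rough sketch (the hostname, share path, and volume name below are placeholders, not my exact config, and iperf3 is assumed to be available on both ends for the raw-link check):

```shell
# Mount the Windows NFS v3 export as a datastore on an ESXi host
# (run from the ESXi shell; names are placeholders):
esxcli storage nfs add --host=storage01 --share=/vmstore --volume-name=nfs-test

# Confirm the datastore mounted:
esxcli storage nfs list

# Raw-network sanity check to rule out the 1 GbE link itself:
# start "iperf3 -s" on the storage server first, then from a host:
iperf3 -c storage01 -t 10
```

The iperf3 run should show close to line rate (~940 Mbps) if the network is healthy, which would point the finger at the NFS server side rather than the switch or NICs.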
Thanks,
Chris