Openfiler target and MS free iSCSI initiator connection issues

Hi,

I have set up Openfiler with a nice big iSCSI target and have been able to connect with my vSphere server.

I am, however, having issues connecting from a Windows server running the free Microsoft iSCSI initiator.

If I type the server address (192.168.2.201) in the discovery tab it is listed, but no targets appear in the targets tab. If I type the server address in the target quick connect box, I get an access denied message back.

I have tried creating a user named after the Microsoft initiator, with the CHAP secret matching on both the Openfiler server and the MS software, but no change. I have also tried removing the user on the Openfiler end, again with no change.

Any ideas?

Thanks
RB
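
For reference, the same discovery check can be run from a command prompt with the built-in iscsicli tool. This is only a rough cross-check; the portal address below assumes the Openfiler box is at 192.168.2.201:

  rem register the target portal, then list whatever targets it reports
  iscsicli QAddTargetPortal 192.168.2.201
  iscsicli ListTargets

If the portal registers but ListTargets comes back empty, the target side is not reporting any targets to this particular initiator, which matches the symptom described above.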
 
Is the target on Openfiler definitely configured to allow multiple simultaneous initiators? I don't believe iSCSI normally works that way. You may need to create a separate target for each initiator.
 
Good to know, but I only have one initiator trying to connect at the moment.

The Openfiler box has two Intel GT NICs which are bonded. I am not sure whether that could cause the Windows initiator an issue, but only one IP address is assigned to the bonded interface, so it should be fine.

I will walk through the vSphere & Openfiler guide again tonight just to make sure I have not missed anything, but I thought I would check in case anyone knows.

Cheers
RB
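
For what it's worth, on a RHEL-style distribution a simple bond with a single IP normally boils down to something like the following (Openfiler generates these files from its web GUI, so the device names, address and bonding mode here are only illustrative):

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=192.168.2.201
  NETMASK=255.255.255.0
  BONDING_OPTS="mode=balance-alb miimon=100"

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  BOOTPROTO=none
  ONBOOT=yes

As long as the initiator only ever sees the single bond0 address, the bonding itself should be transparent to it.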
 
They do not have a dedicated subnet at the moment; everything (servers and PCs) is on the same subnet. It is only a home setup, so not too bad. I will move to a dedicated subnet in the near future.

RB
 
Are there multiple network paths to the iSCSI target? If so, you have to tell the initiator which subnet to use.

Discover Target Portal -> Advanced

Set the local adapter to 'Microsoft iSCSI Initiator'.
Set the Initiator IP to the address on the network connected to your target.
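
On Windows versions that ship the iSCSI PowerShell cmdlets, the same binding can be scripted. A minimal sketch, assuming the target portal is 192.168.2.201, the local iSCSI-facing NIC is 192.168.2.50 and the target IQN is a placeholder:

  # register the portal and pin it to one local address
  New-IscsiTargetPortal -TargetPortalAddress 192.168.2.201 -InitiatorPortalAddress 192.168.2.50

  # list the IQNs discovered, then log in over the same local address
  Get-IscsiTarget
  Connect-IscsiTarget -NodeAddress "iqn.2006-01.com.openfiler:tsn.example" `
      -TargetPortalAddress 192.168.2.201 -InitiatorPortalAddress 192.168.2.50 -IsPersistent $true

On the older free initiator, the equivalent is the Advanced button described above.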
 
Just had a thought. VMware (ESX) doesn't care about the characters in target names. The MS iSCSI initiator does!

It doesn't like characters such as '_' in target names, which in my case gave problems similar to the one you describe.
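
For reference, RFC 3720 only allows lowercase letters, digits and the characters '.', '-' and ':' in an iSCSI qualified name, so an underscore is technically invalid even if some initiators tolerate it. A well-formed Openfiler-style name looks something like:

  iqn.2006-01.com.openfiler:tsn.media-store01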
 

Ah, that could be something. The name may have '-' and '.' separating words.

Thanks, that seems like quite a promising lead :).

RB
 
Let me know how you get on. I'm using NAS4FREE and I currently have 34 iSCSI targets. I've started to see volumes become unavailable intermittently when I have them all connected at the same time (albeit from different clients). I do a lot of development around SQL multi-node clusters. It might be that I'm using an old PC (AMD x64 single core, DDR2) as my NAS and it's starting to run out of grunt? Or maybe Openfiler might be a better option?
 
Well, after trying and still getting nowhere, with issues such as one of my block devices being listed on the status screen but not being available for use, and then being told on a re-install attempt that there were no valid drives to install to at all, I stopped using Openfiler.

I have now set it up via CentOS 6, which was surprisingly quick and easy. It connected first time from Windows Server, I formatted it as GPT, and now I have my 7.4TB RAID 5 array available. I am just moving some data across and am getting between 50MB/s & 75MB/s write with various files from a WD Green 2TB onto hardware RAID 5 over a 1GbE link. I plan to bond some ports and move to a separate switch going forward, but not quite yet.

Still need to test with vSphere and get my second array visible but would imagine it will be fine.

RB
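
In case it helps anyone following the same route, with the SCSI target tools (tgtd) on CentOS 6 a single target over a block device comes down to a few lines. A rough sketch, with the IQN, device path and CHAP details invented for illustration:

  # /etc/tgt/targets.conf
  <target iqn.2012-09.lan.nas:raid5.media>
      # the RAID 5 block device to export
      backing-store /dev/sdb
      # optional CHAP credentials
      incominguser iscsiuser somesecret
  </target>

  # start the daemon, allow iSCSI through the firewall and check the export
  service tgtd start
  chkconfig tgtd on
  iptables -I INPUT -p tcp --dport 3260 -j ACCEPT
  tgt-admin --show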
 
"getting between 50MB/s & 75MB/s write with various files" - RB

I'm getting similar performance according to CrystalDiskMark, but I have dedicated 1Gbit iSCSI links from my SAN to each of my 3 ESX hosts. The individual links are maxing out at times, so I'm thinking of a different approach: team all three iSCSI NICs in my SAN and team the three NICs that I have in each ESX server, connect all of these to a layer 2 1Gbit switch and segment the networks into VLANs. I may then achieve better utilisation of the total bandwidth?
Currently two of the NICs in each ESX server are used as trunk network connections between ESX hosts carrying a number of VLANs (Prod, Bkup and apps).
 
You shouldn't bond NICs for iSCSI traffic. You should instead be using MPIO.

My understanding of MPIO is that you only need it if you are providing multiple paths through separate NICs; otherwise you will see each LUN target multiple times (paths x LUNs). If you team (load balance) your NICs as one interface, I don't believe you need MPIO.
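
For anyone taking the MPIO route on Windows Server, enabling it for iSCSI-attached disks is roughly as follows (a sketch for Server 2008 R2 or later; the mpclaim step triggers a reboot):

  # install the MPIO feature
  Import-Module ServerManager
  Add-WindowsFeature Multipath-IO

  # claim iSCSI bus devices for MPIO
  mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

After that, each extra session added to the same target with 'Enable multi-path' ticked becomes an additional path rather than a duplicate disk.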
 
I am not so concerned about the redundancy provided by multipath as about the available bandwidth provided by bonding in this home environment. For a business environment it may be different.

RB
 
I'm getting similar performance according to CrystalDiskMark, but I have dedicated 1Gbit iSCSI links from my SAN to each of my 3 ESX hosts. The individual links are maxing out at times, so I'm thinking of a different approach: team all three iSCSI NICs in my SAN and team the three NICs that I have in each ESX server, connect all of these to a layer 2 1Gbit switch and segment the networks into VLANs. I may then achieve better utilisation of the total bandwidth?
Currently two of the NICs in each ESX server are used as trunk network connections between ESX hosts carrying a number of VLANs (Prod, Bkup and apps).

Nice to hear, benchmark-wise. Thanks. I will give it a go with my Intel 520 120GB SSD as the starting point, as the Green HDD is probably slowing things down quite a bit.

My links are dedicated from server to SAN in that each server uses a different SAN IP address relating to one of the two NICs installed for iSCSI (a third is available for management). They do, however, go through a shared switch. I would like to put a quad card in, but the old board only has a single PCIe x16 slot (used by the P812) and two PCI slots. Since flashing the HP P812 with newer firmware, it will only work in my HP server (not preferred) or this MSI board. It will not work in the vSphere server (Intel board). I would like to put it in a board with more PCIe slots, but as it is now very much a game of chance whether it will work, I have not taken the plunge.

Your point about VLANs in your setup is a good call. It would save me having to get another switch. I will give that a go first.

Thanks
RB
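
If the iSCSI traffic ends up tagged on the hosts rather than on untagged switch ports, a CentOS-style 802.1Q sub-interface is only a small config file. A sketch, with the parent NIC, VLAN ID and addressing invented:

  # /etc/sysconfig/network-scripts/ifcfg-eth1.20
  DEVICE=eth1.20
  VLAN=yes
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=10.0.20.10
  NETMASK=255.255.255.0

Either way, the idea is the same: keep the storage traffic in its own broadcast domain without buying a second switch.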
 
Interesting. I will be installing an Openfiler VM on my SAN server with a fibre connection to my PC. I guess I will need to set up iSCSI in ESXi so the VM sees the storage, plus on my Win 7 PC?
 
These setups take a long time to get working well, so make sure you plan it well or you will be rebuilding several times.
 
Here are my steps so far:

  1. Build RAID on Dell PERC
  2. Install Openfiler VM on ESXi
  3. Set up storage in Openfiler
  4. Connect storage in ESXi
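
A rough CLI sketch of step 4 on ESXi 5.x, assuming the software iSCSI adapter shows up as vmhba33 and the Openfiler VM answers on 192.168.2.201 (both placeholders):

  # enable the software iSCSI adapter
  esxcli iscsi software set --enabled=true

  # point it at the Openfiler target portal and rescan
  esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.2.201
  esxcli storage core adapter rescan --adapter=vmhba33

The new LUN can then be added as a VMFS datastore from the vSphere client.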
 
I found someone with an ASUS P5Q Pro and a C2D 8400, so I grabbed them, as the ASUS board has dual PCIe 2.0 x16 slots. The HP P812 worked fine first time and I have now moved my quad-port Intel ET and PCI GT into the server. The motherboard NIC and the GT are on my main network, and the quad ET is bonded and on a separate subnet via a VLAN on the switch.

I had lots of wasted time after installing CentOS 6.3 from the Live DVD; it seems something was fighting me for management of the NICs etc. I finally installed CentOS 6.0 minimal, upgraded via yum and then configured the NICs, and it all worked. I installed the SCSI target tools, then configured the VLAN on the switch and single ports on the two initiator servers. I had to open the firewall for them, but everything seemed to go pretty well.

Getting between 80 & 90MB/s transferring large files from a Hitachi Green drive to the array.

ASE001's point about storage provisioning is a good one. For me it is easy... big array for media, small array for virtual machines. I am provisioning in one big lump. For sharing between a number of servers it can become a bit more complex, with XXXGB to server 1 and XXXGB to server 2, all as logical volumes on possibly a single volume group, along with deciding how much to keep unallocated in reserve to add to server 1 or server 2 as their needs outgrow the storage provisioned for them.

RB
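
As a rough illustration of that kind of carve-up with LVM (the device name and sizes are invented), each server gets its own logical volume and anything left in the volume group stays in reserve:

  pvcreate /dev/sdb
  vgcreate vg_iscsi /dev/sdb

  # one volume per server, leaving headroom in the VG
  lvcreate -L 2T -n lv_server1 vg_iscsi
  lvcreate -L 1T -n lv_server2 vg_iscsi
  vgs    # shows how much free space remains unallocated

  # grow a server's volume later as its needs outgrow the original allocation
  lvextend -L +500G /dev/vg_iscsi/lv_server1

Each logical volume can then be exported as its own iSCSI target or LUN.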
 