Ubuntu Server 14.04 - iSCSI and Multipath

Hi all

I have been given a project at work that is somewhat out of my ballpark. I need to install Ubuntu Server and attach an iSCSI LUN to it for use with Percona (MySQL) Server.

So I have Ubuntu installed and the iSCSI LUN presented to the server. Multipathing is installed and working, however I am still seeing 8 disk drives for the presented iSCSI LUN. I don't really know what I am meant to be seeing as this is all new to me. I installed the server with LVM too, which I am trying to get my head around.
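
In case it helps, this is roughly how the LUN was attached with open-iscsi (the portal IP below is a placeholder, not my real one):

iscsiadm -m discovery -t sendtargets -p 192.168.10.10   # hypothetical portal IP on the array
iscsiadm -m node --login                                 # log in to all discovered targets/portals
iscsiadm -m session                                      # lists one session per path

Am I right in thinking the 8 sdX devices are just the 8 iSCSI sessions/paths to the same LUN?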

Anyway, any guidance for me here perhaps? Output of cat /proc/partitions:

root@NDC-LMS-PRC-S01:~# cat /proc/partitions
major minor #blocks name

8 0 143338560 sda
8 1 249856 sda1
8 2 1 sda2
8 5 143088128 sda5
252 0 9764864 dm-0
252 1 5345280 dm-1
252 2 127950848 dm-2
8 16 52428800 sdb
8 32 52428800 sdc
8 48 52428800 sdd
8 80 52428800 sdf
8 64 52428800 sde
8 96 52428800 sdg
8 112 52428800 sdh
8 128 52428800 sdi
252 3 52428800 dm-3
root@NDC-LMS-PRC-S01:~#

sdb through sdi is what I am sure I shouldn't be seeing? Is dm-3 the LVM map for the iSCSI LUN?
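
Presumably something like this would show what dm-3 actually maps to, though I'm mostly guessing from the man pages here:

dmsetup ls --tree     # each device-mapper device and what sits underneath it
lsblk                 # tree of disks, partitions, dm devices and mount points
ls -l /dev/mapper/    # names symlinked to the dm-N nodes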

Thanks for your help, really don't know what I am doing here... as you can probably tell.
 
I created a partition on the disk and now it seems I have 16 instances of the disk showing: 8 physical devices and 8 partitions? Also 2 LVM mappings?

root@NDC-LMS-PRC-S01:~# cat /proc/partitions
major minor #blocks name

8 0 143338560 sda
8 1 249856 sda1
8 2 1 sda2
8 5 143088128 sda5
252 0 9764864 dm-0
252 1 5345280 dm-1
252 2 127950848 dm-2
8 16 52428800 sdb
8 17 52424704 sdb1
8 48 52428800 sdd
8 49 52424704 sdd1
8 32 52428800 sdc
8 33 52424704 sdc1
8 112 52428800 sdh
8 113 52424704 sdh1
8 80 52428800 sdf
8 81 52424704 sdf1
8 96 52428800 sdg
8 97 52424704 sdg1
8 64 52428800 sde
8 65 52424704 sde1
8 128 52428800 sdi
8 129 52424704 sdi1
252 3 52428800 dm-3
252 4 52424704 dm-4
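
From what I can gather, the usual approach is to partition the multipath device itself rather than one of the sdX paths, something like the below (the long name is the LUN's WWID under /dev/mapper, and the partition bounds are just illustrative):

parted -s /dev/mapper/3624a9370931147961ad4ca080001102d mklabel gpt
parted -s /dev/mapper/3624a9370931147961ad4ca080001102d mkpart primary 0% 100%
kpartx -a /dev/mapper/3624a9370931147961ad4ca080001102d   # creates the partition map that shows up as dm-4

So I'm guessing sdb1 through sdi1 are the same partition seen once per path, and dm-4 is the partition map on top of the multipath device?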
 
What's the output of multipath -ll?

root@NDC-LMS-PRC-S01:~# multipath -ll
Error: : Inappropriate ioctl for device
cciss TUR failed in CCISS_GETLUNINFO: Inappropriate ioctl for device
PureStorage (3624a9370931147961ad4ca080001102d) dm-3 PURE ,FlashArray
size=50G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 1:0:0:1 sdb 8:16 active ready running
|- 2:0:0:1 sdc 8:32 active ready running
|- 3:0:0:1 sdd 8:48 active ready running
|- 4:0:0:1 sde 8:64 active ready running
|- 5:0:0:1 sdf 8:80 active ready running
|- 6:0:0:1 sdg 8:96 active ready running
|- 7:0:0:1 sdi 8:128 active ready running
`- 8:0:0:1 sdh 8:112 active ready running
root@NDC-LMS-PRC-S01:~#
 
So you have 8 disks/paths presented from your SAN, all to a single 50G LUN/volume on your flash array.

You may wish to read the SAN documentation for the correct multipath settings, as you're failing your TUR (Test Unit Readiness) and you also have problems with your ioctl (input/output control) calls. You can modify your multipath settings under /etc/multipath.conf.

You can also run echo "show config" | multipathd -k to get the default configs, or look at the example configs shipped with the multipath package (/usr/share/doc/device-mapper-multipath-*/multipath.conf.defaults on Red Hat based distros, or under /usr/share/doc/multipath-tools/ on Ubuntu). You want to match on vendor, product or WWID.
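
To give an idea of the shape of it, a device section matching on vendor/product looks something like this. The vendor/product strings are taken from your multipath -ll output above; the rest of the values are placeholders, so take the real ones from the Pure Storage documentation:

devices {
    device {
        vendor                "PURE"
        product               "FlashArray"
        path_grouping_policy  multibus          # placeholder, use the vendor recommended value
        path_selector         "round-robin 0"   # placeholder
        path_checker          tur               # placeholder
    }
}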

You can also run multipath -v3 to see what multipath is going to do with your disks/config before you restart the multipathd service.

You also probably want to make use of user_friendly_names, so you can work with, say, /dev/mapper/mpatha as your multipath device. You can then place logical volumes, partitions, filesystems etc. on top of that as though it was a normal disk.
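
As a rough sketch of that part of /etc/multipath.conf (the alias is just an example name, pick whatever suits):

defaults {
    user_friendly_names yes
}

multipaths {
    multipath {
        wwid  3624a9370931147961ad4ca080001102d   # your LUN's WWID from multipath -ll
        alias mysql_data                          # example alias, appears as /dev/mapper/mysql_data
    }
}

After changing the file, restart multipath-tools (or run multipath -r) and the alias should appear under /dev/mapper. You can then pvcreate /dev/mapper/mysql_data and build LVM on it, or put a filesystem straight on top.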
 
Thank you for all your help; I have pretty much done everything you have said.

I got the multipath.conf settings from my vendor (Pure Storage) and applied them. I still get the ioctl error. I blacklisted everything from multipath apart from the WWID of the SAN. Still errors for ioctl.
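
For anyone else who lands on this thread, the blacklist part ended up roughly along these lines, with the WWID taken from the multipath -ll output earlier (Pure's recommended device settings sit alongside it, but get those from their documentation rather than from me):

blacklist {
    wwid ".*"                                    # ignore everything by default
}

blacklist_exceptions {
    wwid "3624a9370931147961ad4ca080001102d"     # only multipath the Pure LUN
}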

However, everything is working. I am using the friendly name I specified to mount the storage. I moved the MySQL data onto the storage, gave it a couple of reboots and made sure it came back up, and everything *seems* ok. I guess I will find out at some point :p
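
In case it's useful to anyone later, the /etc/fstab line ends up looking roughly like this. The alias, mount point and filesystem type are just what I picked, and _netdev is the usual advice for iSCSI-backed mounts so it isn't mounted before the network and iSCSI sessions are up:

/dev/mapper/mysql_data  /var/lib/mysql  ext4  _netdev,noatime  0  2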

Again, thank you for taking the time with me :)
 