Resetting udev mappings for an FC array

Anyone any idea how I can reset udev's device mappings? I've got four 14-disk SCSI arrays connected via FC, and I've just added a second connection to each tray. Udev has now mapped each drive more than twice (I've got everything from /dev/sda through /dev/sdz, then /dev/sdaa through /dev/sdza and beyond) - basically it's fubared.

I've been looking in /etc/udev/rules.d/ (or is that conf.d?) - but that just houses the rules that pick the dev names dynamically. I guess I need to find where it's storing the dev mappings (/dev/sdaa UUID=blah) and delete them all (bar the boot drive) so it can map everything afresh.

Having two dev mappings to one drive is fine (and needed), I'll be using multipath to 'bond' the devices, but having something stupid like 600 mappings isn't :p

Hopefully a Linux ninja is about!
 
Are you trying to make FC re-scan?
Code:
echo 1 >/sys/class/fc_host/host?/issue_lip
Where ? is the host number - there's one for each FC port on the host.
Code:
echo "- - -" >/sys/calss/scsi_host/host?/scan
For the same host number as before.
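If you've got several HBA ports, a loop over the sysfs entries saves typing - a rough sketch, doing the same two writes as above:
Code:
for h in /sys/class/fc_host/host*; do
    echo 1 > "$h/issue_lip"            # force a loop initialisation on each FC port
done
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"           # rescan all channels/targets/LUNs (hits local hosts too - harmless)
done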

Are you trying to make the paths line up in /dev?
Seriously, why? Stop doing that. Every path carries the UUID of the actual volume, and multipathd understands this. You shouldn't need to change anything in udev.
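If you want to see what udev has already worked out, the symlinks it maintains are more useful than the raw sd names - roughly this (the scsi_id location and flags vary by distro/udev version):
Code:
ls -l /dev/disk/by-path/         # one symlink per route to each LUN
ls -l /dev/disk/by-id/           # stable names derived from the device IDs
/sbin/scsi_id -g -u -d /dev/sdb  # print the WWID of one path - older versions want -s /block/sdb instead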

And for multipathing:
You just need to un-blacklist the vendor ID and product ID in /etc/multipath.conf, as well as specify any options required by the storage controller.
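A minimal multipath.conf sketch - the vendor/product strings below are placeholders, pull the real ones from multipath -ll or the array documentation:
Code:
blacklist {
    devnode "^sda$"                      # keep the boot drive out of multipath
}
blacklist_exceptions {
    device {
        vendor  "SEAGATE"                # placeholder - your array's vendor string
        product "ST3146855FC"            # placeholder - your array's product string
    }
}
devices {
    device {
        vendor               "SEAGATE"      # placeholder, as above
        product              "ST3146855FC"  # placeholder, as above
        path_grouping_policy multibus       # spread I/O over both paths - check what your controller wants
    }
}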

Then just poke multipathing to add paths with
Code:
udevsettle || udevadm settle || sleep 10 #allow udev to expose all the paths
multipath
multipath -ll #this is just a concise listing


Edit: Are you expecting only two paths per device?
A path is a route from one port to another, so in each fabric you should have (host ports x controller ports) paths. Two host ports and two controller ports means four paths per volume.
 
Thanks for the pointers.

Yeah - there are two connections to each tray (i.e. two cables), so I'd have thought there'd be two device mappings per drive, just as there was one mapping per drive when there was one connection. There are two paths to each device, so each device should have two mappings (no FC switches involved in this, btw...).

As it stands now, there are more than four device mappings per device - tray 2, disk 3 can be /dev/sdbc, /dev/sdza, /dev/sdk, /dev/sdby, /dev/sdw or /dev/sdda - hence I think something's gone a tad wrong, and I want to know if there's a way to reset udev so it maps all the devices afresh, like it did the first time.

Cheers for the reply!
 
Argh - multipath is now set up and working, but scripting the creation of 28 RAID1 arrays isn't going to be nice using the device names in /dev/disk/by-name :(
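The gist of what I'm trying to script (the second WWN pair below is invented - the real list comes from matching drives across trays):
Code:
#!/bin/bash
# Mirror pairs as by-name WWNs - second pair made up for illustration.
PAIRS="32000000c50a2e2a8:32000000c50ad34bd
32000000c50a2e111:32000000c50ad3222"

# Letters for the md names Teradata wants (28 arrays = a..z, aa, ab)
set -- a b c d e f g h i j k l m n o p q r s t u v w x y z aa ab
for p in $PAIRS; do
    mdadm -q --create "/dev/md/md$1" --level=1 --raid-devices=2 \
        "/dev/disk/by-name/${p%%:*}" "/dev/disk/by-name/${p##*:}"
    shift
done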
 
Right - got the script sorted, wiped the partition table from all 56 disks, but now I'm having issues with mdadm.

On an Ubuntu box, you can use the /dev/md directory to create your RAID devices, a la:

Code:
mdadm -q --create /dev/md/mda --level=1 --raid-devices=2 /dev/sde /dev/sdb #A1B1

So on the SLES box I'm doing:

Code:
# mdadm -q --create /dev/md/mda --level=1 --raid-devices=2 /dev/disk/by-name/32000000c50a2e2a8 /dev/disk/by-name/32000000c50ad34bd
mdadm: /dev/md/mda does not exist and is not a 'standard' name so it cannot be created
So SLES doesn't seem to like you creating bespoke device names - which is a problem, as Teradata requires that the device name ends in a letter rather than a number.

What's more, if I try to create a RAID array using /dev/md[0-9], like so:

Code:
mdadm -q --create /dev/md0 --level=1 --raid-devices=2 /dev/disk/by-name/32000000c50a2e2a8 /dev/disk/by-name/32000000c50ad34bd

I get an error message:

Code:
mdadm: Cannot open /dev/disk/by-name/32000000c50a2e2a8: Device or resource busy
mdadm: Cannot open /dev/disk/by-name/32000000c50ad34bd: Device or resource busy
mdadm: create aborted

:(

Any ideas would be appreciated!
 
Never mind - sorted it with a reboot, and the first pair is syncing. Don't think TD is going to like the numbers though - anyone any idea how to coax SLES into allowing bespoke md names?
 
For future reference, a harsher reset of FC would be:

Code:
multipath -F
service multipathd stop
modprobe -r qla2xxx #for qlogic, emulex is lpfc...
modprobe qla2xxx
udevsettle
service multipathd start
multipath -ll

And yes, without a switch you should indeed be seeing two paths per LUN. Not sure how you got more - maybe a quirk of logging in to the drives directly rather than going via a storage controller.
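And for the "Device or resource busy" you hit - before reaching for a reboot, it's usually md or device-mapper already holding the disks. A few checks worth running (the fuser target is just an example device):
Code:
cat /proc/mdstat       # any arrays already assembled on those disks?
dmsetup ls             # device-mapper maps sitting on the paths
multipath -ll          # which sd devices each multipath map has claimed
fuser -v /dev/dm-0     # example - list who has a given device node open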
 
Cheers for the pointers :) Any ideas on the mdadm naming convention problem? I'm 99% sure that Teradata isn't going to like the names ending with a number... From the Teradata requirements:

The physical device names must end in a letter (e.g., /dev/sdc or /dev/cciss/c0d0a).
The partition device names must end in a number immediately after the letter of the physical device name (e.g., /dev/sdc1 or /dev/cciss/c0d0a1).

:(
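Wondering if I can get away with creating numbered arrays and symlinking the letter names on top - completely untested, and I don't know whether TD follows symlinks, or whether this mdadm is new enough for mdadm.conf's "CREATE names=yes" instead:
Code:
mdadm -q --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/disk/by-name/32000000c50a2e2a8 /dev/disk/by-name/32000000c50ad34bd
mkdir -p /dev/md
ln -s /dev/md0 /dev/md/mda   # untested - TD may or may not follow the symlink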
 