Resetting udev mappings for a FC array

Anyone any idea how I can reset udev's device mappings? I've got four 14-disk SCSI arrays connected via FC, and I've just added another connection for each tray. Udev has now mapped each drive more than twice (I've got everything from /dev/sda through /dev/sdz, then /dev/sdaa through /dev/sdza) - basically it's fubared.

I've been looking in /etc/udev/rules.d/ (or is it conf.d?) - but that just houses the rules that pick the dev names dynamically - I guess I need to find where it's storing the dev mappings (/dev/sdaa UUID=blah) and delete them all (bar the boot drive) so it can do the mapping again from scratch.
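For what it's worth, my working assumption (not verified on this setup) is that the sdX names come from the kernel's scan order rather than a mapping file udev keeps, so a clean slate means deleting the SCSI devices and rescanning the HBAs. A sketch, dry-run by default so it only prints what it would do; the sda-is-the-boot-drive and host numbering details are assumptions:

```shell
#!/bin/sh
# Sketch only: delete every sd device except the boot drive, then rescan the
# FC HBAs so the kernel renumbers from scratch. DRY_RUN=1 (the default) just
# prints each command instead of running it.
: "${DRY_RUN:=1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        eval "$*"
    fi
}

# sda is assumed to be the boot drive here -- adjust to taste.
for d in /sys/block/sd[b-z]*/device/delete; do
    [ -e "$d" ] && run "echo 1 > $d"
done

# '- - -' means scan all channels/targets/LUNs on each HBA.
for h in /sys/class/scsi_host/host*/scan; do
    [ -e "$h" ] && run "echo '- - -' > $h"
done
```

Flip DRY_RUN to 0 only once the printed list looks sane - deleting the boot drive's device node mid-session would ruin your day.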

Having two dev mappings to one drive is fine (and needed), I'll be using multipath to 'bond' the devices, but having something stupid like 600 mappings isn't :p
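For reference, the multipath 'bonding' usually just needs a minimal /etc/multipath.conf along these lines (the blacklist entry assumes sda is the boot drive):

```
defaults {
    user_friendly_names yes    # gives /dev/mapper/mpathN style names
}
blacklist {
    devnode "^sda$"            # keep the boot drive out of multipath
}
```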

Hopefully a Linux ninja is about!
 
Thanks for the pointers.

Yeah - there are two connections to each tray (i.e. two cables), so I'd expect two device mappings per drive, just as there was one mapping when there was one connection. There are two paths to each device, so each device should have two mappings (there are no FC switches involved in this, btw...).

As it stands, there are more than four device mappings per device - tray 2 disk 3 can be /dev/sdbc, /dev/sdza, /dev/sdk, /dev/sdby, /dev/sdw or /dev/sdda - hence I think something's gone a tad wrong, and I want to know if there's a way to reset udev so it maps all the devices as it did the first time.

Cheers for the reply!
 
Argh - multipath is now set up and working, but scripting the creation of 28 RAID1 arrays isn't going to be nice using the devices in /dev/disk/by-name :(
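First stab at the scripting - a sketch that just prints the mdadm commands from a hand-written pairing file, since the actual pairing (which two WWNs are mirror partners) has to come from the tray/slot map. The file format is my own invention:

```shell
#!/bin/sh
# Reads "deviceA deviceB" pairs from stdin (one mirror per line) and prints
# the matching mdadm --create command for each. Nothing touches the disks
# until you pipe the output to sh.
emit_pairs() {
    i=0
    while read -r a b; do
        echo "mdadm -q --create /dev/md$i --level=1 --raid-devices=2 $a $b"
        i=$((i + 1))
    done
}
# usage: emit_pairs < pairs.txt | sh
# pairs.txt lines look like:
#   /dev/disk/by-name/32000000c50a2e2a8 /dev/disk/by-name/32000000c50ad34bd
```

Review the printed commands before piping them anywhere - 28 arrays is a lot of typos waiting to happen.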
 
Right - got the script sorted, wiped the partition table from all 56 disks, but now I'm having issues with mdadm.

On an Ubuntu box, you can use the /dev/md directory to create your RAID devices, a la:

Code:
mdadm -q --create /dev/md/mda --level=1 --raid-devices=2 /dev/sde /dev/sdb #A1B1

So on the SLES box I'm doing:

Code:
# mdadm -q --create /dev/md/mda --level=1 --raid-devices=2 /dev/disk/by-name/32000000c50a2e2a8 /dev/disk/by-name/32000000c50ad34bd
mdadm: /dev/md/mda does not exist and is not a 'standard' name so it cannot be created
So SLES doesn't seem to like you creating bespoke device names - but Teradata requires that the device name ends in a letter rather than a number.

What's more, if I try to create a RAID array using /dev/md[0-9] like so:

Code:
mdadm -q --create /dev/md0 --level=1 --raid-devices=2 /dev/disk/by-name/32000000c50a2e2a8 /dev/disk/by-name/32000000c50ad34bd

I get an error message:

Code:
mdadm: Cannot open /dev/disk/by-name/32000000c50a2e2a8: Device or resource busy
mdadm: Cannot open /dev/disk/by-name/32000000c50ad34bd: Device or resource busy
mdadm: create aborted

:(

Any ideas would be appreciated!
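In case it helps anyone else hitting the 'busy' error - the usual suspects are an auto-assembled md array or a device-mapper map already holding the disk. A quick-look sketch (the /dev/sdk name is just an example; the helper expects a real /dev/sdX node, not a by-name symlink):

```shell
#!/bin/sh
# Builds the sysfs path that lists whatever currently holds a block device
# (md arrays and dm/multipath maps show up in there as symlinks).
holders_path() {
    echo "/sys/block/$(basename "$1")/holders"
}

diagnose_busy() {
    cat /proc/mdstat 2>/dev/null           # any auto-assembled arrays?
    ls "$(holders_path "$1")" 2>/dev/null  # who holds the device itself
    dmsetup table 2>/dev/null              # device-mapper / multipath maps
}
# usage: diagnose_busy /dev/sdk
```

If /proc/mdstat shows a stray array, `mdadm --stop /dev/mdN` on it should free the disks without a reboot.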
 
Never mind - sorted it with a reboot, and the first pair are syncing. Don't think TD is going to like the numbers though - anyone any idea how to coax SLES into allowing bespoke md names?
 
Cheers for the pointers :) Any ideas on the mdadm naming conventions problem? I'm 99% sure that Teradata isn't going to like them ending with a number...

The physical device names must end in a letter (e.g., /dev/sdc or /dev/cciss/c0d0a).
The partition device names must end in a number immediately after the letter of the physical device name (e.g., /dev/sdc1 or /dev/cciss/c0d0a1).

:(
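On the bespoke-names front, two things that might be worth a try - unverified on this SLES box, and note that --name needs a v1.x superblock while older SLES releases default to 0.90 metadata. The sketch just prints the candidate commands rather than running them (device names reuse the /dev/sde and /dev/sdb pair from the Ubuntu example above):

```shell
#!/bin/sh
# Prints (doesn't run) two candidate workarounds for letter-suffixed md names.
md_name_workaround() {
    # 1) Create with a standard node, but record a name in the superblock:
    echo "mdadm -q --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.2 --name=mda /dev/sde /dev/sdb"
    # 2) Or simply hand Teradata a stable letter-suffixed alias:
    echo "ln -s /dev/md0 /dev/md/mda"
}

md_name_workaround
```

Option 2 is the blunter instrument but sidesteps the metadata-version question entirely.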
 