[systemd-devel] 'udevadm settle' breaks lvm on top of imsm raid

Lennart Poettering lennart at poettering.net
Fri May 29 02:12:34 PDT 2015


On Fri, 29.05.15 11:15, Oleg Samarin (osamarin68 at gmail.com) wrote:

> Thanks,
> 
> I did some more debugging with LVM and realised that lvm always uses the
> last device it has scanned. Scanning is triggered by udev rules running
> the "lvm pvscan --cache <device>" command. So the reason /dev/sdb2 is
> used instead of /dev/md126p2 is that udev runs lvm in the following
> order (a possible lvm.conf workaround is sketched after the list):
> 1. lvm pvscan --cache /dev/md126p2
> 2. lvm pvscan --cache /dev/sda2
> 3. lvm pvscan --cache /dev/sdb2
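> 
> In theory a filter in /etc/lvm/lvm.conf should make pvscan ignore the
> raw member disks. An untested sketch (device names taken from this
> machine, so adjust as needed):
> 
>     # /etc/lvm/lvm.conf -- prefer md nodes, reject the imsm member disks
>     devices {
>         # skip devices that carry md raid component metadata
>         md_component_detection = 1
>         # accept /dev/md*, reject /dev/sda*/sdb*, accept everything else
>         filter = [ "a|^/dev/md|", "r|^/dev/sd[ab]|", "a|.*|" ]
>     }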
> 
> But /dev/sda2 and /dev/sdb2 did not exist at all before anaconda was run.
> 
> [root at localhost ~]# ls -ld /dev/md* /dev/sd*
> drwxr-xr-x. 2 root root      120 May 29 03:43 /dev/md
> brw-rw----. 1 root disk   9, 126 May 29 03:43 /dev/md126
> brw-rw----. 1 root disk 259,   0 May 29 03:43 /dev/md126p1
> brw-rw----. 1 root disk 259,   1 May 29 03:43 /dev/md126p2
> brw-rw----. 1 root disk   9, 127 May 29 03:43 /dev/md127
> brw-rw----. 1 root disk   8,   0 May 29 03:43 /dev/sda
> brw-rw----. 1 root disk   8,  16 May 29 03:43 /dev/sdb
> brw-rw----. 1 root disk   8,  32 May 29 03:43 /dev/sdc
> brw-rw----. 1 root disk   8,  33 May 29 03:43 /dev/sdc1
> brw-rw----. 1 root disk   8,  34 May 29 03:43 /dev/sdc2
> brw-rw----. 1 root disk   8,  48 May 29 03:43 /dev/sdd
> brw-rw----. 1 root disk   8,  49 May 29 03:43 /dev/sdd1
> brw-rw----. 1 root disk   8,  50 May 29 03:43 /dev/sdd2
> brw-rw----. 1 root disk   8,  64 May 29 03:43 /dev/sde
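> 
> One way to double-check that sda/sdb really are imsm raid members
> (sketch, output not copied from this box):
> 
>     # mdadm --examine /dev/sda    # should report Intel/imsm metadata
>     # mdadm --detail /dev/md126   # should list sda and sdb as members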
> 
> They appear only after launching anaconda:
> 
> [root at localhost ~]# ls -ld /dev/md* /dev/sd*
> drwxr-xr-x. 2 root root      120 May 29 03:47 /dev/md
> brw-rw----. 1 root disk   9, 126 May 29 03:47 /dev/md126
> brw-rw----. 1 root disk 259,   2 May 29 03:47 /dev/md126p1
> brw-rw----. 1 root disk 259,   3 May 29 03:47 /dev/md126p2
> brw-rw----. 1 root disk   9, 127 May 29 03:46 /dev/md127
> brw-rw----. 1 root disk   8,   0 May 29 03:47 /dev/sda
> brw-rw----. 1 root disk   8,   1 May 29 03:47 /dev/sda1
> brw-rw----. 1 root disk   8,   2 May 29 03:47 /dev/sda2
> brw-rw----. 1 root disk   8,  16 May 29 03:47 /dev/sdb
> brw-rw----. 1 root disk   8,  17 May 29 03:47 /dev/sdb1
> brw-rw----. 1 root disk   8,  18 May 29 03:47 /dev/sdb2
> brw-rw----. 1 root disk   8,  32 May 29 03:46 /dev/sdc
> brw-rw----. 1 root disk   8,  33 May 29 03:46 /dev/sdc1
> brw-rw----. 1 root disk   8,  34 May 29 03:46 /dev/sdc2
> brw-rw----. 1 root disk   8,  48 May 29 03:47 /dev/sdd
> brw-rw----. 1 root disk   8,  49 May 29 03:47 /dev/sdd1
> brw-rw----. 1 root disk   8,  50 May 29 03:47 /dev/sdd2
> brw-rw----. 1 root disk   8,  64 May 29 03:47 /dev/sde
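> 
> To see what exactly creates them, one could watch block-device uevents
> while anaconda starts, e.g. (rough idea):
> 
>     # udevadm monitor --kernel --udev --subsystem-match=block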
> 
> So the root problem is not in lvm. The root problem is why the
> "/dev/sd[ab]?" devices appear at all. They should not exist, because
> /dev/sd[ab] are parts of the /dev/md126 raid.

Nope. Things don't work that way. If you have any form of software
raid then both the underlying block devices and the resulting virtual
block device will be exposed with device nodes in /dev. The underlying
block devices don't suddenly disappear just because they are used in a
RAID setup...
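
For example, lsblk will show the member disks and the assembled array
side by side (schematic output, raid level and sizes illustrative only):

    # lsblk /dev/sda
    NAME        MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
    sda           8:0    0   1T  0 disk
    └─md126       9:126  0   1T  0 raid1
      ├─md126p1 259:0    0 500M  0 part
      └─md126p2 259:1    0 500G  0 part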

Again, please ask the MD raid or LVM people for help. This is a
systemd mailing list, and this is not the right place to ask questions
about MD raid or LVM.

Thank you,

Lennart

-- 
Lennart Poettering, Red Hat

