[lvm-team] Soliciting feature requests for development of an LVM library / API
Dave Wysochanski
dwysocha at redhat.com
Tue Dec 16 08:33:53 PST 2008
On Mon, 2008-12-15 at 20:46 +0000, Alasdair G Kergon wrote:
> On Mon, Dec 15, 2008 at 03:00:41PM -0500, David Zeuthen wrote:
> > I'm not sure we want to extend the udev database; in my view it's
> > supposed to be a small and efficient mechanism that allows one to
> > annotate directories in /sys with additional information that we collect
> > in user space.
>
> So we need another database to 'wrap' around the udev one.
>
> Could the udev database at least store 'claiming subsystem' information?
> e.g. that device X is 'claimed' by LVM2; device 'Y' is 'claimed' by md etc.
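A minimal sketch of what such a record could look like in the db (the
ID_CLAIMED_BY property name here is hypothetical, not something udev
defines today):

P: /block/sda
E: ID_CLAIMED_BY=lvm2
P: /block/sdb1
E: ID_CLAIMED_BY=md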
>
> > FWIW, in many ways, one can think of the udev database as an extension
> > of sysfs, insofar as attributes in sysfs represent information / state
> > exported by the kernel driver, while attributes in the udev database
> > represent information / state exported by user space programs /
> > daemons.
>
> Just state, not new classes of entities that have no correspondence with
> sysfs (such as 'Volume Groups')?
>
> > 3. Finally we can discuss how this information can be used to implement
> > policy by writing very simple udev rules that leverage the info in
> > the udev database defined in 1.
> > For example, one thing many people (desktop developers like me,
> > but also people working on initramfs/booting (jeremy, davej),
> > and also anaconda) probably want to request is that device-mapper
> > and LVM ship with udev rules that use the information defined in 1.
> > above to implement a policy that automatically assembles LVs from
> > PVs. If we solve 1. correctly, this *shouldn't* be more complicated
> > than a simple one-liner udev rule.
>
> Those rules - triggers - depend on these additional entities (Volume Groups).
>
> The way we store and index and trigger using this Volume Group information is
> critical to this whole exercise and has to be resolved before we can really
> make much more progress IMHO.
>
Could we do something like this: add an LVM2_VG_PARTIAL=0/1 environment
variable to indicate whether all devices in the VG are currently present
in the system? Similarly, could we add an
LVM2_LV<insertlvname>_PARTIAL=0/1 variable for each LV?
Example:
vg1 contains 2 PVs: /dev/sda and /dev/sdb
vg1 contains 1 striped LV (on /dev/sda & /dev/sdb): lv1
Assume no devices are currently plugged in (or that we are in early
boot). Then the devices get plugged in sequentially, starting with
/dev/sda:
<add event /dev/sda>
P: /block/sda
E: LVM2_VG_PARTIAL=1
E: LVM2_PV_UUID=in3VUB-M9EB-3RKm-5R8Y-HyLQ-1YNj-VkjkUW
E: LVM2_VG_NAME=vg1
E: LVM2_LV1_PARTIAL=1
<add event /dev/sdb>
P: /block/sdb
E: LVM2_VG_PARTIAL=0
E: LVM2_PV_UUID=VkjkUW-5R8Y-3RKm-M9EB-1YNj-HyLQ-in3VUB
E: LVM2_VG_NAME=vg1
E: LVM2_LV1_PARTIAL=0
(the same event also updates /dev/sda's existing record:)
P: /block/sda
E: LVM2_VG_PARTIAL=0
E: LVM2_PV_UUID=in3VUB-M9EB-3RKm-5R8Y-HyLQ-1YNj-VkjkUW
E: LVM2_VG_NAME=vg1
E: LVM2_LV1_PARTIAL=0
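With a completeness flag like that in the db, the "simple one-liner"
rule David mentions could look roughly like this (a sketch only, not a
tested rule; whether RUN is the right place to invoke vgchange, and the
locking implications of doing so, would need discussion):

ACTION=="add|change", ENV{LVM2_VG_PARTIAL}=="0", ENV{LVM2_VG_NAME}=="?*", RUN+="/sbin/lvm vgchange -a y $env{LVM2_VG_NAME}"

i.e. activate the VG as soon as its last PV shows up, and do nothing
for partial VGs.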
Note this doesn't work as well if /dev/sda is a PV with no metadata
(LVM2_VG_NAME will be '' and LVM2_VG_PARTIAL and LVM2_LV1_PARTIAL are
undefined).
The main problem I see is that when the second event fires, we don't
know which other devices are included in the VG. So how can we avoid
scanning the whole system again, or querying the udev database for every
device related to this VG?
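For reference, the brute-force version of that udev database query
would be something like this (assuming the LVM2_* variables above
actually land in the db; "udevadm info --export-db" dumps every record,
and on older udev the tool is udevinfo instead):

udevadm info --export-db | awk '/^P:/ { dev = $2 } /^E: LVM2_VG_NAME=vg1$/ { print dev }'

That walks every device record in the system just to find the PVs of
one VG, which is exactly the kind of full scan I'd like to avoid.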