Universal package specification

Eugene Gorodinsky e.gorodinsky at gmail.com
Sun Nov 29 10:48:55 PST 2009


2009/11/29  <madduck at madduck.net>:
> also sprach Eugene Gorodinsky <e.gorodinsky at gmail.com> [2009.11.29.1618 +0100]:
>> It's the same here, except that instead of money you're investing
>> time, and instead of education you get a more efficient system. At
>> least that's how I see it. I might be wrong and this system might
>> not be more efficient, if so then I would like to know where I'm
>> wrong.
>
> The point being: is the amount of time required to devise and
> implement this to the point where it's usable at the distro scale,
> and in critical use-cases, worth the efficiency gains? I don't think
> we can put a number to the time required, and this being FLOSS we
> also don't need to, really, but we do need to be realistic on the
> potential gains. This is where I am a bit sceptical.
>
>> In the best case scenario - no work needs to be done to get
>> certain software to work in a certain distro. In the worst case
>> scenario no more work needs to be done than is already being done
>> by the package maintainers. Moreover, bug reports in the best case
>> scenario will be contained in one bug database, and bugfixes will
>> be applied to upstream.
>
> You are not talking about a universal package specification anymore,
> you are talking about a universal package and distro format. Most of
> the work done by Debian maintainers is not in specifying and
> maintaining metadata, but in tying software in with the rest of the
> system. It's probably similar for other distros. If your package
> should install and integrate well with my distro, and vice versa,
> then we need to standardise a whole lot more than package specs,
> names, versioning, and filesystem layout. We'd need to standardise
> network configuration, /etc policy, init systems, kernels, and more.
>
Not really. It's possible to standardise just on shared libraries, for
example. Provided the software is self-contained, it will work on any
compliant distribution as long as it uses only those libraries to
interact with the system. Add DBus interfaces to that and you can give
developers even more freedom. In my experience most applications need
only those two things. Some applications access devices directly, of
course, and others read the /proc or /sys filesystems; that
functionality should also live in shared libraries provided by the
distros. This is much more flexible, and much less complex, than
standardising everything and effectively making every distro
a lookalike of the others.
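As a sketch of what I mean (the function name and the uptime example
are mine, purely illustrative): the application calls only a stable
interface from a distro-provided library, and whether that library
reads /proc, /sys, or something else entirely is the distro's
business.

```python
# Illustrative sketch: a "universal" package calls only this stable,
# distro-provided API; the Linux-specific detail of reading
# /proc/uptime stays hidden behind the library boundary.

def system_uptime_seconds():
    """Stable interface; the distro supplies the backend."""
    with open("/proc/uptime") as f:       # implementation detail
        return float(f.read().split()[0])

# The application itself never touches /proc or /sys:
assert system_uptime_seconds() > 0.0
```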

> Yes, in some ways that would be nice, but there'd be disadvantages.
> First of all, as I just wrote, dependence on commercial players
> would shrink, so don't expect much support from them. Second, the
> one-size-fits-all distro just doesn't exist. Debian calls itself
> "universal", and it probably gets closest to being a distro that can
> be used on the desktop as well as a high-security server, but the
> split is already pretty big.
>
> For instance, udev and *Kit are all considered progress by some
> people, but they are absolutely unnecessary on systems that just
> need to serve and in fact might decrease their stability. Debian was
> basically forced to go down that route due to the influence of
> desktop people on the kernel development, but there are other areas
> where we're still balancing as best as we can.
>
> I seem to recall Fedora/RedHat talking about standardising on
> NetworkManager. This means that all their networking-related
> software will interface with NetworkManager, and only
> NetworkManager. Thus, a distro that doesn't want to enforce one
> network configuration technique upon its users (you don't need to
> manage network connections on a server in any other way than you
> could probably better do with shell scripts) would never be able to
> benefit from Fedora's work on universal packages, because Fedora
> would not maintain their packages for Debian's ifupdown as well.
>
> However, Fedora and Debian could collaborate and use the same DVCS
> repository for any given package. This is what we are trying to
> achieve over at vcs-pkg.org: there's a set of common branches, a set
> of feature branches which I can cherry-pick if I want a given
> feature for my distro, and a set of distro-specific branches. Then
> we all build our binaries out of the same repository, binaries
> specifically assembled for the distro's use-case, at the same
> time as redundancy is minimised because all binaries stem from more
> or less the same source.
>
> Software product lines across distributions. How awesome.
>
>> >> > http://wiki.debian.org/Projects/ImprovedDpkgShlibdeps
>> >> >
>> >> I'm only aware of one such library - glibc. I don't know why this is
>> >> done though. Which other libraries do this?
>> >
>> > zlib, gcrypt, and others, in Debian at least.
>> >
>> Ok, thanks.
>
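For anyone else following along, the idea behind that wiki page, as
I understand it, boils down to something like this; the library name,
symbols, and versions below are invented for illustration:

```python
# Rough sketch of symbol-based dependency generation (the idea behind
# ImprovedDpkgShlibdeps, as I understand it): instead of depending on
# the library package as a whole, look up each symbol the binary
# actually uses and depend only on the oldest library version that
# provides all of them.  Everything here is made up for illustration.

symbols_file = {            # symbol -> version it first appeared in
    "foo_init":     "1.0",
    "foo_compress": "1.2",
    "foo_stream":   "1.4",
}

def minimal_dependency(used_symbols, table, package="libfoo1"):
    # The binary needs every symbol it uses, so the dependency is the
    # highest "first appeared in" version among them.  Plain string
    # comparison is fine for these single-digit toy versions.
    needed = max(table[sym] for sym in used_symbols)
    return "%s (>= %s)" % (package, needed)

print(minimal_dependency({"foo_init", "foo_compress"}, symbols_file))
# -> libfoo1 (>= 1.2)
```

A binary that uses only old symbols thus gets a looser dependency than
one that uses the newest ones, instead of every binary demanding the
latest library version.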
> You might want to read up on the discussion around binary
> compatibility between Debian and Ubuntu to get a feeling of the
> level of collaboration/standardisation/synchronicity required for
> what you are trying to achieve. Synchronicity, or rather the lack of
> it, is what caused Debian and Ubuntu, two very close relatives, to
> be binary *in*compatible: Debian moved too slowly for Ubuntu, and
> Ubuntu moved too fast for Debian. You could call us lazy and them
> innovative, you could say
> we're volunteers and they are paid workers, or — and this is the
> only real reason I think — you could acknowledge that Debian focuses
> a lot more on stability and Ubuntu on the cutting edge, and you
> cannot bridge that gap indefinitely and universally.
>
>> I'm not touting this as an advantage, it simply seems logical to
>> do this if you want to conserve space/bandwidth.
>
> Debian Packages files are indices, and they contain all the
> information because only then are you able to search the indices.
> Therefore, if you want to avoid having to download all packages to
> be able to search across them, you need this index.
>
> Also, APT needs the meta data to resolve dependencies.
>
> To remove redundancy, you'd have to remove the metadata from the
> binary packages. That's surely doable, but would also mean that
> a single .deb file would become useless: you could not obtain meta
> data from it, and in particular, dpkg could not enforce policy and
> prevent installation of packages with unresolved dependencies.
>
> Debian has the advantage of having a single repository from which we
> provide all software. Therefore, we could actually give up the
> per-package meta data and still get a working system, although
> a brittle one.
>
> But in the RPM world, it's way more common to download .rpm files
> and install them directly, or at least it used to be. In that case,
> you need the per-package data, and if you do want to provide a yum
> repository for your users, you'll need the indices.
>
> Between the two extremes, there's no common ground. You need
> multiple representations of the same data, and that's not redundant.
> Yes, it means additional bandwidth, but I don't see a way around it.
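Agreed that the two representations serve different consumers. A toy
illustration of why both exist (the stanza layout follows the real
"Field: value" Packages format, but the package itself is made up):
dpkg needs the stanza inside each .deb to check dependencies offline,
while apt needs the aggregated index to search and resolve without
downloading anything.

```python
# Toy illustration: the same RFC822-style metadata stanza lives inside
# each package (for offline dependency checks) and, aggregated, in the
# Packages index (for searching/resolving without downloading).  The
# package data here is invented.

stanza = """\
Package: hello
Version: 2.4-1
Depends: libc6 (>= 2.7)
Description: friendly greeter (invented example)"""

def parse_stanza(text):
    # One "Field: value" pair per line, as in a Packages index entry.
    fields = {}
    for line in text.splitlines():
        name, _, value = line.partition(": ")
        fields[name] = value
    return fields

pkg = parse_stanza(stanza)          # what dpkg reads from the .deb
index = {pkg["Package"]: pkg}       # what apt reads from Packages
print(index["hello"]["Depends"])
# -> libc6 (>= 2.7)
```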
>
>
>
> I am sorry if I sound discouraging. I may well be wrong and you
> should not use my thoughts over yours. I do think there are better
> uses of time and energy, and some are even on the way to the
> universal package, e.g. the LSB and vcs-pkg.org. Maybe it would pay
> off to have a look there to see if you can find a way to approach
> your ideas in a more bottom-up fashion?
>
> --
> martin | http://madduck.net/ | http://two.sentenc.es/


More information about the Distributions mailing list