[Xcb] [ANNOUNCE] xcb-util 0.3.9

Josh Triplett josh at joshtriplett.org
Wed Jun 6 15:13:16 PDT 2012

On Wed, Jun 06, 2012 at 02:10:47PM -0700, Jeremy Huddleston wrote:
> On Jun 6, 2012, at 4:04 AM, Josh Triplett <josh at joshtriplett.org> wrote:
> > On Tue, Jun 05, 2012 at 10:03:44PM -0700, Jeremy Huddleston wrote:
> >> On Jun 5, 2012, at 6:35 PM, Josh Triplett <josh at joshtriplett.org> wrote:
> >>> I agree with your statement that from a functional standpoint this holds
> >>> true: the linker doesn't seem to enforce the minor version rule, so you
> >>> can build against a newer library and run with an older one, or vice
> >>> versa, as long as the major version matches.  The linker will complain
> >>> if you use a symbol that it can't resolve, though.
> >> 
> >> As it should (well unless the symbol is weak and can be checked for at
> >> runtime).
> > 
> > True, though for most symbols that doesn't make sense, since you'd have
> > to write things like "if (!xcb_useful_function) aieee();". :)
> Well, if aieee() is really needed, then you wouldn't check for it, you'd
>  just use it and bail out with the dynamic linker complaining that it
> couldn't resolve.

You don't necessarily have that option with a weak symbol.  Unless you
mean that the program can, on a symbol-by-symbol basis, choose whether
to use the symbol as weak or non-weak?  That seems feasible, but not
compatible with how most library header files normally define function

> If you can avoid using it, you would do something like:
> if (strlcpy) {
>    strlcpy(...);
> } else {
>    strncpy(...);
>    ...;
> }

In that case, I'd suggest just using strncpy unconditionally, or writing
your own version of strlcpy with a compatible interface and linking it
in if libc doesn't have one.  I tend to subscribe to the Linux kernel's
style of never including #ifdef in .c files, and I consider code like
the above gross for similar reasons; it strongly suggests the need for
an abstraction layer to not have to deal with that at each call site.

> > Better to use symbol versioning or similar to let the dynamic linker
> > tell you at the start of your program that a symbol doesn't exist.  
> Why?  If you can do something in the case that it doesn't exist, that
> should be an option.

That falls under the case I mentioned below ("Or, if you really have
written your program so that you can cope with the absence of some
functionality,").  That doesn't represent the common case, though.

> > Or,
> > if you really have written your program so that you can cope with the
> > absence of some functionality, consider either using dlopen/dlsym to
> > make that explicit or otherwise having a way to easily verify the
> > functionality you need without having to test every symbol for NULLness.
> No, that's a horrible solution.  The code snippet above (for strlcpy)
> makes it easily accessible for developers.  Doing something with dlsym
> is ugly in comparison (and IMO would just cause developers to NOT use
> the new functionality):
> size_t (*strlcpy_maybe)(char *, const char *, size_t);
> strlcpy_maybe = dlsym(RTLD_DEFAULT, "strlcpy");
> if (strlcpy_maybe) {
>    strlcpy_maybe(...);
> } else {
>    strncpy(...);
>    ...;
> }

People expect that dlsym might fail.  For the most part, people *don't*
expect that a function defined in a header file might point to NULL;
they'll just call it, and segfault when it points to NULL.  Plus, if you
use dlopen/dlsym, you can cope with the complete absence of a library on
the system.

If you define an entirely new interface, you can define it using weak
symbols, but programmers will still trip over it unless you provide a
convenient way to say "no, really, I don't want to do runtime
detection, I just want to refuse to run if the functionality I expect
doesn't exist".

Among other things, I'd rather have an interface like the syscall
interface, where calling a non-existent syscall *works* and produces
ENOSYS.  Then, code that always says "die if the syscall fails" will
die, and code that uses the syscall for optional functionality will
gracefully fall back.  More importantly, I then don't need a conditional
at every call site.

> >>> In particular, the minor version serves as a hint to the programmer that
> >>> if they link against libABC.so.1.1, they might or might not successfully
> >>> run against libABC.so.1.0, depending on what symbols they used.
> >> 
> >> IMO, that should be annotated in header files in a way that allows those
> >> symbols to be weak linked and checked for at runtime (and thus go down an
> >> alternative codepath if unavailable).
> > 
> > Not unless that gets wrapped in some kind of interface that avoids the
> > need to check all used symbols against NULL before using them; I'd
> > prefer to make that the dynamic linker's job.
> Yes, the dynamic linker will bail on you if you try to actually *use* the
> symbol, but you still need to check it.  I'm not sure what kind of interface
> you want.  This seems rather straightforward to me:
> if (strlcpy) {
>    strlcpy(...);
> } else {
>    strncpy(...);
>    ...;
> }

No, I'd rather the interface looked like this:


Or, if I don't want to count on that, and I don't want to provide a
compatible strlcpy replacement via autoconf or similar:


That applies to the case where I need something with strlcpy's
functionality unconditionally, and only the implementation varies.  In
the case of something like XInput 2 support, a library that wants to use
XInput 2 iff available could use dlsym to use it conditionally (in which
case they work even with the library unavailable).  Such a library could
also use weak symbols, though that seems both more error-prone and more
difficult to specify in a header file (you'd need either separate
headers for weak and non-weak usage or some kind of #define WEAK_XI2
before including the header file).

Personally, I'd rather have a wrapper interface that looks like
unconditional function calls with error handling, rather than function
calls conditional on the function pointer itself.  Any approach that
makes every program and library author write their *own* wrappers seems
like a problem; why force everyone to write duplicate code for the
common case?

But in preference to all of those approaches, I'd rather just require
XI2 support in the library, and only conditionally handle the case where
the server doesn't have it.

In any case, this seems like a far tangent from the issue of removing
symbols from a library. :)

> >>> Removing or changing symbols
> >>> breaks that assumption; adding symbols doesn't.
> >>> 
> >>> Libraries typically introduce symbol versioning to handle more complex
> >>> cases that the linker can't catch on its own, namely a *change* to an
> >>> existing symbol, such as by adding a new parameter to a function, or
> >>> more subtly by changing the layout of a data structure used by a
> >>> function.  In that scenario, the linker will see no problem, but the
> >>> program will break at runtime, precisely as if you cast a function
> >>> pointer to a different type with different parameters and tried to call
> >>> it (because, effectively, you did).  Symbol versioning lets you maintain
> >>> ABI (though often not API) compatibility in this case, by having old
> >>> programs use a compatibility symbol that knows how to handle the old
> >>> calling signature.
> >> 
> >> Yeah, we essentially have that same mechanism in place, usually for
> >> cancelable/non-cancelable variants, legacy vs UNIX-conforming variants,
> >> changes to 'struct stat', ...
> > 
> > Any notable differences between ELF symbol versioning and OS X dylib
> > symbol versioning?
> There is quite a bit of difference, but the high level idea is the same.
> First of all, we don't actually use versions.  We use variants.  For
> example:
> $ nm -arch i386 /usr/lib/system/libsystem_c.dylib | grep nftw
> 000aa2e4 T _nftw
> 0000b7a0 T _nftw$INODE64$UNIX2003
> 000b207c T _nftw$UNIX2003
> _nftw is the legacy function which is not SUSv3 compliant and uses the older struct stat.
> _nftw$UNIX2003 is SUSv3 compliant, and it uses the older struct stat.
> _nftw$INODE64$UNIX2003 is SUSv3 compliant, and it uses the newer struct stat.
> The developer determines which version they want at build time.  If they
> want their binary to run on Tiger (which didn't have the new struct stat
> or compliant function), then the output asm will use nftw.  If they are
> building for the current OS (and don't explicitly request the older struct
> or legacy behavior), then the compiler will output asm that uses
> _nftw$INODE64$UNIX2003.  This is all managed by the C preprocessor and
> header file goop.
> With ELF, it's my understanding that the versioning is handled by the
> linker.  At link time, the linker will see that the input object uses
> funcC and that it's provided by libC, and that libC has that symbol
> versioned as 1.2, so the linker then rewrites funcC to funcC@LIBC_1.2.

True, but you can still use header file magic to affect that; you can
just make funcC reference a different underlying symbol or symbol
version depending on what variant you want.

> Symbol versioning is very useful for dealing with the flat namespace
> problem.  For example, consider an application that links against libA
> and libB.  libA links against libC.1 and libB links against libC.2. 
> Both libC.1 and libC.2 provide different versions of funC().  In a flat
> namespace without versioning, this situation would not work. funC@LIBC
> would collide.  ELF solves this by versioning the symbols in the global
> symbol list.  On OS X, we use a 2-level namespace, so versioning isn't
> necessary.

Interesting.  So, libA will reference "funC in libC.1" and libB will
reference "funC in libC.2", using a namespacing mechanism orthogonal to
symbol versioning.

- Josh Triplett
