stripping off "xf86-*-" from drivers

Luc Verhaegen libv at skynet.be
Sun Jan 20 21:00:39 PST 2008


On Sun, Jan 20, 2008 at 06:55:36PM -0500, Bernardo Innocenti wrote:
> Luc Verhaegen wrote:
> 
> >>Even so, I really like Dave's idea.
> >
> >Why? How can anyone agree with "Let's go monolithic again so that I 
> >still do not have to bother in any way about being slightly backwards 
> >compatible"?
> 
> Because less backwards compatibility constraints result
> in more useful development.

Backwards compatibility does not have to stop innovation; in reality 
the two have nothing to do with each other. It is perfectly possible 
to add new functionality while still keeping some old functionality 
working in a different build environment.

All it usually takes is a slight bit more thought.

All it takes is the willingness to do such things.

> >>The input driver breakage that was measured in years would never have
> >>happened if all the drivers were in the tree and typing "make" would
> >>show everything that breaks from an API change.
> >
> >libpciaccess.
> >
> >I don't think there has been anything worse than this.
> 
> Putting blame on Ian Romanick for breaking legacy drivers
> such as sunleo and chips is, to say the least, unfair.
>
> If you cared about one of the broken drivers for any reason,
> you should have posted patches like David Miller did.
> Preferably around the same time the pci rework was being
> done, working in a branch.

I was the person who went into atimisc and fixed up its probe routine 
so that it would use PCI IDs sanely, at a time when nobody else dared 
to go near it.

> If we agree that cleanups/refactorings such as the
> pci-rework one are generally useful, then we shouldn't
> discourage whoever volunteers to do them by putting
> excessively strict conditions on them.

This is not excessively strict. This is normal. You break it, you fix 
it; that is how it goes everywhere.

> As my boss often says, "perfect is the enemy of good".

Good is not breaking everything, and then running away.

> >This is one part of the solution. Tinderbox should be revived, and those 
> >who commit stuff to master or a release branch need to get blamed for 
> >any build breakage. This will solve a lot of the issues already. A lot, 
> >but not all.
> 
> I agree with the first part of your proposal: a regression
> testsuite, along with a functional tinderbox, is an invaluable
> tool to monitor regressions and track them to individual
> patches.
> 
> I strongly disagree on the notion that we should blame
> hard-working people whenever they cause intentional or
> unintentional breakage: some amount of care is certainly
> required, and a good review process would greatly help...

I do not think there was any review process, and the way this discussion 
is going, I do not think that any review process is going to be put in 
place either.

> *BUT* those negative feedback practices are known to stifle
> development by putting the most blame exactly on those who would
> otherwise become the most active and bold contributors.

Activity and boldness are such very subjective measures.

Often such activity and boldness tend to be measured by the size of the 
statements one makes on IRC.

> Many years ago, I worked on a popular OS project
> with this same total-stability attitude.  It has now been
> surpassed in popularity and almost killed off by the maybe
> imperfect, but very dynamic, Linux.
> 
> Excessive stability is death :-)

Excessive instability is death as well. This is about striking a 
balance, not about swinging all the way to one side, whichever side 
that is.

> >Right. This is the real problem. People are only developing for the tip 
> >of everything else, instead of trying to be slightly backwards 
> >compatible, where possible. Where this is not possible, a choice needs 
> >to be made, either do not compile the new features that have a new 
> >dependency, or to stop building against this.
> 
> I find Xorg *extremely* backwards compatible compared
> to anything else I've ever seen.
> 
> We only drop features years after they have been deprecated, even
> if there is no known code out there using them.  The wire protocol
> has remained compatible for over 20 years.  We still support
> pre-xkb keyboards just for the sake of some obscure UNIX vendor
> which has not even bothered to update their keyboard driver in
> 10 years.  We have had *working* drivers for some ISA chipsets
> until recently.  We're still considering pre-C99 compilers and
> even leave the #ifdefs for the Cray around!
> 
> Is this amount of backwards compatibility insufficient?

Backwards compatibility is not about doing it in certain places and 
not doing it in others. It is about doing it for a given set of versions 
for your dependencies, as a whole. It is about a slice of time, not 
about a slice of space.

> >But in any case, it pays to have a window of compatibility. This way 
> >slightly older servers can benefit from the hardware support new drivers 
> >offer. Or this way the server can still build even though the system 
> >doesn't have some unreleased version of some library yet.
> 
> Granted... maybe the pci rework could have used a few
> wrappers in the server.  I don't know if it was technically
> practical, but I guess Ian would have considered it.
> 
> 
> >Twini knew this for his sis driver, which, at the time, was buildtime 
> >compatible all the way to xfree86 4.2.0. My unichrome code did so up to 
> >4.3.0 (debian sarge). And now radeonhd goes back to X.org 6.9, and if we 
> >adjust a buildflag, earlier too. With only a minimal amount of work.
> 
> Whoever is running a museum is free to backport drivers to
> xfree86 4.2 and even make them work on the Cray.  These things
> are nice to have and welcome in the tree, but not as a drag
> for people working on new features for the remaining 99.9%
> of the user base.

Twini gave up on X development in late 2005. As for me: 4.3 is used on 
Debian sarge, which will be fully deprecated later this year or early 
next year. I just do not have a reason to go in and remove support for 
4.3, as it is not getting in the way of anything.

As I said: a small bit of extra work, kept up over many, many years. It 
really is trivial; it just requires that one sees it as a worthwhile 
task.

But I guess one actually has to have done such work to realise how easy 
it really is. And I cannot expect people to give up on unfounded fears 
just because somebody with relevant experience says so.

> >For drivers, 6.9 is not too insane either, it is actually still being 
> >shipped and its use is still widespread. Giving it newer hardware 
> >support, even with reduced functionality, is a must. And even with the 
> >reduced functionality, it always beats running vesa.
> 
> A must for whom?  If there are crazy people who try to plug a
> brand new video card into a computer with an OS from 2005
> and expect it to magically work, it is their problem, not ours.

Who do you work for, and do you ever expect to work for a Linux vendor 
that actually does enterprise support?

Not that that should really be the reason; both Twini and I did not 
care about enterprise lifecycles. We just wanted our drivers to be 
useful for everybody, and we quickly found out how little work that 
really took.

> Ironically, they could easily upgrade the monolithic Linux
> kernel to support any other piece of hardware.  But for a
> graphics card, somehow they need to update just the driver
> without touching the X server.  Wouldn't it be easier for
> both the users and the developers if they could just update
> the whole thing?

Which other bits does the X server depend on? Are there hard version 
dependencies here, or does the server have a window of compatibility?

And where does this lack of compatibility stop? Right above libc and the 
kernel, I guess.

The driver is the one bit over which the driver developer has control. 
And if the driver developer does the work needed to make his driver 
build-compatible with most of the servers out there, then he can satisfy 
most of his users without giving them a massive headache.

And, in my experience, when applying good judgment, it does not require 
much time to keep a driver compatible.

But it seems very clear that only few have had this experience.

> The X wire protocol is already a natural barrier where backwards
> compatibility can be indefinitely maintained with a small cost.
> Very similar to the kernel syscall interface.

Heh. I fail to see how the wire protocol is even remotely relevant here.

Luc Verhaegen.
SUSE/Novell Driver Developer.


