stripping off "xf86-*-" from drivers
Bernardo Innocenti
bernie at codewiz.org
Sun Jan 20 22:31:52 PST 2008
Luc Verhaegen wrote:
> On Sun, Jan 20, 2008 at 06:55:36PM -0500, Bernardo Innocenti wrote:
> Backwards compatibility does not have to stop innovation. Neither has
> in reality got anything to do with each other. It is perfectly possible
> to add new functionality while still keeping some old functionality
> going in a different build environment.
>
> All it usually takes is a slight bit more thought.
>
> All it takes is the willingness to do such things.
It's not that easy.
The Linux USB stack has been largely rewritten twice, along
with all its drivers. The 802.11 stack was recently
replaced, and all drivers ported to the new one.
I think this is the second time it has happened to the
wi-fi stack, actually.
On Windows, for obvious reasons neither USB nor 802.11
could ever be rewritten, and it shows. At least the
USB stack is said to be terribly inefficient and limited.
If you meet some Windows driver developers, offer them
a beer and ask to hear their amusing horror stories.
Things like the pci-rework can simply never happen on
systems where the ABI needs to remain stable indefinitely.
Actually, I lied: even Windows has had to break its driver
API completely a few times: .sys, .i386, .vxd, .mdp...
and a few others I missed. Look: they seem to break all
the drivers on every major OS release!
Hmm... I guess it's not been exactly "a slight bit
more thought" for them :-)
>> As my boss often says, "perfect is the enemy of good".
>
> Good is not breaking everything, and then running away.
Touché :-)
> I do not think there was any review process, and the way this discussion
> is going I do not think that any review process is going to be installed
> either.
The wiki says each patch must be reviewed before being
committed. If people are not following this practice,
we should either yell at them or decide we do not care
about code reviews and update the documentation accordingly.
In the past I've had very little success getting my
patches reviewed on this list. I've been quite persistent,
but nobody would care (except Michel Dänzer in one case).
I can understand how at some point one would lose faith
and start committing to git directly.
Occasional contributors just stop sending patches instead,
which may partially explain why there's such a big disparity
in the number of developers between Xorg and the Linux kernel.
So, if you care about code reviews, the first move could
be volunteering to do them systematically for the specific
areas where you feel qualified.
> Activity and boldness are such very subjective measures.
>
> Often such activity and boldness tend to be measured by the size of the
> statements one makes on irc.
Come on... I know of few people who have had the guts to rework
the internals of the X server, and Ian was one of them.
I guess it's not such a rewarding and joyful exercise :-)
>> Excessive stability is death :-)
>
> Excessive instability is death as well. This here is about being more
> balanced, not about being all to one side, whichever side that is.
Agreed. Oh! It seems we've accidentally re-discovered
Kauffman's law of life at the edge of order and chaos.
I'm not shocked at all to see it also applies to software
projects.
> Twini gave up on X development in late 2005. As for me; 4.3 is used on
> debian sarge, which will be fully deprecated later this year or early
> next year. I just do not have a reason to go in and remove support for
> 4.3, as it is not intruding on anything.
>
> As said, a small bit of extra work, kept up over many many years. It
> really is trivial, it just requires that one sees it as a worthwhile
> task.
>
> But I guess one actually should've done such work to realise how easy it
> really is. And I cannot expect people to just give up on unfounded fears
> just because somebody with relevant experience says so.
I have nothing against keeping simple, non-intrusive
backwards compatibility bits (although backporting
patches is also possible).
That's not the case with pci-rework, MPX, XACE, etc.
But if you think it could be done *easily* there too,
you should say so when the next jumbo patch appears
on the list for review.
Ok, first we'd have to re-establish the practice of
properly reviewing code, or the nasty patches will
continue to land undisturbed in the tree under our
noses :-)
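For reference, the kind of "simple bit" I have in mind is the
usual build-time switch drivers already carry for the
pci-rework transition. Roughly like this (a sketch only: the
EXAMPLE_* names are made up, and the old pciVideoRec field
name is from memory):

    #include "xorg-server.h"  /* defines XSERVER_LIBPCIACCESS on servers built with libpciaccess */
    #include "xf86.h"
    #include "xf86Pci.h"

    #ifdef XSERVER_LIBPCIACCESS
    /* post pci-rework servers hand the driver a struct pci_device */
    # include <pciaccess.h>
    typedef struct pci_device *ExamplePciPtr;
    # define EXAMPLE_VENDOR_ID(p) ((p)->vendor_id)
    #else
    /* older servers still expose the old PCI layer */
    typedef pciVideoPtr ExamplePciPtr;
    # define EXAMPLE_VENDOR_ID(p) ((p)->vendor)
    #endif

A handful of ifdefs like this per driver is cheap; the trouble
starts when the interfaces that change are the big, deeply
shared ones.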
> Who do you work for, and do you ever expect to work
> for a linux vendor who actually does enterprise support?
The idea of maintaining legacy drivers for an enterprise
distribution never occurred to me, but I think I'd
do it by backporting critical fixes rather than trying
to get the git snapshots to build on legacy X servers.
> But not that that really should be the reason, both twini and i didn't
> care about enterprise lifecycles. We just wanted our drivers to be
> useful for everybody, and we quickly found out how little work that
> really took.
So I think we have a solution to the driver breakage
problem: you're willing to maintain the old drivers when
they break, and you find it takes only a small amount of
work to do so.
> Which other bits does the xserver depend on? Are there hard version
> dependencies here, or does the server have a window of compatibility?
I've found that the X server has surprisingly few deps.
That is, until we suddenly made it depend on openssl a
few months ago! :-)
Seriously, I think you'd be fine with a bunch of protos,
libXfont, libdrm, and now of course also libpciaccess.
> And where does this lack of compatibility stop?
> Right above libc and the kernel i guess.
It is *much* easier to maintain compatibility there,
because those APIs are mere collections of functions
with few data structures exchanged across the fence.
Yet, this still comes with a cost in terms of performance
and extensibility: we'll have to live forever with many
thread-unsafe libc functions and a bunch of dead syscalls.
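The classic example is strtok(): it keeps hidden static state
between calls, so it can never be made thread-safe without
breaking existing callers, and strtok_r() had to be added
alongside it; both now have to be carried indefinitely. A
trivial illustration:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char a[] = "one,two", b[] = "x;y";
        char *save;

        /* non-reentrant: the position is hidden in static state */
        char *t1 = strtok(a, ",");
        /* reentrant variant added later; the old one stays forever */
        char *t2 = strtok_r(b, ";", &save);

        printf("%s %s\n", t1, t2);
        return 0;
    }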
Xorg drivers need to mess around with several large data
structures of the core server. How can we possibly
maintain binary compatibility while still evolving them
independently?
It may be possible, but it's neither easy nor convenient.
It's exactly the same problem described in the kernel's
Documentation/stable_api_nonsense.txt.
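To make the problem concrete, here's a toy example (nothing to
do with the real server structures) of what happens when a
struct that drivers poke at directly grows a new field:

    #include <stdio.h>
    #include <stddef.h>

    /* The layout a driver binary was compiled against... */
    typedef struct {
        int   depth;
        void *devPrivate;
    } OldScreenRec;

    /* ...and the layout after the server inserted a field. */
    typedef struct {
        int   depth;
        int   newFeatureFlags;
        void *devPrivate;
    } NewScreenRec;

    int main(void)
    {
        /* The old offset is baked into the driver at compile
         * time, but the running server keeps the field elsewhere. */
        printf("driver expects devPrivate at %zu, server has it at %zu\n",
               offsetof(OldScreenRec, devPrivate),
               offsetof(NewScreenRec, devPrivate));
        return 0;
    }

Every field that gets added, removed or reordered in a shared
structure like this is an ABI break for every driver compiled
against the old layout, which is exactly why the kernel refuses
to promise a stable module ABI.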
> The driver is that bit over which the driver developer has control. And
> if the driver developer does the work needed to make his driver build
> compatible between most of the servers out there, then he can satisfy
> most of his users, without giving them a massive headache.
>
> And, in my experience, when applying good judgment, it does not require
> much time to keep a driver compatible.
>
> But it seems very clear that only few have had this experience.
The "good judgment" part is what I don't trust. The amount
of judgment required to do a good job involves seeing into
the future.
How could the 802.11 hackers foresee mesh networking?
How could the XAA designers foresee 256MB framebuffers and
the Porter-Duff rendering model that would be adopted
10 years later?
I could google for a dozen threads on the LKML
where this same argument was discussed, always reaching
the conclusion that the only viable option is not even
trying to publish a stable API for modules.
>> The X wire protocol is already a natural barrier where backwards
>> compatibility can be indefinitely maintained with a small cost.
>> Very similar to the kernel syscall interface.
>
> Heh. I fail to see why the wire protocol is even remotely valid here.
That was to say that if we replace the X server altogether
to refresh the drivers, the clients would not be disturbed.
--
\___/
|___| Bernardo Innocenti - http://www.codewiz.org/
\___\ One Laptop Per Child - http://www.laptop.org/