Backing out DRI2 from server 1.5
krh at bitplanet.net
Tue Aug 5 12:54:15 PDT 2008
Oops, just realized that my reply didn't go where I thought it did...
On Tue, Aug 5, 2008 at 2:11 PM, Keith Packard <keithp at keithp.com> wrote:
> On Tue, 2008-08-05 at 08:39 +0100, Alan Hourihane wrote:
>> It seems as though the memory managers are going to be driver specific
>> at this time, so we can't have the Xserver relying on a specific one.
> Yup. There wasn't any reason to make DRI2 TTM-specific, aside from the
> desire to eliminate a bunch of otherwise unneeded DRM api. I think using
> TTM or GEM objects to hold data shared only between Linux processes is
> not a good idea anyway.
>> Maybe we should have some callbacks to the driver for DRI2 specific
>> handling ?
> The only TTM/DRI2 interaction was in the allocation of the shared area.
> Using shm_open or the old drmAddMap API will make it MM-independent
> without requiring any callbacks. I don't know which one Kristian is
> planning on using today.
I've just started to look into this again, and while the main change is
to make it memory manager agnostic, there are a couple of other things
I'd like to change at this point:
1) With DRI2, I kept the buffer swap in the client since I didn't
want to incur a server request to do it.  That decision meant that
we had to keep much of the complexity for synchronizing clip rects
between the server and DRI clients in place.  What I realized in the
meantime is that we always send a few requests to post damage after
each buffer swap anyway, so introducing a DRI2 request that does the
swap and posts the damage shouldn't affect performance but will make
everything much simpler.  It will also eliminate the need for the DRI
lock, which for DRI2 was only used to synchronize access to cliprects.
2) Now that we don't need to communicate cliprects to the DRI
clients, the somewhat complex DRI2 sarea and event buffer become a
little harder to justify, as we only use them to detect changes in
the attached buffers.  George's swrast DRI driver uses a simpler
approach there: he hooks the dd_function_table::Viewport function and
asks the loader for the drawable size.  I'd like to do something
similar for DRI2, which will completely eliminate the need for the
sarea.  The DRI2 DRI driver will ask the loader (libGL, which will
forward the query over protocol, or AIGLX, which will ask the DRI2
module directly) for the dimensions and memory manager buffers backing
the current drawable.  This costs a roundtrip, but that was part of
the old design too and is inherent in GLX, in that multiple DRI
clients need to agree on the memory manager buffers backing the aux
renderbuffers.  Thus you need to go to the X server one way or the
other.
3) Let the DDX driver allocate the auxiliary buffers.  I went back and
forth on this a bit, and in some sense it's an arbitrary decision: both
the DDX and the DRI drivers know enough about the hardware to allocate
buffers with the right stride/tiling/etc. properties.  Doing it in the
DDX means that the DRI driver needs to tell the DDX driver which
buffers to allocate (using the DRI2CreateWindow request), but on the
other hand it avoids tricky allocation races when multiple DRI clients
render to the same drawable.  And without the sarea, doing it in the
client would incur an extra roundtrip: you would first have to ask the
server for the drawable size, then allocate the buffers and tell the
server about them.  Server-side allocation also lets the DDX driver
implement special cases, such as allocating a full screen back buffer
with the right properties to be used as a scanout buffer for page
flipping, which in turn becomes a lot simpler when the buffer flip
happens in the X server.  And for redirected windows, the back buffer
can be another pixmap, so that buffer flips can be implemented by
setting a different window pixmap.
This all sounds like a lot of work, but it's mostly simplifications,
and I expect to make some good progress on it this week.  In the
meantime I'll drop the DRI2 bits from xserver 1.5 and mesa 7.1.