[Xorg] Re: Damage/Composite + direct rendering clients
Jim Gettys
Jim.Gettys at hp.com
Mon May 17 13:55:04 PDT 2004
On Mon, 2004-05-17 at 16:03, Andy Ritger wrote:
> > > 2) some damage occurs, composite manager sends composite request,
> > > additional rendering is performed, part of which the composite
> > > operation picks up, but the rest of the rendering is not
> > > composited until the next "frame" of the composite manager,
> > > and we see visible tearing.
> > >
> > > Consider this example: a translucent xterm partially overlaps
> > > glxgears. If the xterm is damaged, and the composite manager
> > > requests a composite, and then glxgears is updated (between
> > > when the composite request is sent, and when the composite
> > > operation is performed), then the part of the glxgears beneath
> > > the xterm will be composited this frame of compositing. Later,
> > > the composite manager will receive a damage event for glxgears,
> > > and will composite, causing the visible screen to be brought
> > > up to date. But in the period of time between the first and
> > > second composites, glxgears will tear.
> > >
> > > The above xterm+glxgears scenario is not limited to direct
> > > rendering clients. The same should be reproducible with any
> > > regular X rendering -- there is a race between when the
> > > composite manager retrieves the damage region(s), when it
> > > sends the composite requests, and any rendering protocol
> > > (or direct rendering) that is processed in between.
> > >
> > > It seems that the complete solution would be for the composite
> > > manager to perform an XGrabServer(3X11) before retrieving the
> > > damage regions, then send the compositing requests, and then
> > > XUngrabServer(3X11). Unfortunately, that seems very heavyweight.
> > > On the other hand, it may ensure faster compositing by effectively
> > > raising the priority of the composite manager's protocol while all
> > > other X clients are locked out.
> > >
> > > Some may be inclined to accept the tearing rather than pay for
> > > the heavyweight operation of grabbing/ungrabbing around every
> > > compositing frame. For X clients, that may be OK, but I expect
> > > the tearing will be much more pronounced with OpenGL clients,
> > > because they are, by nature, animating more of the time.
> > >
> > >
> > > Perhaps the best solution is to introduce two new requests to the
> > > Composite extension: a "BeginComposite" and an "EndComposite" that
> > > composite managers would call, bracketing their compositing requests.
> > > The X server would dispatch these requests into the X driver.
> > > This would give vendors the flexibility to perform any necessary
> > > synchronization to protect against the above race conditions.
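For concreteness, the grab bracketing Andy describes above would look
roughly like this in a compositing manager's repaint path. This is only a
sketch; the two helper functions are hypothetical placeholders for the
manager's own damage-fetch and composite code:

#include <X11/Xlib.h>

/* Hypothetical placeholders for the compositing manager's own logic. */
static void fetch_damage_regions(Display *dpy)      { (void)dpy; /* e.g. XDamageSubtract per window */ }
static void composite_damaged_windows(Display *dpy) { (void)dpy; /* e.g. XRenderComposite per window */ }

static void
composite_one_frame(Display *dpy)
{
    /* Lock out all other clients so nothing can render between the moment
     * the damage regions are read and the moment our composite requests
     * are executed by the server. */
    XGrabServer(dpy);

    fetch_damage_regions(dpy);
    composite_damaged_windows(dpy);

    XUngrabServer(dpy);
    XFlush(dpy);   /* push the whole bracketed sequence out promptly */
}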
> >
> > My thoughts are coming at this from a different but related direction
> > than yours: the case of an application updating the state of its
> > window(s), where the goal is to minimize flashing.
> >
> > The thought I've had on this topic is to use an XSync counter: if the
> > counter is even, the contents of the window are stable; if it is odd,
> > they may be in flux. Incrementing a counter is very fast.
> >
> > This might also fold into XSync counters for vertical retrace, as per
> > the original XSync design/implementation (not implemented on Linux,
> > though recently some work has been started).
> >
> > A similar mechanism might be usable for DRI synchronization, giving
> > us a common synchronization framework, both for DRI synchronization
> > and for application update synchronization.
> >
> > I suspect some tweaks to XSync may be necessary to get all this to work.
>
> Thanks, Jim. That sounds interesting. So an app would increment
> a counter for a window, indicating the window is in flux, and then
> increment the counter again when it is done updating the window?
Yes. Exactly.
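To make that concrete, here is roughly what I have in mind on the
application side, using the existing SYNC extension client API. This is
only a sketch: how the counter ID gets advertised to the compositing
manager (presumably via a window property) is not worked out here, and
the function names are just for illustration.

#include <X11/Xlib.h>
#include <X11/extensions/sync.h>

/* Even counter value = window contents stable, odd = in flux. */
static XSyncCounter update_counter;

static Bool
init_update_counter(Display *dpy)
{
    int event_base, error_base, major, minor;
    XSyncValue zero;

    if (!XSyncQueryExtension(dpy, &event_base, &error_base) ||
        !XSyncInitialize(dpy, &major, &minor))
        return False;                             /* no SYNC extension */

    XSyncIntToValue(&zero, 0);                    /* start even: stable */
    update_counter = XSyncCreateCounter(dpy, zero);
    return True;
}

/* Call once before starting to update the window (counter goes odd) and
 * once again when the update is complete (counter goes even). */
static void
bump_update_counter(Display *dpy)
{
    XSyncValue one;

    XSyncIntToValue(&one, 1);
    XSyncChangeCounter(dpy, update_counter, one); /* adds 1 to the counter */
    XFlush(dpy);
}

The only per-update cost to the application is two tiny ChangeCounter
requests, which is why I say incrementing a counter is very fast.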
> Is this meant as a hint to the X server to not send any damage
> events for that window until the app indicates the window is in a
> stable state?
No, as Keith says, damage accumulates in the region in the X server
until the time you need to use the region. Any view a client has of the
damaged region is likely to be obsolete by the time it would get it.
So while it is possible for clients to get informed of damaged regions,
it isn't really the main-line case damage was designed for.
What you want to avoid is round trips; the damage accumulation allows
a client (say, the compositing manager) to re-render with the accumulated
damage at the time it operates, rather than having to wait for the
damage regions to be communicated to it first.
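In code, that round-trip-free pattern looks roughly like the following in
a compositing manager. Again a sketch only: error handling and the actual
compositing are omitted, and the function names here are just placeholders
for the manager's own structure.

#include <X11/Xlib.h>
#include <X11/extensions/Xdamage.h>
#include <X11/extensions/Xfixes.h>

static Damage  win_damage;        /* created once per managed window       */
static Bool    needs_repaint;     /* set by events, cleared by the repaint */

static void
track_window(Display *dpy, Window w)
{
    /* NonEmpty reporting: one event per transition from "no damage" to
     * "some damage", which is all a compositing manager needs. */
    win_damage = XDamageCreate(dpy, w, XDamageReportNonEmpty);
}

static void
handle_event(XEvent *ev, int damage_event_base)
{
    if (ev->type == damage_event_base + XDamageNotify)
        needs_repaint = True;     /* don't fetch the region yet */
}

static void
repaint_if_needed(Display *dpy)
{
    if (!needs_repaint)
        return;

    /* Fetch whatever damage has accumulated *now*, atomically clearing the
     * server-side region; no reply is waited on here. */
    XserverRegion parts = XFixesCreateRegion(dpy, NULL, 0);
    XDamageSubtract(dpy, win_damage, None, parts);

    /* ... composite, using 'parts' as the clip region on the server ... */

    XFixesDestroyRegion(dpy, parts);
    needs_repaint = False;
}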
>
> I'm not sure I see how to apply that idea to the #2 synchronization
> problem above... what did you have in mind?
The basic idea of XSync is the ability to block execution on a
connection until a counter gets to a given value. I don't think XSync,
as defined at this instant, has exactly what we need, but I think the
idea may have merit.
In the original XSync design, the idea was that vertical retrace would
be a pre-defined system counter, and on hardware without hardware double
buffering, you could then arrange to do operations during vertical
retrace, avoiding tearing (unless they took too long to complete,
and the scan caught up to the operation(s)).
Fundamentally, we want a set of operations that are held until a certain
event happens (in this case, the window's contents becoming stable again,
whether they were made unstable by an application via X or via DRI), and
that then proceed.
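The closest thing the current SYNC spec offers for that is the Await
request: the compositing manager could queue its composite requests behind
an Await on the window's counter, and the server would hold them until the
counter reaches the "stable" value. Roughly as follows; this is a sketch
under the assumption that 'app_counter' is the counter the application
advertises and 'stable_value' is the even value it will reach when its
current update finishes.

#include <X11/Xlib.h>
#include <X11/extensions/sync.h>

static void
composite_when_stable(Display *dpy, XSyncCounter app_counter, int stable_value)
{
    XSyncWaitCondition cond;

    cond.trigger.counter    = app_counter;
    cond.trigger.value_type = XSyncAbsolute;
    cond.trigger.test_type  = XSyncPositiveComparison;  /* counter >= wait_value */
    XSyncIntToValue(&cond.trigger.wait_value, stable_value);
    XSyncIntToValue(&cond.event_threshold, 0);           /* event reporting unused here */

    /* The Await request makes the server stop executing this client's
     * subsequent requests until the condition is satisfied... */
    XSyncAwait(dpy, &cond, 1);

    /* ...so requests issued here are queued now but only run once the
     * window is stable again (placeholder for the real composite calls). */
    /* XRenderComposite(...); */

    XFlush(dpy);
}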
Certainly, incrementing a counter could cause the
clients that are blocked on that counter to get a chance at the X
scheduler (it might do so already, but that code hasn't been seriously
looked at for a long time, and may not be semantically guaranteed by
XSync's spec right now).
What we need in this case is just that the compositing manager get a
chance to run when the machine is idle, or at least once in a blue moon.
The big issue is preventing starvation: applications driving either
the X server or the graphics engine flat out must not be able to prevent
occasional (60 Hz) updates to the eye candy.
It would be good if we could end up with one general mechanism for
synchronization, to avoid a lot of ad-hoc mechanisms.
I need to go back and swap in my knowledge of XSync, which is about a
decade old.
- Jim
--
Jim Gettys <Jim.Gettys at hp.com>
HP Labs, Cambridge Research Laboratory