[PATCH 2/2] DRI2: Add error message when working around driver bug

Mario Kleiner mario.kleiner at tuebingen.mpg.de
Thu Oct 28 11:20:08 PDT 2010


On Oct 28, 2010, at 6:02 PM, Jesse Barnes wrote:

> On Thu, 28 Oct 2010 18:47:09 +0300
> Pauli Nieminen <ext-pauli.nieminen at nokia.com> wrote:
>>> Most of what you have in (b) is pretty straightforward; even the
>>> shared drawable case shouldn't be too bad, since each X connection
>>> could have bits indicating whether the counter has been picked up
>>> after a CRTC move.
>>
>> One option would be adding a CRTC id parameter to the calls.
>>
>> glXGetMscBaseRateOML would return the rate, the base MSC and the
>> pipe id where this MSC value is valid. All MSC calls would then take
>> the returned pipe id as a parameter. If the pipe id no longer matches
>> the current CRTC, the call would fail.
>>
>> This would allow complex applications to pass the same pipe id to
>> different contexts.
>>
>> The negative side is that the API would have to be changed to
>> include the extra parameter.
>
> Yeah, that would be a good extension though; we may as well expose the
> fact that different display pipes exist on the system, and have
> corresponding MSCs.  Old applications using SGI_video_sync or existing
> OML behavior would work like they do today (with an MSC value that may
> jump, which we could fix with the virtualized count), and new ones
> would be pipe-aware.
>

I also like option b) the most: define a new spec and API instead of
working around the old spec's limitations.

Another way of doing it, in the same spirit, would be some generation
counter. It starts at 1 and increments each time something changes in
the configuration that could influence the display timing and mess with
the schedule the app had in mind: the drawable changing CRTCs, a CRTC
changing its video mode (esp. refresh rate) or configuration
(mirrored/extended desktop, synchronization to other CRTCs, etc.), a
DPMS change, and so on.

A get call could return the current count, and other calls could return
the count that was valid at the time of their processing. E.g.,
intel_swap_events could encode more info like the generation count and
the CRTC id.

Apps could pass the count to the MSC-related functions, and those would
fail on a mismatch with the current count. One could also have a
special "don't care" value (e.g., 0) that says "I don't care about an
isolated glitch, because I'm not prepared to handle it anyway. Just do
something to make sure I don't hang, e.g., fall through a blocking
glXWaitMscOML() call or swap the buffers immediately on
glXSwapBuffersMscOML()".
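To make the idea concrete, here is a minimal sketch of the server-side
check. The names (CONFIG_GENERATION_ANY, config_generation_check) are
hypothetical, invented for illustration; only the semantics above are
assumed: the counter starts at 1, 0 is the "don't care" wildcard, and
any other mismatch fails the request.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical "don't care" wildcard: the app accepts isolated
 * glitches and just wants calls to never hang. */
#define CONFIG_GENERATION_ANY 0

/* Hypothetical check: return 1 if an MSC-related request carrying the
 * client-supplied generation should proceed, 0 if it should fail
 * because the display configuration changed underneath the app. */
static int config_generation_check(uint32_t current, uint32_t requested)
{
    if (requested == CONFIG_GENERATION_ANY)
        return 1;                 /* app opted out of strict checking */
    return requested == current;  /* fail on any timing-relevant change */
}
```

An app that cares about exact timing would re-query the current count
after a failure; a video player that only cares about not hanging would
pass CONFIG_GENERATION_ANY.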

I'm also for exposing more information rather than less, like the pipe
configuration, or the ability for the app to decide what it wants,
e.g.:

* If the drawable covers multiple CRTCs, or is in mirror display mode,
should one of them define when to swap while the others show tearing,
or should each of them sync its swap separately? The latter looks
nice, but can throttle redraw rates or possibly exhaust resources if
the CRTCs run at largely different rates.

Windows has the concept of a "primary display" which defines the swap
timing on extended desktops; the non-primary display just shows
tearing. Mac OS behaves similarly, except that you don't have control
over which is the primary display: some (sometimes buggy) heuristic
decides for you and gives you the fun of working around it by
replugging monitors and other tricks. I like control.

The current intel and radeon DDX in page-flipped mode will swap each
CRTC separately. Tear-free, but with a throttled framerate, as swap
completion == swap completion of the last involved CRTC. This, btw.,
is a problem for the returned timestamps and timing if the CRTCs run
at different refresh rates, as the app doesn't know which CRTC the
swap completion timestamp belongs to, and that can change over time.
For blitted copy-swaps you get tear-free output on the drawable's
assigned CRTC and tearing on the others.

Another approach would be to define swap times in system
(gettimeofday()) time. Specify a swap deadline tWhen, and the system
tries to swap at the earliest vblank with a time tNow >= tWhen. Then
one doesn't have to care too much about changes in MSC rates. The
NV_present_video extension
<http://www.opengl.org/registry/specs/NV/present_video.txt> does
something similar for presentation of video buffers. My own toolkit
does this as well, and for user code it's a natural way to specify
presentation times, especially if it has to synchronize presentation
with other modalities like sound, digital i/o, eye tracking, etc. My
code just uses the glXGetSyncValuesOML() call to translate a
user-provided system time tWhen into a corresponding target_msc for
glXSwapBuffersMscOML().
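The translation itself is just arithmetic on the (ust, msc) pair that
glXGetSyncValuesOML() returns. A sketch, with the GLX calls replaced by
plain parameters so the rounding is easy to see (the function name and
signature are my own, not part of any spec):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: translate a wall-clock deadline into an OML target MSC.
 * In real code, ust and msc would come from glXGetSyncValuesOML()
 * and refresh_usec from the CRTC's refresh rate; here they are plain
 * numbers (all times in microseconds). */
static int64_t deadline_to_target_msc(int64_t ust,          /* UST of a recent vblank */
                                      int64_t msc,          /* MSC at that UST */
                                      int64_t refresh_usec, /* vblank period */
                                      int64_t twhen_usec)   /* requested deadline */
{
    if (twhen_usec <= ust)
        return msc;  /* deadline already passed: swap at next opportunity */

    /* Round up: target the first vblank with time tNow >= tWhen. */
    return msc + (twhen_usec - ust + refresh_usec - 1) / refresh_usec;
}
```

The resulting value would then be passed as target_msc to
glXSwapBuffersMscOML(dpy, drawable, target_msc, 0, 0).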

While we're at defining new API (Christmas time is coming; I have lots
of wishes), a new swapbuffers call could also define what to do if a
presentation deadline can't be met. E.g., instead of a delayed swap,
it could drop the swap and completely skip a bufferswap request to get
presentation timing back on schedule, so skipped-frame errors can't
accumulate. Something like that could be interesting for video players
in combination with n-buffering: a player could queue up multiple
frames and tell the implementation what to do on frame skips.
Occasional skipped frames may be mildly annoying, but losing
audio-video lip-sync is much worse.
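The "drop instead of delay" policy for an n-buffered queue can be
sketched in a few lines. This is purely illustrative (the function is
hypothetical, not an existing interface): given queued per-frame
deadlines in ascending order, present the newest frame that is already
due and drop the stale ones, so a late frame never pushes every later
frame off schedule.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: pick which queued frame to present.  Returns
 * the index of the newest frame whose deadline has arrived (frames
 * before it are dropped, not shown late), or -1 if no frame is due
 * yet.  Deadlines are assumed queued in ascending order. */
static int pick_frame_to_present(const int64_t *deadline_usec, int nframes,
                                 int64_t now_usec)
{
    int pick = -1;  /* -1: nothing due, keep showing the current frame */
    for (int i = 0; i < nframes; i++) {
        if (deadline_usec[i] <= now_usec)
            pick = i;  /* this frame is due; any earlier pick is stale */
        else
            break;     /* later frames are not due yet */
    }
    return pick;
}
```

A player using this policy drops frames 0..pick-1 and presents frame
pick, keeping lip-sync at the cost of an occasional skip.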

Some of this may need new interfaces to the kernel DRM, but long term
we need them anyway. E.g., ioctl()s for true 64-bit vblank counts and
targets, or, in the case of AMD's Evergreen GPUs, for CRTC selection:
they have up to 6 CRTCs, but the vblank ioctl() only allows selecting
between 2 of them.

Gee, sorry, I should stop writing essays,
-mario
