RandR 1.4 restart

James Jones jajones at nvidia.com
Mon Mar 7 09:43:50 PST 2011


On Friday 04 March 2011 11:43:05 pm Keith Packard wrote:
> 
> On Fri, 4 Mar 2011 16:47:45 -0800, James Jones <jajones at nvidia.com> wrote:
> > On 3/1/11 6:56 PM, "James Jones" <jajones at nvidia.com> wrote:
> > > On Tuesday 01 March 2011 08:02:24 Keith Packard wrote:
> > > *snip*
> > > 
> > >> Scanout pixmaps get resized automatically when the associated crtc
> > >> gets a new mode. This lets a compositing manager deal with the
> > >> scanout pixmap creation while separate screen configuration
> > >> applications deal with mode setting. Of course, internally, this is
> > >> done by creating a new scanout pixmap and pointing the existing XID
> > >> at the new object. (Hrm. I wonder if there are implications for
> > >> DRI2/GLX/EGL here...)
> > 
> > I looked over the rest of the spec changes, and spent some more time
> > thinking about how bad it would be for GLX/EGL to have pixmaps resize.
> > It looks pretty heinous.  A lot of the badness comes from the
> > texture_from_pixmap/EGLimage extensions.  If the pixmap is bound to a
> > texture or EGL image, the size is derived from that pixmap.  At least
> > for GLX pixmaps, whether you want non-power-of-two textures (with
> > mipmapping support), normal power-of-two textures (with mipmapping
> > support), or rectangle textures (no mipmapping, different texel
> > coordinate system) also depends on the size of the texture.  The
> > texture target is selected at GLX pixmap creation time, so just
> > re-binding the pixmap wouldn't be sufficient to update all state.
> 
> Remember that internally (at least at this point), the pixmap *isn't*
> getting resized -- a new pixmap is allocated and automagically assigned
> to the crtc as a scanout pixmap, and then the pixmap ID is switched from
> the old scanout pixmap to the new one. So, if you've created a resource
> which references the pixmap inside the server, things should 'just
> work'. Any client-side reference which ends up sticking the pixmap ID on
> the wire will not work though.

OK, I thought you wanted to completely hide the resize from applications.  If 
the intent is to force GLX clients to re-create their GLX pixmaps when the 
underlying X pixmap resizes, then the design sounds clean, and should work 
fine as long as no one goes in and tries to "optimize" the destroy/create out 
of the server code.  Is there some way to bake that into the protocol spec so 
such an optimization would be non-compliant?

It seems a little ugly to require clients to know that RandR events can 
leave certain GLX pixmaps no longer referring to the same resource they 
were created against, but this is admittedly a special case, and one that 
shouldn't break any existing applications.
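
To make that contract concrete, here is roughly what I'd expect a composite 
manager to do when the scanout pixmap gets re-pointed.  This is only a 
sketch: the crtc_state bookkeeping and scanout_xid field are made up, but 
the calls are the existing GLX 1.3 and GLX_EXT_texture_from_pixmap entry 
points (assume they've been resolved through glXGetProcAddress, and that a 
context with the destination texture bound is current):

  #include <GL/glx.h>   /* GLX 1.3 + GLX_EXT_texture_from_pixmap tokens */

  struct crtc_state {            /* hypothetical compositor bookkeeping */
      GLXFBConfig fbconfig;
      GLXPixmap   glx_pixmap;
      Pixmap      scanout_xid;   /* the XID the server re-points */
  };

  static int pot(unsigned v) { return v && !(v & (v - 1)); }

  void crtc_scanout_resized(Display *dpy, struct crtc_state *st,
                            unsigned width, unsigned height)
  {
      /* The old GLXPixmap still references the old server object, and
       * its texture target was fixed at creation time, so re-binding
       * isn't enough.  Tear it down... */
      glXReleaseTexImageEXT(dpy, st->glx_pixmap, GLX_FRONT_LEFT_EXT);
      glXDestroyPixmap(dpy, st->glx_pixmap);

      /* ...pick a target appropriate for the new size... */
      int target = (pot(width) && pot(height)) ? GLX_TEXTURE_2D_EXT
                                               : GLX_TEXTURE_RECTANGLE_EXT;
      const int attribs[] = {
          GLX_TEXTURE_TARGET_EXT, target,
          GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
          None
      };

      /* ...and re-create against the same XID, which now names the new
       * scanout pixmap inside the server. */
      st->glx_pixmap = glXCreatePixmap(dpy, st->fbconfig,
                                       st->scanout_xid, attribs);
      glXBindTexImageEXT(dpy, st->glx_pixmap, GLX_FRONT_LEFT_EXT, NULL);
  }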

> > As an alternative to resizing the scanout pixmap, could you organize
> > the protocol such that when a RandR 1.4 composite manager is running,
> > requests to change to a pre-validated resolution are redirected to the
> > composite manager, based on something like SubstructureRedirectMask?
> > The composite manager would then create a new crtc pixmap if necessary
> > and apply the mode.  I haven't thought this through in great depth yet,
> > or discussed it with anyone else at NVIDIA.  Error conditions in
> > particular might be hard to get right.
> 
> I thought about this and it looked really ugly. Essentially, the entire
> mode set must pend waiting for the compositing manager to deal with
> it. Which means either doing the whole redirection request, including
> packing up the entire mode set operation and handing it to the
> compositing manager to re-execute, or blocking the RandR application
> inside the server waiting for the compositing manager to reply.
> 
> The former would be a fairly significant change for RandR applications,
> requiring that they pend on an event before continuing. The latter
> suffers from the usual live-lock adventure if the RandR application has
> grabbed a server lock. At least the server lock isn't required any more
> as we've got the atomic 'do everything' RandR operation now.

Agreed, those problems do sound harder to overcome.  I hand-waved myself past 
error-handling and replies when I thought this suggestion through.
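
(For the record, the redirection idea was just the existing 
substructure-redirect pattern applied to mode sets.  A minimal sketch of 
that pattern as window managers use it today, in plain Xlib:)

  /* Sketch: the redirection pattern the proposal borrowed.  Only one
   * client may select SubstructureRedirect on a given window; a
   * BadAccess error here means another manager already owns it. */
  XSelectInput(dpy, DefaultRootWindow(dpy),
               SubstructureRedirectMask | SubstructureNotifyMask);

  /* From here on, map/configure attempts by other clients arrive as
   * MapRequest/ConfigureRequest events instead of being executed by the
   * server, and the manager validates and replays them itself -- the
   * same shape the redirected-mode-set idea would have taken. */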
 
> I decided on the pixmap resizing as the least evil of these options, I'd
> love to know how to make this work cleanly with GLX pixmaps.

I don't know how to improve the interaction, but it sounds acceptable to me 
within the constraints I mention above.  An explicit "pixmap changed" event 
(I assume these resizes won't somehow generate a configure-notify on a 
pixmap...) might make things clearer, though it could be overkill.
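
In the absence of a new event, I'd expect the existing CRTC change 
notification to be the trigger.  A sketch with the current Xrandr client 
API, assuming dpy is an open Display:

  #include <X11/Xlib.h>
  #include <X11/extensions/Xrandr.h>

  /* Sketch: notice the re-point via today's RandR events.  Once this
   * fires, any GLXPixmap created against that CRTC's scanout pixmap
   * XID has to be treated as stale and re-created. */
  int rr_event_base, rr_error_base;
  XRRQueryExtension(dpy, &rr_event_base, &rr_error_base);
  XRRSelectInput(dpy, DefaultRootWindow(dpy), RRCrtcChangeNotifyMask);

  XEvent ev;
  XNextEvent(dpy, &ev);
  if (ev.type == rr_event_base + RRNotify &&
      ((XRRNotifyEvent *) &ev)->subtype == RRNotify_CrtcChange) {
      XRRCrtcChangeNotifyEvent *cc = (XRRCrtcChangeNotifyEvent *) &ev;
      /* cc->crtc now scans out of a new cc->width x cc->height pixmap
       * behind the same XID; rebuild the dependent GLX/EGL resources. */
  }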

I think mentioning the issue in the protocol updates would be valuable, 
though, so future GLX/EGL/VDPAU-based composite manager coders don't need 
to dig up this thread to understand what's going on with their CRTC pixmaps.

Thanks,
-James

