RandR 1.4 restart
Keith Packard
keithp at keithp.com
Fri Mar 4 23:43:05 PST 2011
On Fri, 4 Mar 2011 16:47:45 -0800, James Jones <jajones at nvidia.com> wrote:
> On 3/1/11 6:56 PM, "James Jones" <jajones at nvidia.com> wrote:
>
> > On Tuesday 01 March 2011 08:02:24 Keith Packard wrote:
> > *snip*
> >
> >> Scanout pixmaps get resized automatically when the associated crtc gets
> >> a new mode. This lets a compositing manager deal with the scanout pixmap
> >> creation while separate screen configuration applications deal with mode
> >> setting. Of course, internally, this is done by creating a new scanout
> >> pixmap and pointing the existing XID at the new object. (Hrm. I wonder if
> >> there are implications for DRI2/GLX/EGL here...)
>
> I looked over the rest of the spec changes, and spent some more time
> thinking about how bad it would be for GLX/EGL to have pixmaps resize. It
> looks pretty heinous. A lot of the badness comes from the
> texture_from_pixmap/EGLImage extensions. If the pixmap is bound to a
> texture or an EGL image, the texture size is derived from that pixmap. At
> least for GLX pixmaps, whether you want non-power-of-two textures (with
> mipmapping support), normal power-of-two textures (with mipmapping
> support), or rectangle textures (no mipmapping, different texel coordinate
> system) also depends on the size of the texture. The texture target is
> selected at GLX pixmap creation time, so just re-binding the pixmap
> wouldn't be sufficient to update all state.
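To make that concrete: the texture target is baked into the attribute list
passed to glXCreatePixmap(), so nothing re-runs that choice if the pixmap's
size changes later. A minimal sketch, assuming a compositor without NPOT
texture support and using GLX_EXT_texture_from_pixmap; the helper name and
the power-of-two test are illustrative, not code from any real driver:

#include <GL/glx.h>
#include <GL/glxext.h>

/* Illustrative only: pick a texture target for a pixmap of a given size.
 * The choice is frozen into the GLXPixmap at creation time. */
static GLXPixmap
wrap_pixmap_as_texture(Display *dpy, GLXFBConfig config,
                       Pixmap pixmap, unsigned width, unsigned height)
{
    int target = ((width & (width - 1)) == 0 && (height & (height - 1)) == 0)
               ? GLX_TEXTURE_2D_EXT          /* power-of-two: mipmappable */
               : GLX_TEXTURE_RECTANGLE_EXT;  /* otherwise: no mipmaps,
                                                unnormalized texel coords */
    const int attribs[] = {
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        GLX_TEXTURE_TARGET_EXT, target,
        None
    };
    return glXCreatePixmap(dpy, config, pixmap, attribs);
}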
Remember that internally (at least at this point), the pixmap *isn't*
getting resized -- a new pixmap is allocated and automagically assigned
to the crtc as a scanout pixmap, and then the pixmap ID is switched from
the old scanout pixmap to the new one. So, if you've created a resource
which references the pixmap inside the server, things should 'just
work'. Any client-side reference which ends up sticking the pixmap ID on
the wire will not work though.
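Internally, the switch might look something like the sketch below. This is a
rough approximation of what the server could do, not the actual RandR 1.4
code; the ChangeResourceValue() route, the helper name and the
scanout_pixmap field are assumptions.

#include "scrnintstr.h"  /* ScreenPtr, CreatePixmap/DestroyPixmap hooks */
#include "pixmapstr.h"   /* PixmapPtr */
#include "resource.h"    /* ChangeResourceValue() */
#include "randrstr.h"    /* RRCrtcPtr */

/* Hypothetical: allocate a scanout pixmap at the new mode's size and
 * repoint the existing XID at it. */
static Bool
rr_replace_scanout_pixmap(ScreenPtr screen, RRCrtcPtr crtc,
                          PixmapPtr old_pix, int width, int height)
{
    PixmapPtr new_pix = screen->CreatePixmap(screen, width, height,
                                             old_pix->drawable.depth, 0);
    if (!new_pix)
        return FALSE;

    /* Swap the object behind the XID so later lookups of the ID
     * resolve to the new pixmap. */
    if (!ChangeResourceValue(old_pix->drawable.id, RT_PIXMAP, new_pix)) {
        screen->DestroyPixmap(new_pix);
        return FALSE;
    }
    new_pix->drawable.id = old_pix->drawable.id;

    crtc->scanout_pixmap = new_pix;     /* field name assumed */
    screen->DestroyPixmap(old_pix);
    return TRUE;
}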
> As an alternative to resizing the scanout pixmap, could you organize the
> protocol such that when a RandR 1.4 composite manager is running, requests
> to change to a pre-validated resolution are redirected to the composite
> manager, based on something like SubstructureRedirectMask? The composite
> manager would then create a new crtc pixmap if necessary and apply the mode.
> I haven't thought this through in great depth yet, or discussed it with
> anyone else at NVIDIA. Error conditions might be hard to get right in
> particular.
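For reference, the core-protocol mechanism being alluded to is the
substructure redirection that window managers use today; a minimal sketch
of that pattern follows. A RandR analogue would need a new, currently
nonexistent redirect event carrying the whole mode set rather than one
window's configuration.

#include <X11/Xlib.h>

/* How SubstructureRedirectMask redirection works for window managers. */
static void
redirect_loop(Display *dpy)
{
    Window root = DefaultRootWindow(dpy);

    /* Only one client may select this; other clients' configure requests
     * on children of the root are delivered here instead of executed. */
    XSelectInput(dpy, root, SubstructureRedirectMask);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == ConfigureRequest) {
            XConfigureRequestEvent *req = &ev.xconfigurerequest;
            XWindowChanges wc = {
                .x = req->x, .y = req->y,
                .width = req->width, .height = req->height,
                .border_width = req->border_width,
                .sibling = req->above, .stack_mode = req->detail,
            };
            /* The redirecting client decides what actually gets applied. */
            XConfigureWindow(dpy, req->window, req->value_mask, &wc);
        }
    }
}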
I thought about this and it looked really ugly. Essentially, the entire
mode set must pend waiting for the compositing manager to deal with
it. That means either doing full request redirection, packing up the
entire mode set operation and handing it to the compositing manager to
re-execute, or blocking the RandR application inside the server while it
waits for the compositing manager to reply.
The former would be a fairly significant change for RandR applications,
requiring that they pend on an event before continuing. The latter
suffers from the usual live-lock adventure if the RandR application has
grabbed the server. At least the server grab isn't required any more,
as we've got the atomic 'do everything' RandR operation now.
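The live-lock case is the familiar one, sketched here with standard
Xlib/XRandR calls; the scenario is illustrative and assumes the server
would park the request while asking the compositing manager for a new
scanout pixmap.

#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

/* A typical mode-setting tool wraps its changes in a server grab. If the
 * server blocked this request waiting on the compositing manager, the
 * compositing manager's own requests could never be processed while the
 * grab is held: live-lock. */
static void
set_mode_under_grab(Display *dpy, XRRScreenResources *res, RRCrtc crtc,
                    RRMode mode, RROutput *outputs, int noutputs)
{
    XGrabServer(dpy);   /* other clients' requests stop being processed */

    XRRSetCrtcConfig(dpy, res, crtc, CurrentTime, 0, 0,
                     mode, RR_Rotate_0, outputs, noutputs);

    XUngrabServer(dpy);
    XFlush(dpy);
}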
I decided on pixmap resizing as the least evil of these options; I'd
love to know how to make this work cleanly with GLX pixmaps.
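One client-side possibility, which this thread doesn't settle on and which
is only a sketch: when the compositing manager learns the scanout pixmap
has been replaced (say via a RandR notify event), it releases the old
binding and recreates the GLXPixmap around the same XID. The helper name
and the assumption of NPOT texture support are mine.

#include <GL/glx.h>
#include <GL/glxext.h>

extern PFNGLXBINDTEXIMAGEEXTPROC    glXBindTexImageEXT;    /* via glXGetProcAddress */
extern PFNGLXRELEASETEXIMAGEEXTPROC glXReleaseTexImageEXT;

/* Hypothetical: tear down and recreate the GLX wrapper after the server
 * has repointed the pixmap XID at a new, resized scanout pixmap. */
static GLXPixmap
rebind_scanout(Display *dpy, GLXFBConfig config,
               Pixmap scanout_xid, GLXPixmap old_glxpix)
{
    /* The old GLXPixmap's size and texture target were fixed at creation,
     * so it has to go. Assumes the right texture object is bound in the
     * current context for the EXT_texture_from_pixmap calls. */
    glXReleaseTexImageEXT(dpy, old_glxpix, GLX_FRONT_LEFT_EXT);
    glXDestroyPixmap(dpy, old_glxpix);

    /* Recreate around the same XID, which now names the new pixmap. */
    const int attribs[] = {
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,   /* assumes NPOT support */
        None
    };
    GLXPixmap new_glxpix = glXCreatePixmap(dpy, config, scanout_xid, attribs);
    glXBindTexImageEXT(dpy, new_glxpix, GLX_FRONT_LEFT_EXT, NULL);
    return new_glxpix;
}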
--
keith.packard at intel.com