Proposal for per-CRTC pixmaps in RandR

Keith Packard keithp at keithp.com
Fri Mar 12 13:46:41 PST 2010


On Fri, 12 Mar 2010 12:47:13 -0800 (PST), Andy Ritger <aritger at nvidia.com> wrote:
> 
> Hi Keith, sorry for the slow response.

Thanks for reading through my proposal.

> However, you also mention overcoming rendering engine buffer size
> constraints.  This proposal seems unrelated to the size of the buffer
> being rendered to (I imagine we need something like shatter to solve that
> problem?)

Right, it isn't directly related to the rendering buffer size except as
it applies to the scanout buffer. Older Intel scanout engines handle
strides up to 8192 bytes, while the rendering engine only goes to 2048
pixels. At 16bpp, the scanout engine could handle a 4096-pixel-wide
buffer, but because the rendering engine can't draw to that, we still
need to limit the screen dimensions.
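
To make the arithmetic concrete (the numbers are the ones above; the
helper itself is only for illustration, not existing code):

#include <stdio.h>

/* The usable screen width is the smaller of the scanout limit
 * (stride divided by bytes per pixel) and the rendering limit. */
static int
max_width(int scanout_stride_bytes, int render_limit_pixels,
          int bytes_per_pixel)
{
    int scanout_limit = scanout_stride_bytes / bytes_per_pixel;

    return scanout_limit < render_limit_pixels
               ? scanout_limit
               : render_limit_pixels;
}

int
main(void)
{
    /* 16bpp: scanout could span 4096 pixels, rendering caps it at 2048 */
    printf("16bpp: %d pixels\n", max_width(8192, 2048, 2));
    /* 32bpp: both limits happen to agree at 2048 pixels */
    printf("32bpp: %d pixels\n", max_width(8192, 2048, 4));
    return 0;
}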

> Also, if part of the goal is to overcome scanout buffer size constraints,
> how would scanout pixmap creation and compositing work during X server
> initialization?  The X server has no knowledge during PreInit/ScreenInit
> that a composite manager will be used, or whether the composite manager
> will exit before the X server exits.

I had not considered this issue at all. Obviously the server will not be
able to show the desired configuration until the compositing manager has
started. I don't have a good suggestion for a perfect solution, but
offhand the configuration could either leave the monitor off or have it
mirror the desktop instead of extending it.

Other suggestions would be welcome.

> If the initial X server configuration requested in the X configuration
> file(s) specifies a configuration that is only possible when using
> per-scanout pixmaps, would the X server implicitly create these pixmaps
> and implicitly composite from the X screen into the per-scanout
> pixmaps?

No, that wouldn't be useful; our drawing engine could not draw to an X
screen pixmap of that size.

> While we're doing that, we should probably also add a mechanism for
> clients to query if the 'multi mode' is valid.  This would let savvy
> user interfaces do more intelligent presentation of what combinations
> of configurations are valid for presentation to the user.

Yes, that seems like a reasonable addition.

> In the case that the pointer sprite is transformed, I assume this would
> still utilize the hardware cursor when possible?

Yes, the affine transform for the sprite would be applied before the
hardware cursor image is loaded, just as rotation is handled today. This
leaves the desired transform under the client's control, so the client
can construct a suitable compromise that works with the projective
transform for each monitor.
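
What I have in mind is nothing fancier than inverse-mapping the cursor
image through the client-supplied transform before it is handed to the
hardware. A rough sketch (the structures and the convention that the
caller supplies the inverse matrix are mine, not server code):

#include <stdint.h>

struct cursor_image {
    int       width, height;
    uint32_t *argb;          /* width * height premultiplied ARGB pixels */
};

/* Map each destination pixel back through the inverse of the sprite
 * transform and sample the source cursor; nearest-neighbour sampling
 * keeps the sketch short.  Pixels that fall outside the source stay
 * transparent. */
static void
transform_cursor(const struct cursor_image *src,
                 struct cursor_image       *dst,
                 const double               inv[3][3])
{
    int x, y;

    for (y = 0; y < dst->height; y++) {
        for (x = 0; x < dst->width; x++) {
            double   w  = inv[2][0] * x + inv[2][1] * y + inv[2][2];
            double   sx = (inv[0][0] * x + inv[0][1] * y + inv[0][2]) / w;
            double   sy = (inv[1][0] * x + inv[1][1] * y + inv[1][2]) / w;
            uint32_t pixel = 0;

            if (sx >= 0 && sx < src->width && sy >= 0 && sy < src->height)
                pixel = src->argb[(int) sy * src->width + (int) sx];
            dst->argb[y * dst->width + x] = pixel;
        }
    }
}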

> Creating a window the size of the scanout pixmap and then plugging
> the scanout pixmap in as the window's backing pixmap feels a little
> backwards, sequentially.  If the intent is just to give the scanout
> pixmap double buffering, you should be able to create a GLXPixmap from
> the scanout pixmap, and create that GLXPixmap with double buffering
> through selection of an appropriate GLXFBConfig.  However, the GLX spec
> says that glXSwapBuffers is ignored for GLXPixmaps.

Precisely. The window kludge is purely to make existing GL semantics
work as expected.
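
Roughly, from the client's point of view it would look like this. The
request that attaches the scanout pixmap to the window is the new part
of the proposal, so the name below is made up; the GLX calls are the
standard ones, and error handling is omitted:

#include <X11/Xlib.h>
#include <GL/glx.h>

/* Wrap the scanout pixmap in an ordinary window so the usual
 * double-buffered GLX window path (and glXSwapBuffers) just works. */
static GLXWindow
wrap_scanout(Display *dpy, int screen, Window root,
             Pixmap scanout, int width, int height)
{
    static const int attribs[] = {
        GLX_DRAWABLE_TYPE, GLX_WINDOW_BIT,
        GLX_RENDER_TYPE,   GLX_RGBA_BIT,
        GLX_DOUBLEBUFFER,  True,
        None
    };
    int          n;
    GLXFBConfig *configs = glXChooseFBConfig(dpy, screen, attribs, &n);

    /* An ordinary window the size of the scanout pixmap... */
    Window win = XCreateSimpleWindow(dpy, root, 0, 0, width, height,
                                     0, 0, 0);

    /* ...whose backing pixmap is redirected to the scanout pixmap.
     * This request does not exist yet; the name is a placeholder. */
    /* RRSetWindowPixmap(dpy, win, scanout); */
    (void) scanout;

    /* From here on, existing GL window semantics apply unchanged. */
    return glXCreateWindow(dpy, configs[0], win, NULL);
}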

> However, a more flexible solution would be to provide a mechanism to
> create a scanout window (or maybe a whole window tree), rather than a
> scanout pixmap.  The proposed scanout window would not be a child of
> the root window.

All windows are children of the root window, and we already have a
mechanism for creating windows that are drawn to alternative
pixmaps. Reusing that mechanism gives us a simple way to express the
semantics here with existing operations.

The real goal is to get applications focused on the notion that pixmaps
contain pixels and that windows are mapped onto pixmaps.

> A composite manager could then optionally redirect this scanout window
> to a pixmap (or, even better, a queue of pixmaps).  The pixmaps could
> be wrapped by GLXPixmaps for OpenGL rendering.  If we also provided
> some mechanism to specify which of the backing pixmaps within the queue
> should be "presented", then an implementation could even flip between
> the pixmaps.

The existing GL window semantics seem to provide what a compositing
manager needs today; I don't see a huge benefit to switching to
pixmaps.
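
To spell out what I mean by the existing path: a compositing manager
already redirects the window with Composite and binds the named pixmap
through GLX_EXT_texture_from_pixmap, roughly like this (error handling
omitted; nothing here is new to the proposal):

#include <X11/Xlib.h>
#include <X11/extensions/Xcomposite.h>
#include <GL/glx.h>

typedef void (*BindTexImage)(Display *, GLXDrawable, int, const int *);

static GLXPixmap
redirect_and_bind(Display *dpy, Window scanout_win, GLXFBConfig config)
{
    static const int pixmap_attribs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };

    /* Redirect the window so its contents land in an off-screen pixmap. */
    XCompositeRedirectWindow(dpy, scanout_win, CompositeRedirectManual);
    Pixmap backing = XCompositeNameWindowPixmap(dpy, scanout_win);

    /* Wrap the backing pixmap so GL can use it as a texture. */
    GLXPixmap glx_pixmap = glXCreatePixmap(dpy, config, backing,
                                           pixmap_attribs);

    BindTexImage bind = (BindTexImage)
        glXGetProcAddress((const GLubyte *) "glXBindTexImageEXT");
    bind(dpy, glx_pixmap, GLX_FRONT_LEFT_EXT, NULL);

    return glx_pixmap;
}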

> I think this lines up with some of the multi-buffer presentation ideas
> Aaron Plattner presented at XDC in Portland last fall.

We can discuss multi-buffering of applications separately; let's focus
on how to fix RandR in this context.

Thanks very much for the comments. From what I've read, I believe we
need to add a request that tests whether a specific configuration would
work without also applying it. I would like to avoid having to enumerate
all possible configurations, though; that list gets huge. I welcome
additional thoughts on how to make this kind of information available
to applications.
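
As a straw man, the wire layout could mirror RRSetCrtcConfig minus the
side effects. Everything below is hypothetical, just to make the shape
concrete:

#include <X11/Xmd.h>

/* Hypothetical "test but don't apply" request, modeled on the existing
 * xRRSetCrtcConfigReq layout in randrproto.h.  None of this exists yet;
 * the names and the minor opcode are placeholders. */
typedef struct {
    CARD8  reqType;          /* RandR extension major opcode */
    CARD8  randrReqType;     /* e.g. X_RRTestCrtcConfig (placeholder) */
    CARD16 length;
    CARD32 crtc;             /* RRCrtc */
    CARD32 timestamp;
    CARD32 configTimestamp;
    INT16  x, y;
    CARD32 mode;             /* RRMode */
    CARD16 rotation;
    CARD16 nOutput;          /* followed by nOutput RROutput values */
} xRRTestCrtcConfigReq;      /* 28 bytes + 4 * nOutput on the wire */

/* The reply only needs to say whether the configuration would work. */
typedef struct {
    BYTE   type;             /* X_Reply */
    CARD8  status;           /* RRSetConfigSuccess or a failure code */
    CARD16 sequenceNumber;
    CARD32 length;
    CARD32 pad[6];
} xRRTestCrtcConfigReply;    /* standard 32-byte reply */

A version that takes the whole proposed set of CRTCs in one request
might be better, since some constraints are global; that still avoids
enumerating every combination.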

-- 
keith.packard at intel.com