Putting a pixmap into a window every frame

Carsten Haitzler raster at rasterman.com
Mon Aug 23 21:24:20 UTC 2021


On Mon, 23 Aug 2021 14:34:58 +0100 "Andrew Bainbridge" <andy at deadfrog.co.uk>
said:

> Great reply. Thanks for taking the time.
> 
> Carsten Haitzler wrote:
> > I now can only talk
> > about mine and it does NOT do the above. There is a parent frame window but
> > it is identical in size to the client. It's just used for control. The
> > frame/border is drawn inside the compositor itself...
> 
> Very sensible. If other people here know if other WMs work this way, I'd be
> interested to know.
> 
> > even with Xpresent, you will be doing a copy from your
> > X(shm)Putimage to the pixmap, THEN presenting that pixmap (maybe will be
> > another copy - details above). So on a best case basis you have just as many
> > copies as going directly to the window, at worst it may be 2x the copies.
> > Admittedly the copies here will probably be on-GPU as opposed to the
> > PutImage which will be a CPU -> GPU copy. So there still may be a copy and
> > this still may have tearing happen
> 
> This is the paragraph that straightened out all my misunderstandings. Thanks!
> 
> In one of Keith Packard's presentations, he shows a modified xeyes using the
> Present extension to draw without tearing. But I think he says compositing is
> disabled. I thought this was just because it was a work-in-progress demo. But
> you are telling me that even now the system is fully implemented, the Present
> extension gives no guarantee of tear free rendering if there is a compositor.
> Have I got that right?

Oh sorry - a copy to the window (X(Shm)PutImage) may tear. The Present
extension should not tear if done right, BUT if the wm reparents into a
frame window the path will be a copy - and that copy done by xpresent will
happen on the xserver side. The compositor will take whatever pixmaps it
sees and wrap them with an opengl texture, and when it uses opengl to
render, that rendering is driven from the client side (the compositor is
just another x client), so the xserver may be in the middle of its copy
while the gpu happens to be using the same pixmap as a source texture, and
thus tearing may happen. You'd need the compositor's opengl side, when it
binds and then uses that texture in its opengl pipeline, to hold a lock on
that pixmap while this is in flight... I don't think this is happening. BUT
if the scheme above, with the frame drawn by the compositor as part of the
compositor's own rendering, is done with the client pixmap buffer == just
the client window, then this can devolve to a buffer exchange and thus be
either A or B. This is why we chose to do the compositing, reparenting and
rendering this way - it's the best chance of this becoming a tear-free
exchange IF the client uses opengl or xpresent or xdbe. The rest of the
work is then on the xserver/driver side to ensure this actually happens.
Clients (apps and compositor/wm) have done all they can to make it work.
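
To make the "wrap it with an opengl texture" part concrete, below is a
rough sketch of the GLX_EXT_texture_from_pixmap path a GL compositor
typically uses. It's an illustration only - not Enlightenment's actual
code - and it assumes the display, a GLX_BIND_TO_TEXTURE_RGBA_EXT-capable
fbconfig, a current GL context and a texture object already exist:

#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/glxext.h>

/* Sketch only: how a GL compositor typically samples a client's window
 * pixmap. Assumes a GLX_BIND_TO_TEXTURE_RGBA_EXT-capable fbconfig, a
 * current GL context and a texture object already exist. */
static void draw_client_pixmap(Display *dpy, GLXFBConfig fbconfig,
                               Pixmap client_pixmap, GLuint tex)
{
    /* The texture_from_pixmap entry points are looked up at runtime. */
    PFNGLXBINDTEXIMAGEEXTPROC bind_tex_image =
        (PFNGLXBINDTEXIMAGEEXTPROC)
        glXGetProcAddress((const GLubyte *)"glXBindTexImageEXT");
    PFNGLXRELEASETEXIMAGEEXTPROC release_tex_image =
        (PFNGLXRELEASETEXIMAGEEXTPROC)
        glXGetProcAddress((const GLubyte *)"glXReleaseTexImageEXT");

    const int attrs[] = {
        GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
        GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
        None
    };
    /* Real compositors keep this GLXPixmap around; created here for brevity. */
    GLXPixmap glxpix = glXCreatePixmap(dpy, fbconfig, client_pixmap, attrs);

    glBindTexture(GL_TEXTURE_2D, tex);
    bind_tex_image(dpy, glxpix, GLX_FRONT_LEFT_EXT, NULL);

    /* ... draw a textured quad for the window here. Note nothing stops the
     * X server from copying into client_pixmap while the GPU samples it -
     * which is exactly where tearing can creep in. */

    release_tex_image(dpy, glxpix, GLX_FRONT_LEFT_EXT);
    glXDestroyPixmap(dpy, glxpix);
}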

> > In theory you could allocate your own DMABUFs and use DRI2 protocol -
> > software render into the mmaped dmabuf then show it like opengl does.
> 
> Sounds like a fun challenge. I might attempt that one day.

:) FYI we do this for software rendering with wayland - where possible we
try to allocate some buffers with drm and then render into them (wayland is
a different display protocol and it works entirely around the idea of
buffer swapping, with clients sending buffers to the compositor to then
display).

I have toyed with the idea in x11 of mimicking this with clients using x
client messages + an x property on a window to say "please use this pixmap
id now for display". It's like xpresent but explicitly sending multiple
pixmap IDs to the compositor to go and use (with some locking or ownership
rules around this), and then tearing can be eliminated. It'd need a
compositor to create this extension and then clients to catch up. At least
in my world I also work on a client-side toolkit (EFL), so I can modify
that to do this too and thus at least have some apps benefit from it. If
others follow suit then it may become even more useful (e.g. what you are
doing).
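
Just to show the shape of that idea, here's a minimal sketch of what the
client side of such a scheme might look like. This protocol does NOT
exist - the atom name, message layout and any locking/ownership rules
below are invented purely for illustration:

#include <X11/Xlib.h>
#include <string.h>

/* Hypothetical sketch only: nothing listens for this today. */
static void hypothetical_present_pixmap(Display *dpy, Window win, Pixmap pix)
{
    /* Invented atom name - a real scheme would need an agreed, documented
     * name plus locking/ownership rules for the pixmaps. */
    Atom msg = XInternAtom(dpy, "__HYPOTHETICAL_DISPLAY_PIXMAP", False);

    XEvent ev;
    memset(&ev, 0, sizeof(ev));
    ev.xclient.type         = ClientMessage;
    ev.xclient.window       = win;       /* the client's window */
    ev.xclient.message_type = msg;
    ev.xclient.format       = 32;
    ev.xclient.data.l[0]    = (long)pix; /* pixmap the compositor should show */

    /* Send via the root window so a compositor selecting substructure
     * events there can pick it up. */
    XSendEvent(dpy, DefaultRootWindow(dpy), False,
               SubstructureNotifyMask | SubstructureRedirectMask, &ev);
    XFlush(dpy);
}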

> > As for waiting for compositor to be ready - you can't do that. You don't
> > know when the compositor will consume your pixmap and updates and even if
> > it will consume it at all. It may choose not to update/render your window
> > (it's hidden, it may be dropping down to only rendering every 4th frame or
> > something). The best thing for you to do is either render with a fixed
> > timer (eg at 60hz) do that on your side, open /dev/dri/card0 and try get
> > vblank events (use libdrm to do this), or probably a bit better is to use
> > the xpresent (XPresentNotifyMSC() to request events for screen refreshes).
> 
> If I understand correctly, here you list a bunch of reasons I should get the
> compositor to tell me when it wants a frame, and at the same time you tell me
> it isn't possible. :-(

No compositor will tell you - they don't do such a thing. Wayland, on the
other hand, does do this... :) There is no protocol around x compositing
(that I know of) where a client can say "send me messages when you want a
frame". I could build one - it's not too hard, and all the machinery to
handle it is in the compositor already (well, enlightenment has it) - but
I'd be creating a new protocol over x11 to do it. I haven't done that at
this stage, so there is nothing for you to go and write against, as no one
will be listening on the other end. Of course... all of this has to start
somewhere. Some enterprising wm/compositor author writes this, maybe ports
some sample clients to show it off, maybe adds it to a toolkit or 2,
documents it, and then you can use it - but only if it exists. You still
need a fallback for when it does not, so the above is what you should do
anyway for the base case. Basically, first implement the path that will
ALWAYS work (XPutImage + a local timer). Then begin to add the
improvements: detect if xshm will work to speed up the writes to the
window; then try xpresent to see if the extension exists and ask it for
vsync events to draw with; if not, try libdrm + vblank events etc.; and
also maybe PutImage to an xpresent backing pixmap and then present it -
hopefully less tearing but probably extra copies - maybe detecting your
compositor and only doing this if you know the compositor being used
allows xpresent to avoid copies.
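
As a concrete starting point, here is a minimal sketch of that always-works
base case: plain XPutImage driven by a fixed local timer. It assumes the
common 24-bit TrueColor default visual and omits error handling:

#include <X11/Xlib.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);
    int w = 640, h = 480;

    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, w, h,
                                     0, 0, BlackPixel(dpy, scr));
    XSelectInput(dpy, win, ExposureMask);
    XMapWindow(dpy, win);
    GC gc = XCreateGC(dpy, win, 0, NULL);

    /* CPU-side framebuffer; assumes the default visual is 24-bit TrueColor
     * packed into 32-bit pixels (the common case). */
    uint32_t *pixels = malloc((size_t)w * h * 4);
    XImage *img = XCreateImage(dpy, DefaultVisual(dpy, scr),
                               DefaultDepth(dpy, scr), ZPixmap, 0,
                               (char *)pixels, w, h, 32, w * 4);

    struct timespec frame = { 0, 16666667 }; /* fixed ~60Hz local timer */
    for (unsigned int f = 0; ; f++) {
        while (XPending(dpy)) { XEvent ev; XNextEvent(dpy, &ev); }

        /* "Render": cycle a solid colour so updates are visible. */
        for (int i = 0; i < w * h; i++)
            pixels[i] = 0xff000000 | ((f * 3) & 0xff);

        /* Copy the CPU buffer to the window - may tear, but always works. */
        XPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h);
        XFlush(dpy);
        nanosleep(&frame, NULL);
    }
}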

> At least it means I don't have much work to do - my library already does
> XPutImage() and waits for the vblank event as you describe. I just need to
> add the SHM interface.

Shm will be a huge step/improvement - you're pretty much in a good state
with that. TBH, at this point I think you should look into opengl instead.
It'll buy you at least the same swap path as all other opengl rendering,
plus gpu acceleration. Maybe combine it with xpresent for vblank events as
an option and you're then pretty much about as good as it'll get.
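
For reference, the MIT-SHM setup itself is small - roughly the sketch
below. It only works for a local display, and it skips error handling and
the ShmCompletion event dance:

#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Sketch: create one shared-memory XImage, then push frames with
 * XShmPutImage instead of XPutImage. Client and X server must be on the
 * same machine. */
static XImage *shm_image_new(Display *dpy, int scr, int w, int h,
                             XShmSegmentInfo *shminfo)
{
    if (!XShmQueryExtension(dpy)) return NULL;

    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, shminfo, w, h);

    shminfo->shmid = shmget(IPC_PRIVATE,
                            (size_t)img->bytes_per_line * img->height,
                            IPC_CREAT | 0600);
    shminfo->shmaddr = img->data = shmat(shminfo->shmid, NULL, 0);
    shminfo->readOnly = False;

    XShmAttach(dpy, shminfo);
    XSync(dpy, False);                      /* make sure the server attached */
    shmctl(shminfo->shmid, IPC_RMID, NULL); /* auto-clean when both detach */
    return img;
}

/* Per frame: render into img->data, then
 *   XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h, False);
 *   XFlush(dpy);
 * (pass True as the last argument and wait for the ShmCompletion event if
 * you want to know when the server has finished reading the buffer). */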

> Is there a reason xpresent events would be better for me than the vblank
> event?

Multiple screens. Handling them with vblank events via libdrm is some fun.
Also handling screen blank and unblank and hiccups in the vblank events,
etc. - I've had to do workarounds for broken dri devices over the years
with these events (e.g. detect that they don't send out vblank events when
you ask, add timeouts for when it takes too long, fall back to a cpu-side
timer etc.). Xpresent should at least hide the multi-screen problem from
you rather than make you deal with it.
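
If you go the xpresent route for refresh events, the setup is roughly the
sketch below (based on libXpresent; treat the event-struct details as an
approximation and check Xpresent.h):

#include <X11/Xlib.h>
#include <X11/extensions/Xpresent.h>

/* Sketch: ask the Present extension for per-refresh events on a window and
 * pace drawing off them, rather than chasing per-screen vblanks via libdrm. */
static void present_event_loop(Display *dpy, Window win)
{
    int opcode, evbase, errbase, maj, min;

    if (!XPresentQueryExtension(dpy, &opcode, &evbase, &errbase)) return;
    XPresentQueryVersion(dpy, &maj, &min);

    XPresentSelectInput(dpy, win, PresentCompleteNotifyMask);
    /* target_msc 0 / divisor 0: get an event straight away carrying the
     * current msc, which we then use to schedule the next one. */
    XPresentNotifyMSC(dpy, win, 0, 0, 0, 0);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == GenericEvent && ev.xgeneric.extension == opcode &&
            XGetEventData(dpy, &ev.xcookie)) {
            if (ev.xcookie.evtype == PresentCompleteNotify) {
                XPresentCompleteNotifyEvent *ce = ev.xcookie.data;

                /* ... draw the next frame here ... */

                /* Ask to be told about the refresh after the one we saw. */
                XPresentNotifyMSC(dpy, win, 0, ce->msc + 1, 0, 0);
            }
            XFreeEventData(dpy, &ev.xcookie);
        }
    }
}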

-- 
------------- Codito, ergo sum - "I code, therefore I am" --------------
Carsten Haitzler - raster at rasterman.com


