Should Weston wait for client buffers to finish rendering?

Singh, Satyeshwar satyeshwar.singh at intel.com
Wed Jan 23 18:43:52 UTC 2019


Hey guys,
As you know, Weston doesn't wait for client buffers to finish rendering; that synchronization is typically left to the kernel mode graphics driver. I am wondering if anyone knows why this policy decision was made. More importantly, is there any harm (or side effect) I am not thinking of if the policy were changed so that the compositor did wait for client buffers to finish rendering, and only composited buffers that had finished?
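
For concreteness, here is a minimal sketch (not Weston code) of the kind of check I have in mind: a non-blocking test of whether a client buffer's rendering has finished, assuming implicit synchronization and assuming the kernel's dma-buf poll() reports the exclusive (write) fence via POLLIN.

#include <poll.h>
#include <stdbool.h>

/* Assumption: with implicit sync, poll()ing a dma-buf fd for POLLIN
 * becomes readable once the exclusive (write) fence has signaled,
 * i.e. the client's GPU rendering into the buffer has completed. */
static bool
buffer_render_finished(int dmabuf_fd)
{
        struct pollfd pfd = {
                .fd = dmabuf_fd,
                .events = POLLIN,
        };

        /* Zero timeout: just query, never block the compositor. */
        if (poll(&pfd, 1, 0) > 0)
                return (pfd.revents & POLLIN) != 0;

        return false;
}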
Imagine a benchmark case where the client renders, say, 800 frames and attaches the buffers to a surface, and the compositor composites and displays the last buffer that came in before its repaint cycle started. That buffer may not have been rendered by the GPU yet, because the GPU is still working on previous buffers. If it doesn't finish before the next vblank, then the compositor's scanout buffer also isn't going to be displayed by the kernel driver. If we change the policy so that the compositor always uses the last finished buffer, then at least the compositor's scanout buffer will be displayed at the next vblank, even if it isn't showing the client's latest frame.
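
To make the proposed policy concrete, here is a rough sketch of the repaint-time selection it implies. The types are hypothetical, and it assumes the compositor holds on to a few recently attached buffers per surface instead of releasing all but the newest one right away:

#include <stdbool.h>
#include <stddef.h>

bool buffer_render_finished(int dmabuf_fd);   /* from the sketch above */

/* Hypothetical per-surface list of attached buffers, newest first. */
struct pending_buffer {
        struct pending_buffer *older;
        int dmabuf_fd;
};

/* Pick the newest buffer whose rendering has already completed. */
static struct pending_buffer *
pick_buffer_for_repaint(struct pending_buffer *newest)
{
        struct pending_buffer *b;

        for (b = newest; b != NULL; b = b->older)
                if (buffer_render_finished(b->dmabuf_fd))
                        return b;

        /* Nothing ready yet: keep showing last frame's buffer. */
        return NULL;
}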
Thoughts?
-Satyeshwar
