Client persistence across GPU hot-swap
ppaalanen at gmail.com
Thu Sep 18 00:47:42 PDT 2014
On Wed, 17 Sep 2014 21:11:53 +0400
Kosyrev Serge <_deepfire at feelingofgreen.ru> wrote:
> Good day, folks!
> I'd like some light to be shed on the general area of GPU
> hotplug/hotremoval, and in particular, on how this is supposed to affect
> client persistence.
> What is the state of multi-card output?
Which software components do you want to know about? In Weston it is
not implemented, only one card gets used per running Weston instance.
> Is it possible to have drm_outputs replaced with (non-GL) clients surviving?
> A specific scenario I have in mind is:
> - weston initially starts with an accelerated DRM device (#1)
> - some fairly pedestrian (plain GTK/Qt) clients start
> - a dumb framebuffer DRM device (#2) is hot-added
> - the accelerated device (#1) is removed
Weston does not support any hot-add or removal at the moment.
Do you mean that the clients actually use hardware acceleration?
If they don't, I don't see any problem for the clients as there is
nothing to be communicated or reinitialized. It would be purely a
compositor thing. But that is not an interesting case, so let's assume
acceleration was indeed used.
> Is it possible to even express this with the current infrastructure?
I'm not sure. I suppose EGL has some way to signal that the context
has been lost, but I don't know if that is enough.
It is up to apps/toolkits to handle the loss of context.
> What is the estimate for the survival of the clients, if so?
I would guess zero right now.
I believe a more pressing matter is to support multiple GPUs in the
first place through the whole stack, before starting to hot-plug them.
A part of that problem is how the details of passing buffers between
different GPUs are solved, which I think is still a somewhat open
question. Dmabuf can pass bags of bytes, but the interpretation details
and synchronization are unsolved or WIP, AFAIK.
Sorry I couldn't be more helpful.