[virglrenderer-devel] multiprocess model and GL
Frank Yang
lfy at google.com
Sat Jan 18 00:06:16 UTC 2020
On Fri, Jan 17, 2020 at 3:41 PM Chia-I Wu <olvaffe at gmail.com> wrote:
> On Thu, Jan 16, 2020 at 11:29 PM Gerd Hoffmann <kraxel at redhat.com> wrote:
> >
> > On Thu, Jan 16, 2020 at 12:33:25PM -0800, Chia-I Wu wrote:
> > > > On Thu, Jan 16, 2020 at 4:58 AM Gerd Hoffmann <kraxel at redhat.com> wrote:
> > > >
> > > > On Mon, Jan 13, 2020 at 01:03:22PM -0800, Chia-I Wu wrote:
> > > > > Sorry I missed this email.
> > > > >
> > > > > > On Thu, Jan 9, 2020 at 12:54 PM Dave Airlie <airlied at gmail.com> wrote:
> > > > > >
> > > > > > This is just an aside to the issue discussion, but I was wondering:
> > > > > > before we heavily consider a vulkan multi-process model, could
> > > > > > we/should we prove a GL multi-process model first?
> > > > >
> > > > > I think there will just be the qemu process at first, with many GL
> > > > > contexts and one VkInstance for each guest VkInstance. Then there
> > > > > will be a switch to run different VkInstance in different processes
> > > > > (unlikely, but I wonder if this can be a vk layer). I did not plan
> > > > > for a multi-process GL model. Do you have anything you want to
> > > > > prove from it?
> > > >
> > > > Right now we have two models:
> > > > - Everything in qemu.
> > > > - Separate virgl process (see contrib/vhost-user-gpu/ in qemu),
> > > > but still all gl contexts in a single process.
> > > >
> > > > We could try to switch vhost-user-gpu to a one-process-per-context
> > > > model. I think it makes sense to at least think about the resource
> > > > management implications this would have (it would make virgl work
> > > > similar to vulkan):
> > > >
> > > > - We would need a master process. It runs the virtqueues and
> > > >   manages the resources.
> > >
> > > In the distant future where we will be Vulkan-only, we will not want
> > > GL-specific paths. If we are to do multi-process GL now,
> >
> > [ Note; I don't think it buys us much to actually do that now, we have
> > enough to do even without that. But we should keep that in mind
> > when designing things ... ]
> >
> > > I think we should follow the multi-process Vulkan model, in the sense
> > > that GL resources should also be created in the per-context processes
> >
> > Yep, that would be good, we would not need a dma-buf for each and every
> > resource then. Problem here is backward compatibility. We simply can't
> > do that without changing the virtio protocol.
> >
> > So, I guess the options we have are:
> > (1) keep virgl mostly as-is and live with the downsides (which should
> > not be that much of a problem as long as one process manages all
> > GL contexts), or
> > (2) create virgl_v2, where resource management works very similarly to
> > the vulkan way of doing things, require the guest using that to
> > run gl+vk side-by-side. Old guests without vk support could
> > continue to use virgl_v1
>
> (1) still requires defining interop with vk. (2) seems like a
> reasonable requirement given that both drivers will be built from
> mesa. But there are also APIs that prefer a simple interface like
> virgl_v1 for allocating resources yet still require interop with vk. I
> guess both sound fine to me.
>
> The three resource models currently on the table are
>
> (A) A resource in the guest is a global driver object in the host.
> The global driver object is usable by all contexts and qemu.
> (B) A resource in the guest is a local driver object in the main
> renderer process in the host. VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE
> creates attachments and each attachment is a local object in a
> per-context process. VIRTIO_GPU_CMD_SET_SCANOUT creates a local
> object in qemu process.
> (C) A resource in the guest is an fd in the main renderer process in
> the host. The fd may be created locally by the main renderer process
> (e.g., udmabuf) or received from a per-context process.
> VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE sends the fd to another per-context
> process. VIRTIO_GPU_CMD_SET_SCANOUT works similarly to (B).
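
[Editor's note: a toy sketch of where the driver object lives in each of the
three models above. All class and field names are hypothetical, not
virglrenderer API; only the ownership structure is taken from the thread.]

```python
# Toy bookkeeping for resource models (A), (B), and (C). Hypothetical
# names; this only illustrates where the host-side object lives.

class ModelA:
    """(A) one global host object per guest resource, visible to all
    contexts and to qemu."""
    def __init__(self):
        self.objects = {}                      # res_id -> global driver object
    def create(self, res_id):
        self.objects[res_id] = object()
    def lookup(self, ctx_id, res_id):
        return self.objects[res_id]            # same object for every context

class ModelB:
    """(B) the main renderer process owns the object; ATTACH_RESOURCE
    creates a per-context import (one dma-buf per attachment)."""
    def __init__(self):
        self.main_objects = {}                 # res_id -> main-process object
        self.attachments = {}                  # (ctx_id, res_id) -> local import
    def create(self, res_id):
        self.main_objects[res_id] = object()
    def attach(self, ctx_id, res_id):
        self.attachments[(ctx_id, res_id)] = object()  # stands in for dma-buf import

class ModelC:
    """(C) the main renderer process only tracks an fd; ATTACH_RESOURCE
    sends the fd to the per-context process."""
    def __init__(self):
        self.fds = {}                          # res_id -> fd
    def create(self, res_id, fd):
        self.fds[res_id] = fd                  # e.g. a udmabuf, or from a context
    def attach(self, ctx_id, res_id):
        return self.fds[res_id]                # fd passed over a socket in reality

a = ModelA(); a.create(1)
assert a.lookup(100, 1) is a.lookup(101, 1)    # (A): one shared object

b = ModelB(); b.create(1); b.attach(100, 1); b.attach(101, 1)
assert len(b.attachments) == 2                 # (B): one import per context
```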
>
> (A) is the current model and does not support VK/GL interop.
This should be possible; it's pretty much what the Android Emulator has
already been doing for quite some time to get interop between GL and
Vulkan. It separates the concept of a virtio-gpu resource ID from GL/Vk
texture/external-memory IDs. A single virtio-gpu resource ID (aka
ColorBuffer handle or hostHandle in Android Emulator parlance) can be
associated with both an external-memory VkImage and a GL texture
underneath, and the host uses either the VkImage or the GL texture
depending on who is calling, after adding a bit of sync. Thus it achieves
VK/GL interop by making the host do the work. In particular, if the host
supports GL_EXT_memory_object, the interop may be done in HW.
The Android Emulator's workflow, GL -> Vk:
1. The guest creates a gralloc buffer or an EGL drawable, which creates a
GL texture on the host:
https://android.googlesource.com/device/generic/goldfish-opengl/+/refs/heads/master/system/egl/egl.cpp#667
https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emugl/host/libs/libOpenglRender/ColorBuffer.cpp#195
Now there is a global ID associated with the ColorBuffer, which is 1:1
with virtio-gpu resource IDs (as evidenced by the minigbm backend for
Gralloc):
https://android.googlesource.com/device/generic/goldfish-opengl/+/refs/heads/master/system/OpenglSystemCommon/HostConnection.cpp#227
2. The guest imports that memory to Vk:
https://android.googlesource.com/device/generic/goldfish-opengl/+/refs/heads/master/system/vulkan_enc/ResourceTracker.cpp#1823
and sends that same ID back to the host:
https://android.googlesource.com/device/generic/goldfish-opengl/+/refs/heads/master/system/vulkan_enc/AndroidHardwareBuffer.cpp#168
3. The host reads the request to share GL memory with Vk:
https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emugl/host/libs/libOpenglRender/vulkan/VkDecoderGlobalState.cpp#2429
and creates an external Vk image:
https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emugl/host/libs/libOpenglRender/vulkan/VkCommonOperations.cpp#1200
and, if HW interop is supported, the previous GL texture is destroyed and
replaced with the Vk image:
https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emugl/host/libs/libOpenglRender/vulkan/VkCommonOperations.cpp#1328
via GL_EXT_memory_object:
https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emugl/host/libs/libOpenglRender/ColorBuffer.cpp#898
and the Vk image is updated with the previous contents, if any:
https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emugl/host/libs/libOpenglRender/ColorBuffer.cpp#961
If HW interop is not supported, appropriate calls are inserted at
particular places to manually copy to/from the VkImage (e.g., on window
surface flush):
https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emugl/host/libs/libOpenglRender/RenderControl.cpp#724
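
[Editor's note: a schematic of the dual-backed ColorBuffer described above.
The real code lives in ColorBuffer.cpp / VkCommonOperations.cpp; the class
and field names here are illustrative only, and real GL/Vk handles are
replaced by strings.]

```python
# Sketch of a ColorBuffer backed by both a GL texture and a VkImage,
# keyed by a single virtio-gpu resource ID. Illustrative names only.

class ColorBuffer:
    def __init__(self, res_id, hw_interop):
        self.res_id = res_id                 # 1:1 with the virtio-gpu resource ID
        self.gl_texture = f"gltex:{res_id}"  # created at gralloc/EGL alloc time
        self.vk_image = None
        self.hw_interop = hw_interop         # host supports GL_EXT_memory_object?

    def import_to_vk(self):
        """Host side of step 3: create an external VkImage for this handle."""
        self.vk_image = f"vkimg:{self.res_id}"
        if self.hw_interop:
            # Rebind the GL texture on top of the VkImage's memory so both
            # APIs share one allocation (the GL_EXT_memory_object path).
            self.gl_texture = f"gltex-on-{self.vk_image}"
        # else: keep both backings and copy between them at sync points

    def flush(self):
        """Without HW interop, GL contents are copied into the VkImage
        manually at sync points such as window surface flush."""
        if not self.hw_interop and self.vk_image is not None:
            return ("copy", self.gl_texture, self.vk_image)
        return None                          # HW interop: no manual copy needed

cb = ColorBuffer(7, hw_interop=False)
cb.import_to_vk()
assert cb.flush() == ("copy", "gltex:7", "vkimg:7")
```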
> (B) is
> designed to be compatible with (A) and the current virtio protocol.
> It allows multi-process GL as well as VK/GL interop, but it requires a
> dma-buf for each resource even when not really shared.
>
> (C) is the Vulkan model, but it is unclear how
> VIRTIO_GPU_CMD_RESOURCE_CREATE_3D works. I think we can think of the
> main process as a simple allocator as well.
> VIRTIO_GPU_CMD_RESOURCE_CREATE_3D makes the main process allocate
> (from GBM or GL) and create an fd, just like how the main process can
> allocate a udmabuf. This way this model can work with option (1).
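
[Editor's note: a sketch of model (C) with the main process acting as a
simple allocator, as proposed above. The command names come from the
thread; the class, the fake fd numbering, and the backends are hypothetical
stand-ins for gbm/GL allocation plus fd export.]

```python
# Model (C): VIRTIO_GPU_CMD_RESOURCE_CREATE_3D allocates in the main
# process and records an fd, exactly as it would record a udmabuf;
# ATTACH_RESOURCE then hands that fd to a per-context process.

class MainProcess:
    def __init__(self):
        self.next_fd = 3
        self.resources = {}                    # res_id -> fd

    def _allocate(self, backend):
        # stand-in for a GBM/GL/udmabuf allocation followed by fd export
        fd, self.next_fd = self.next_fd, self.next_fd + 1
        return fd

    def resource_create_3d(self, res_id, backend="gbm"):
        self.resources[res_id] = self._allocate(backend)

    def ctx_attach_resource(self, ctx_id, res_id):
        # in reality: SCM_RIGHTS send of the fd to the per-context process
        return self.resources[res_id]

main = MainProcess()
main.resource_create_3d(1)
fd_a = main.ctx_attach_resource(100, 1)
fd_b = main.ctx_attach_resource(101, 1)
assert fd_a == fd_b                            # both contexts share one allocation
```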
>
>
>
> > > > - We would need a per-context process.
> > > > - VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE would make master dma-buf
> > > >   export a resource and the per-context process import it. Sharing
> > > >   resources works by calling VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE
> > > >   multiple times for different contexts, and therefore master
> > > >   passing the dma-buf to multiple per-context processes.
> > >
> > > I would like to see the export and import parts separated out and
> > > executed by other commands, but all three commands can be sent
> > > together by the guest kernel when it makes sense. Especially for the
> > > import part, the guest vulkan wants to pass some metadata and specify
> > > an object id for the imported driver object.
> >
> > I don't want to call this import/export, that term is overloaded too much
> > already. Also the "export" is needed for more than just export. It is
> > needed for everything the guest needs a gem bo for (mmap, scanout, ...).
> >
> > I guess something along the lines of OBJECT_TO_RESOURCE (add virtio
> > resource for vulkan object, aka "export") and RESOURCE_TO_OBJECT
> > ("import") would be better.
> >
> > Does GL have object IDs too?
> No. A resource in the guest is already a global GL object in the
> host. VIRTIO_GPU_CMD_SUBMIT_3D can use the resource ids directly.
>
> Vulkan wants object ids because there might be no resource ids when
> resources are not needed. When there are resource ids, they might
> point to fds and importing them as objects is not trivial. Also the
> same resource id might be imported multiple times to create multiple
> objects.
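
[Editor's note: the OBJECT_TO_RESOURCE / RESOURCE_TO_OBJECT proposal, plus
the point that the same resource id may be imported multiple times, can be
sketched as follows. The two command names come from the thread; the class
and its fields are illustrative only.]

```python
# Per-context bookkeeping for the proposed commands: OBJECT_TO_RESOURCE
# gives a driver object a virtio resource id ("export"), while
# RESOURCE_TO_OBJECT creates a new driver object from a resource
# ("import"), with a guest-chosen object id and metadata.

class ContextProcess:
    def __init__(self):
        self.objects = {}                      # object_id -> driver object
        self.obj_to_res = {}                   # object_id -> res_id

    def object_to_resource(self, object_id, res_id):
        """'Export': attach a virtio resource id to a vulkan object so the
        guest can get a gem bo for it (mmap, scanout, sharing)."""
        self.obj_to_res[object_id] = res_id

    def resource_to_object(self, res_id, object_id, metadata=None):
        """'Import': create a driver object from a resource. The guest
        picks the object id and passes metadata (e.g. image create info)."""
        self.objects[object_id] = ("imported", res_id, metadata)

ctx = ContextProcess()
# the same resource id may be imported more than once, yielding
# distinct objects under distinct object ids:
ctx.resource_to_object(5, object_id=1, metadata="buffer info")
ctx.resource_to_object(5, object_id=2, metadata="image info")
assert len(ctx.objects) == 2
assert ctx.objects[1] != ctx.objects[2]
```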
>
> >
> > cheers,
> > Gerd
> >
> _______________________________________________
> virglrenderer-devel mailing list
> virglrenderer-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/virglrenderer-devel
>