<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, May 24, 2024 at 11:33 AM Kasireddy, Vivek <<a href="mailto:vivek.kasireddy@intel.com" target="_blank">vivek.kasireddy@intel.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
Sorry, my previous reply got messed up as a result of HTML formatting. This is<br>
a plain text version of the same reply.<br>
<br>
> <br>
> <br>
> Having virtio-gpu import scanout buffers (via prime) from other<br>
> devices means that we'd be adding a head to headless GPUs assigned<br>
> to a Guest VM or additional heads to regular GPU devices that are<br>
> passthrough'd to the Guest. In these cases, the Guest compositor<br>
> can render into the scanout buffer using a primary GPU and has the<br>
> secondary GPU (virtio-gpu) import it for display purposes.<br>
> <br>
> The main advantage with this is that the imported scanout buffer can<br>
> either be displayed locally on the Host (e.g, using Qemu + GTK UI)<br>
> or encoded and streamed to a remote client (e.g, Qemu + Spice UI).<br>
> Note that since Qemu uses the udmabuf driver, there would be no copies<br>
> made of the scanout buffer as it is displayed. This should be possible<br>
> even when it might reside in device memory such as VRAM.<br>
> <br>
> The specific use-case that can be supported with this series is when<br>
> running Weston or other guest compositors with the "additional-devices"<br>
> feature (./weston --drm-device=card1 --additional-devices=card0).<br>
> More info about this feature can be found at:<br>
> <a href="https://gitlab.freedesktop.org/wayland/weston/-" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/wayland/weston/-</a><br>
> /merge_requests/736<br>
> <br>
> In the above scenario, card1 could be a dGPU or an iGPU and card0<br>
> would be virtio-gpu in KMS only mode. However, the case where this<br>
> patch series could be particularly useful is when card1 is a GPU VF<br>
> that needs to share its scanout buffer (in a zero-copy way) with the<br>
> GPU PF on the Host. It can also be useful when the scanout buffer<br>
> needs to be shared between any two GPU devices (assuming one of them<br>
> is assigned to a Guest VM) as long as they are P2P DMA compatible.<br>
> <br>
> <br>
> <br>
> Is passthrough iGPU-only or passthrough dGPU-only something you intend to<br>
> use?<br>
Our main use-case involves passing through a headless dGPU VF device and sharing the<br>
Guest compositor’s scanout buffer with the dGPU PF device on the Host. The same goal<br>
applies to sharing from a headless iGPU VF to the iGPU PF device.<br></blockquote><div><br></div><div>Just to check my understanding: the same physical {i, d}GPU is partitioned into the VF and PF, but the PF handles host-side display integration and rendering?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
However, a combination of iGPU and dGPU, where either of them can be passthrough’d<br>
to the Guest, is something I think this patch series can support as well.<br>
<br>
> <br>
> If it's a dGPU + iGPU setup, then the way other people seem to do it is a<br>
> "virtualized" iGPU (via virgl/gfxstream/take your pick) and pass-through the<br>
> dGPU.<br>
> <br>
> For example, AMD seems to use virgl to allocate and import into the dGPU.<br>
> <br>
> <a href="https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23896" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23896</a><br>
> <br>
> <a href="https://lore.kernel.org/all/20231221100016.4022353-1-" rel="noreferrer" target="_blank">https://lore.kernel.org/all/20231221100016.4022353-1-</a><br>
> <a href="http://julia.zhang@amd.com/" rel="noreferrer" target="_blank">julia.zhang@amd.com/</a><br>
> <br>
> <br>
> ChromeOS also uses that method (see <a href="http://crrev.com/c/3764931" rel="noreferrer" target="_blank">crrev.com/c/3764931</a>)<br>
> [cc: dGPU architect +Dominik Behr <<a href="mailto:dbehr@google.com" target="_blank">dbehr@google.com</a>>]<br>
> <br>
> So if iGPU + dGPU is the primary use case, you should be able to use these<br>
> methods as well. The model would be "virtualized iGPU" + passthrough dGPU,<br>
> not split SoCs.<br>
In our use-case, the goal is to have only one primary GPU (passthrough’d iGPU/dGPU)<br>
do all the rendering (using native DRI drivers) for clients/compositor and all the outputs<br>
and share the scanout buffers with the secondary GPU (virtio-gpu). Since this is mostly<br>
how Mutter (and also Weston) work in a multi-GPU setup, I am not sure if virgl is needed.<br></blockquote><div><br></div><div>I think you can probably use virgl with the PF and others probably will, but supporting multiple methods in Linux is not unheard of. </div><div><br></div><div>Does your patchset need the Mesa kmsro patchset to function correctly? </div><div><br></div><div><div><a href="https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9592" target="_blank">https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9592</a><br></div><div><br></div><div>If so, I would try to get that reviewed first to meet DRM requirements (<a href="https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements" target="_blank">https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements</a>). You might explicitly call out the design decision you're making: ("We can probably use virgl as the virtualized iGPU via PF, but that adds unnecessary complexity b/c ______"). </div></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">And, doing it this way means that no other userspace components need to be modified<br>
on both the Guest and the Host.<br>
<br>
> <br>
> <br>
> <br>
> As part of the import, the virtio-gpu driver shares the dma<br>
> addresses and lengths with Qemu, which then determines whether the<br>
> memory region they belong to is owned by a PCI device or whether it<br>
> is part of the Guest's system RAM. If it is the former, it identifies<br>
> the devid (or bdf) and bar and provides this info (along with offsets<br>
> and sizes) to the udmabuf driver. In the latter case, instead of the<br>
> devid and bar it provides the memfd. The udmabuf driver then<br>
> creates a dmabuf using this info that Qemu shares with Spice for<br>
> encode via Gstreamer.<br>
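</blockquote><div><br></div><div>To check that I've parsed that flow correctly (and for other reviewers skimming): here is a minimal userspace sketch of how I read it. Only the UDMABUF_CREATE_LIST_FOR_PCIDEV name comes from patch 7; the struct layout and field names below are my guesses from this description, not the actual uAPI in the series.</div><div><br></div><pre>/* Hypothetical sketch: guessed struct layout and field names, not the
 * real uAPI from patches 6-7; only the ioctl name appears in the series. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/udmabuf.h>   /* from the patched kernel */

struct udmabuf_create_pcidev {      /* guessed layout */
        char     bdf[16];           /* PCI device that owns the BAR */
        uint32_t bar;               /* BAR the backing pages live in */
        uint64_t offset;            /* offset into that BAR */
        uint64_t size;              /* length of the region */
};

int create_dmabuf_from_bar(void)
{
        struct udmabuf_create_pcidev req = {
                .bdf    = "0000:03:00.0",
                .bar    = 0,
                .offset = 0,
                .size   = 4096,
        };
        int devfd = open("/dev/udmabuf", O_RDWR);

        /* Returns a dmabuf fd that Qemu can hand to Spice/Gstreamer. */
        return ioctl(devfd, UDMABUF_CREATE_LIST_FOR_PCIDEV, &req);
}</pre><div><br></div><div>If that reading is off, spelling the uAPI out like this in the cover letter would help.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">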
> <br>
> Note that the virtio-gpu driver registers a move_notify() callback<br>
> to track location changes associated with the scanout buffer and<br>
> sends attach/detach backing cmds to Qemu when appropriate. And<br>
> synchronization (that is, ensuring that the Guest and Host are not<br>
> using the scanout buffer at the same time) is handled by pinning/<br>
> unpinning the dmabuf as part of plane update and using a fence<br>
> in the resource_flush cmd.<br>
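</blockquote><div><br></div><div>For context, the dynamic-importer contract this builds on looks roughly like the sketch below. dma_buf_dynamic_attach() and struct dma_buf_attach_ops are the real upstream dma-buf API; the virtgpu_* names and the callback body are placeholders of mine, not the code in this series.</div><div><br></div><pre>#include <linux/dma-buf.h>

/* Importer-side sketch (placeholder names, not the series' code):
 * registering a move_notify callback via the dynamic dma-buf API. */
static void virtgpu_move_notify(struct dma_buf_attachment *attach)
{
        /* Called with the dma-buf's reservation lock held whenever the
         * exporter moves the backing store; presumably this is where
         * the detach_backing cmd to Qemu gets queued. */
}

static const struct dma_buf_attach_ops virtgpu_attach_ops = {
        .allow_peer2peer = true,    /* assumption: needed to map VRAM */
        .move_notify     = virtgpu_move_notify,
};

static struct dma_buf_attachment *
virtgpu_attach(struct dma_buf *dmabuf, struct device *dev, void *obj)
{
        return dma_buf_dynamic_attach(dmabuf, dev, &virtgpu_attach_ops, obj);
}</pre><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">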
> <br>
> <br>
> I'm not sure how QEMU's display paths work, but with crosvm if you share<br>
> the guest-created dmabuf with the display, and the guest moves the backing<br>
> pages, the only recourse is to destroy the surface and show a black screen<br>
> to the user: not the best experience-wise.<br>
Since the Qemu GTK UI uses EGL, there is a blit done from the guest’s scanout buffer onto an<br>
EGL-backed buffer on the Host. So, this problem would not happen as of now.</blockquote><div><br></div><div>The guest kernel doesn't know you're using the QEMU GTK UI + EGL host-side. </div><div><br></div><div>If somebody wants to use the virtio-gpu import mechanism with lower-level Wayland-based display integration, then the problem would occur. </div><div><br></div><div>Perhaps handle that case just to be safe, unless you have performance concerns. </div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">> <br>
> Only amdgpu calls dma_buf_move_notify(..), and you're probably testing on<br>
> Intel only, so you may not be hitting that code path anyways.<br>
I have tested with the Xe driver in the Guest, which also calls dma_buf_move_notify(). But<br>
note that for dGPUs, both Xe and amdgpu migrate the scanout buffer from vram to system<br>
memory as part of export, because virtio-gpu is not P2P compatible. However, I am hoping<br>
to relax this (the p2p check against virtio-gpu) in the Xe driver if it detects that it is running in<br>
VF mode once the following patch series is merged:<br>
<a href="https://lore.kernel.org/dri-devel/20240422063602.3690124-1-vivek.kasireddy@intel.com/" rel="noreferrer" target="_blank">https://lore.kernel.org/dri-devel/20240422063602.3690124-1-vivek.kasireddy@intel.com/</a><br>
<br>
> I forgot the<br>
> exact reason, but apparently udmabuf may not work with amdgpu displays<br>
> and it seems the virtualized iGPU + dGPU is the way to go for amdgpu<br>
> anyways. <br>
I would really like to know why udmabuf would not work with amdgpu?<br></blockquote><div><br></div><div>It's just a rumor I heard, but the idea is udmabuf would be imported into AMDGPU_GEM_DOMAIN_CPU only.</div><br><div><a href="https://cgit.freedesktop.org/drm/drm-misc/tree/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c#n333">https://cgit.freedesktop.org/drm/drm-misc/tree/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c#n333</a></div><div><br></div><div>"AMDGPU_GEM_DOMAIN_CPU: System memory that is not GPU accessible. Memory in this pool could be swapped out to disk if there is pressure."</div><div><br></div><div><a href="https://dri.freedesktop.org/docs/drm/gpu/amdgpu.html">https://dri.freedesktop.org/docs/drm/gpu/amdgpu.html</a><br></div><div><br></div><div>Perhaps that limitation is artificial and unnecessary, and it may indeed work. I don't think anybody has tried...</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> So I recommend just pinning the buffer for the lifetime of the<br>
> import for simplicity and correctness.<br>
Yeah, in this patch series, the dmabuf is indeed pinned, but only for a short duration in the Guest –<br>
just until the Host is done using it (blit or encode).<br>
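</blockquote><div><br></div><div>For reference, the pin/unpin primitives involved are the upstream ones sketched below; the placement around the plane update is my paraphrase of your description, not the series' exact code. Both calls expect the dma-buf's reservation lock to be held.</div><div><br></div><pre>#include <linux/dma-buf.h>

/* Sketch of the short-lived pin (placeholder functions; dma_buf_pin()
 * and dma_buf_unpin() are the real upstream API). */
static int example_plane_prepare(struct dma_buf_attachment *attach)
{
        int ret;

        dma_resv_lock(attach->dmabuf->resv, NULL);
        ret = dma_buf_pin(attach);   /* keep the backing store in place */
        dma_resv_unlock(attach->dmabuf->resv);
        return ret;
}

static void example_plane_cleanup(struct dma_buf_attachment *attach)
{
        /* After the Host is done (blit or encode), per the fence in
         * resource_flush. */
        dma_resv_lock(attach->dmabuf->resv, NULL);
        dma_buf_unpin(attach);       /* exporter may move the BO again */
        dma_resv_unlock(attach->dmabuf->resv);
}</pre><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">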
<br>
Thanks,<br>
Vivek<br>
<br>
> <br>
> <br>
> This series is available at:<br>
> <a href="https://gitlab.freedesktop.org/Vivek/drm-tip/-" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/Vivek/drm-tip/-</a><br>
> /commits/virtgpu_import_rfc<br>
> <br>
> along with additional patches for Qemu and Spice here:<br>
> <a href="https://gitlab.freedesktop.org/Vivek/qemu/-" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/Vivek/qemu/-</a><br>
> /commits/virtgpu_dmabuf_pcidev<br>
> <a href="https://gitlab.freedesktop.org/Vivek/spice/-" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/Vivek/spice/-</a><br>
> /commits/encode_dmabuf_v4<br>
> <br>
> Patchset overview:<br>
> <br>
> Patch 1: Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd<br>
> Patch 2-3: Helpers to initialize, import, free imported object<br>
> Patch 4-5: Import and use buffers from other devices for scanout<br>
> Patch 6-7: Have udmabuf driver create dmabuf from PCI bars for P2P DMA<br>
> <br>
> This series is tested using the following method:<br>
> - Run Qemu with the following relevant options:<br>
> qemu-system-x86_64 -m 4096m ....<br>
> -device vfio-pci,host=0000:03:00.0<br>
> -device virtio-vga,max_outputs=1,blob=true,xres=1920,yres=1080<br>
> -spice port=3001,gl=on,disable-ticketing=on,preferred-codec=gstreamer:h264<br>
> -object memory-backend-memfd,id=mem1,size=4096M<br>
> -machine memory-backend=mem1 ...<br>
> - Run upstream Weston with the following options in the Guest VM:<br>
> ./weston --drm-device=card1 --additional-devices=card0<br>
> <br>
> where card1 is a DG2 dGPU (passthrough'd and using xe driver in Guest VM),<br>
> card0 is virtio-gpu and the Host is using a RPL iGPU.<br>
> <br>
> Cc: Gerd Hoffmann <<a href="mailto:kraxel@redhat.com" target="_blank">kraxel@redhat.com</a>><br>
> Cc: Dongwon Kim <<a href="mailto:dongwon.kim@intel.com" target="_blank">dongwon.kim@intel.com</a>><br>
> Cc: Daniel Vetter <<a href="mailto:daniel.vetter@ffwll.ch" target="_blank">daniel.vetter@ffwll.ch</a>><br>
> Cc: Christian Koenig <<a href="mailto:christian.koenig@amd.com" target="_blank">christian.koenig@amd.com</a>><br>
> Cc: Dmitry Osipenko <<a href="mailto:dmitry.osipenko@collabora.com" target="_blank">dmitry.osipenko@collabora.com</a>><br>
> Cc: Rob Clark <<a href="mailto:robdclark@chromium.org" target="_blank">robdclark@chromium.org</a>><br>
> Cc: Thomas Hellström <<a href="mailto:thomas.hellstrom@linux.intel.com" target="_blank">thomas.hellstrom@linux.intel.com</a>><br>
> Cc: Oded Gabbay <<a href="mailto:ogabbay@kernel.org" target="_blank">ogabbay@kernel.org</a>><br>
> Cc: Michal Wajdeczko <<a href="mailto:michal.wajdeczko@intel.com" target="_blank">michal.wajdeczko@intel.com</a>><br>
> Cc: Michael Tretter <<a href="mailto:m.tretter@pengutronix.de" target="_blank">m.tretter@pengutronix.de</a>><br>
> <br>
> Vivek Kasireddy (7):<br>
> drm/virtio: Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd<br>
> drm/virtio: Add a helper to map and note the dma addrs and lengths<br>
> drm/virtio: Add helpers to initialize and free the imported object<br>
> drm/virtio: Import prime buffers from other devices as guest blobs<br>
> drm/virtio: Ensure that bo's backing store is valid while updating plane<br>
> udmabuf/uapi: Add new ioctl to create a dmabuf from PCI bar regions<br>
> udmabuf: Implement UDMABUF_CREATE_LIST_FOR_PCIDEV ioctl<br>
> <br>
> drivers/dma-buf/udmabuf.c | 122 ++++++++++++++++--<br>
> drivers/gpu/drm/virtio/virtgpu_drv.h | 8 ++<br>
> drivers/gpu/drm/virtio/virtgpu_plane.c | 56 ++++++++-<br>
> drivers/gpu/drm/virtio/virtgpu_prime.c | 167 ++++++++++++++++++++++++-<br>
> drivers/gpu/drm/virtio/virtgpu_vq.c | 15 +++<br>
> include/uapi/linux/udmabuf.h | 11 +-<br>
> 6 files changed, 368 insertions(+), 11 deletions(-)<br>
> <br>
> --<br>
> 2.43.0<br>
> <br>
> <br>
<br>
</blockquote></div></div>