<div dir="ltr"><div dir="ltr"><br></div><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, May 30, 2024 at 12:21 AM Kasireddy, Vivek <<a href="mailto:vivek.kasireddy@intel.com" target="_blank">vivek.kasireddy@intel.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi Gurchetan,<br>
<br>
> <br>
> On Fri, May 24, 2024 at 11:33 AM Kasireddy, Vivek<br>
> <<a href="mailto:vivek.kasireddy@intel.com" target="_blank">vivek.kasireddy@intel.com</a>> wrote:<br>
> <br>
> <br>
> Hi,<br>
> <br>
> Sorry, my previous reply got messed up as a result of HTML<br>
> formatting. This is a plain text version of the same reply.<br>
> <br>
> ><br>
> ><br>
> > Having virtio-gpu import scanout buffers (via prime) from other<br>
> > devices means that we'd be adding a head to headless GPUs assigned<br>
> > to a Guest VM, or additional heads to regular GPU devices that are<br>
> > passthrough'd to the Guest. In these cases, the Guest compositor<br>
> > can render into the scanout buffer using a primary GPU and has the<br>
> > secondary GPU (virtio-gpu) import it for display purposes.<br>
> ><br>
> > The main advantage with this is that the imported scanout buffer can<br>
> > either be displayed locally on the Host (e.g., using Qemu + GTK UI)<br>
> > or encoded and streamed to a remote client (e.g., Qemu + Spice UI).<br>
> > Note that since Qemu uses the udmabuf driver, there would be no<br>
> > copies made of the scanout buffer as it is displayed. This should be<br>
> > possible even when it might reside in device memory such as VRAM.<br>
> ><br>
> > The specific use-case that can be supported with this series is when<br>
> > running Weston or other guest compositors with the "additional-devices"<br>
> > feature (./weston --drm-device=card1 --additional-devices=card0).<br>
> > More info about this feature can be found at:<br>
> > <a href="https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/736" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/wayland/weston/-/merge_requests/736</a><br>
> ><br>
> > In the above scenario, card1 could be a dGPU or an iGPU and card0<br>
> > would be virtio-gpu in KMS only mode. However, the case where this<br>
> > patch series could be particularly useful is when card1 is a GPU VF<br>
> > that needs to share its scanout buffer (in a zero-copy way) with the<br>
> > GPU PF on the Host. Or, it can also be useful when the scanout buffer<br>
> > needs to be shared between any two GPU devices (assuming one of them<br>
> > is assigned to a Guest VM) as long as they are P2P DMA compatible.<br>
> ><br>
> ><br>
> ><br>
> > Is passthrough iGPU-only or passthrough dGPU-only something you<br>
> > intend to use?<br>
> Our main use-case involves passing through a headless dGPU VF device<br>
> and sharing the Guest compositor’s scanout buffer with the dGPU PF<br>
> device on the Host. Same goal for headless iGPU VF to iGPU PF device<br>
> as well.<br>
> <br>
> <br>
> <br>
> Just to check my understanding: the same physical {i, d}GPU is partitioned<br>
> into the VF and PF, but the PF handles host-side display integration and<br>
> rendering?<br>
Yes, that is mostly right. In a nutshell, the same physical GPU is partitioned<br>
into one PF device and multiple VF devices. Only the PF device has access to<br>
the display hardware and can do KMS (on the Host). The VF devices are<br>
headless with no access to display hardware (cannot do KMS but can do render/<br>
encode/decode) and are generally assigned (or passthrough'd) to the Guest VMs.<br>
Some more details about this model can be found here:<br>
<a href="https://lore.kernel.org/dri-devel/20231110182231.1730-1-michal.wajdeczko@intel.com/" rel="noreferrer" target="_blank">https://lore.kernel.org/dri-devel/20231110182231.1730-1-michal.wajdeczko@intel.com/</a><br>
<br>
> <br>
> <br>
> However, using a combination of iGPU and dGPU where either of them<br>
> can be passthrough’d to the Guest is something I think can be<br>
> supported with this patch series as well.<br>
> <br>
> ><br>
> > If it's a dGPU + iGPU setup, then the way other people seem to do it<br>
> > is a "virtualized" iGPU (via virgl/gfxstream/take your pick) and<br>
> > pass-through the dGPU.<br>
> ><br>
> > For example, AMD seems to use virgl to allocate and import into the dGPU.<br>
> ><br>
> > <a href="https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23896" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/23896</a><br>
> ><br>
> > <a href="https://lore.kernel.org/all/20231221100016.4022353-1-julia.zhang@amd.com/" rel="noreferrer" target="_blank">https://lore.kernel.org/all/20231221100016.4022353-1-julia.zhang@amd.com/</a><br>
> ><br>
> > ChromeOS also uses that method (see <a href="http://crrev.com/c/3764931" rel="noreferrer" target="_blank">crrev.com/c/3764931</a>)<br>
> > [cc: dGPU architect +Dominik Behr <<a href="mailto:dbehr@google.com" target="_blank">dbehr@google.com</a>>]<br>
> ><br>
> > So if iGPU + dGPU is the primary use case, you should be able to use<br>
> > these methods as well. The model would be "virtualized iGPU" +<br>
> > passthrough dGPU, not split SoCs.<br>
> In our use-case, the goal is to have only one primary GPU<br>
> (passthrough’d iGPU/dGPU) do all the rendering (using native DRI<br>
> drivers) for clients/compositor and all the outputs, and share the<br>
> scanout buffers with the secondary GPU (virtio-gpu). Since this is<br>
> mostly how Mutter (and also Weston) work in a multi-GPU setup, I am<br>
> not sure if virgl is needed.<br>
> <br>
> <br>
> <br>
> I think you can probably use virgl with the PF and others probably will, but<br>
> supporting multiple methods in Linux is not unheard of.<br>
In our case, we have an alternative SR-IOV based GPU virtualization/partitioning<br>
model (as described above) where a Guest VM will have access to a hardware-accelerated<br>
GPU VF device for its rendering/encode/decode needs. So, in this situation, using<br>
virgl will become redundant and unnecessary.<br>
<br>
And, in this model, we intend to use virtio-gpu for KMS in the Guest VM (since the<br>
GPU VF device cannot do KMS) with the addition of this patchset. However, note that,<br>
since not all GPU SKUs/versions have the SRIOV capability, we plan on using virgl in<br>
those cases where it becomes necessary.<br>
<br>
> <br>
> Does your patchset need the Mesa kmsro patchset to function correctly?<br>
> <br>
> <a href="https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9592" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9592</a><br>
This patchset is an alternative proposal. So, KMSRO would not be needed.<br>
AFAICS, the above MR is mainly stalled because KMSRO uses dumb buffers<br>
which are not suitable for hardware-based rendering in all cases. And, KMSRO<br>
is not really helpful performance-wise with dGPUs, as it forces most buffers to<br>
be allocated from system memory.<br></blockquote><div><br></div><div>Previously, when exploring VDMABUF, the recommendation was: "the /important/ thing is that the driver which exports the dma-buf (and thus handles the mappings) must be aware of the virtualization so it can properly coordinate things with the host side."</div><div><br></div><div><a href="https://patchwork.kernel.org/project/linux-media/patch/20210203073517.1908882-3-vivek.kasireddy@intel.com/#23975915" target="_blank">https://patchwork.kernel.org/project/linux-media/patch/20210203073517.1908882-3-vivek.kasireddy@intel.com/#23975915</a></div><div><br></div><div>That's why the KMSRO approach was tried (virtio-gpu dumb allocations, not i915). But as you point out, nobody uses dumb buffers for hardware-based rendering.</div><div><br></div><div>So, if you are going with i915 allocates + virtio-gpu imports, it should be fine provided you have fixed all the issues that existed with i915 allocates + VDMABUF imports. It seems your fixes add complexity in VFIO and other places, but having virtio-gpu 3D + virgl allocate adds complexity to Mesa-based allocation paths instead (i915, amdgpu, etc. would all have to open the virtio-gpu render node, pick a context type, and so on).</div><div><br></div><div>I would just do virtio-gpu allocates, since it only requires user-space patches and no extra ioctls, but that reflects my preferences. If the mm/VFIO/QEMU people are fine with your approach, I see nothing wrong with merging it.</div><div><br></div><div>The one caveat is that if someone uses a non-GTK/EGL host path, we'll have to pin memory for the lifetime of the import, since knowing that RESOURCE_FLUSH is done is not sufficient. But if you're the only one using it, that shouldn't be an issue right now.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
> <br>
> <br>
> If so, I would try to get that reviewed first to meet DRM requirements<br>
> (<a href="https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements" rel="noreferrer" target="_blank">https://dri.freedesktop.org/docs/drm/gpu/drm-uapi.html#open-source-userspace-requirements</a>).<br>
> You might explicitly call out the design decision you're making:<br>
> ("We can probably use virgl as the virtualized iGPU via PF, but that<br>
> adds unnecessary complexity b/c ______").<br>
As I described above, what we have is an alternative GPU virtualization scheme<br>
where virgl is not necessary if SRIOV capability is available. And, as mentioned<br>
earlier, I have tested this series with Mutter/Gnome-shell (upstream master)<br>
(plus one small patch: <a href="https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3745" rel="noreferrer" target="_blank">https://gitlab.gnome.org/GNOME/mutter/-/merge_requests/3745</a>)<br>
and with no other changes to any other userspace components on the Host and Guest.<br>
<br>
> <br>
> <br>
> And, doing it this way means that no other userspace components need<br>
> to be modified on either the Guest or the Host.<br>
> <br>
> ><br>
> ><br>
> ><br>
> > As part of the import, the virtio-gpu driver shares the dma<br>
> > addresses and lengths with Qemu, which then determines whether the<br>
> > memory region they belong to is owned by a PCI device or whether it<br>
> > is part of the Guest's system ram. If it is the former, it identifies<br>
> > the devid (or bdf) and bar and provides this info (along with offsets<br>
> > and sizes) to the udmabuf driver. In the latter case, instead of the<br>
> > devid and bar it provides the memfd. The udmabuf driver then<br>
> > creates a dmabuf using this info that Qemu shares with Spice for<br>
> > encode via Gstreamer.<br>
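For reference, the system ram case relies on the pre-existing memfd-based udmabuf<br>
uapi; a minimal userspace sketch of that path (illustrative only, error handling<br>
trimmed, and not the new PCI-bar ioctl added by this series) would look roughly like:<br>
<br>
#define _GNU_SOURCE<br>
#include <fcntl.h><br>
#include <sys/ioctl.h><br>
#include <sys/mman.h><br>
#include <unistd.h><br>
#include <linux/udmabuf.h><br>
<br>
int create_dmabuf_from_memfd(size_t size)<br>
{<br>
        int devfd = open("/dev/udmabuf", O_RDWR);<br>
        int memfd = memfd_create("guest-ram", MFD_ALLOW_SEALING);<br>
<br>
        ftruncate(memfd, size);<br>
        /* udmabuf requires the memfd to be sealed against shrinking */<br>
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);<br>
<br>
        struct udmabuf_create create = {<br>
                .memfd  = memfd,<br>
                .flags  = UDMABUF_FLAGS_CLOEXEC,<br>
                .offset = 0,      /* must be page aligned */<br>
                .size   = size,   /* must be page aligned */<br>
        };<br>
<br>
        /* returns a dmabuf fd that can be handed to EGL/Spice/etc. */<br>
        return ioctl(devfd, UDMABUF_CREATE, &create);<br>
}<br>
<br>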
> ><br>
> > Note that the virtio-gpu driver registers a move_notify() callback<br>
> > to track location changes associated with the scanout buffer and<br>
> > sends attach/detach backing cmds to Qemu when appropriate. And,<br>
> > synchronization (that is, ensuring that Guest and Host are not<br>
> > using the scanout buffer at the same time) is ensured by pinning/<br>
> > unpinning the dmabuf as part of plane update and using a fence<br>
> > in the resource_flush cmd.<br>
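To make this a bit more concrete, the importer side is built on the dynamic dma-buf<br>
attach API; a rough sketch is below (the virtio-gpu helper and field names here are<br>
illustrative, not necessarily the exact ones used in the patches):<br>
<br>
#include <linux/dma-buf.h><br>
#include <drm/drm_device.h><br>
<br>
static void virtgpu_dmabuf_move_notify(struct dma_buf_attachment *attach)<br>
{<br>
        struct virtio_gpu_object *bo = attach->importer_priv;<br>
<br>
        /* The exporter is about to move the backing store: ask the host<br>
         * to drop its view of the old pages (DETACH_BACKING); the buffer<br>
         * gets re-mapped and re-attached on the next plane update. */<br>
        virtio_gpu_detach_backing(bo);        /* illustrative helper */<br>
}<br>
<br>
static const struct dma_buf_attach_ops virtgpu_dmabuf_attach_ops = {<br>
        .allow_peer2peer = true,<br>
        .move_notify     = virtgpu_dmabuf_move_notify,<br>
};<br>
<br>
static int virtgpu_attach_imported(struct drm_device *drm_dev,<br>
                                   struct dma_buf *dmabuf,<br>
                                   struct virtio_gpu_object *bo)<br>
{<br>
        struct dma_buf_attachment *attach;<br>
<br>
        /* Attach dynamically so move_notify() gets called on migration */<br>
        attach = dma_buf_dynamic_attach(dmabuf, drm_dev->dev,<br>
                                        &virtgpu_dmabuf_attach_ops, bo);<br>
        if (IS_ERR(attach))<br>
                return PTR_ERR(attach);<br>
<br>
        bo->attach = attach;                  /* illustrative field */<br>
        return 0;<br>
}<br>
<br>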
> ><br>
> ><br>
> > I'm not sure how QEMU's display paths work, but with crosvm if you<br>
> > share the guest-created dmabuf with the display, and the guest moves<br>
> > the backing pages, the only recourse is to destroy the surface and<br>
> > show a black screen to the user: not the best experience-wise.<br>
> Since the Qemu GTK UI uses EGL, there is a blit done from the guest’s<br>
> scanout buffer onto an EGL-backed buffer on the Host. So, this problem<br>
> would not happen as of now.<br>
> <br>
> <br>
> <br>
> The guest kernel doesn't know you're using the QEMU GTK UI + EGL<br>
> host-side.<br>
So, with blob=true, there is a dma fence in resource_flush() that gets associated<br>
with the Blit/Encode on the Host. This guest dma fence should only be signalled<br>
once the Host is done using the guest's scanout buffer.<br>
<br>
> <br>
> If somebody wants to use the virtio-gpu import mechanism with lower-level<br>
> Wayland-based display integration, then the problem would occur.<br>
Right, one way to address this issue is to prevent the Guest compositor from<br>
reusing the scanout buffer (until the Host is done) and force it to pick a new<br>
buffer (since Mesa GBM allows 4 backbuffers).<br>
I have tried this experiment with KMSRO and Wayland-based Qemu UI previously<br>
on iGPUs (and Weston) and noticed that the Guest FPS was getting halved:<br>
<a href="https://lore.kernel.org/qemu-devel/20210913222036.3193732-1-vivek.kasireddy@intel.com/" rel="noreferrer" target="_blank">https://lore.kernel.org/qemu-devel/20210913222036.3193732-1-vivek.kasireddy@intel.com/</a><br>
<br>
and also discussed and proposed a solution which did not go anywhere:<br>
<a href="https://lore.kernel.org/dri-devel/20210913233529.3194401-1-vivek.kasireddy@intel.com/" rel="noreferrer" target="_blank">https://lore.kernel.org/dri-devel/20210913233529.3194401-1-vivek.kasireddy@intel.com/</a><br>
<br>
> <br>
> Perhaps, do that just to be safe unless you have performance concerns.<br>
If you meant pinning the imported scanout buffer in the Guest, then yes,<br>
that is something I am already doing in this patchset.<br>
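Concretely, the pinning in the plane-update path boils down to the dma-buf<br>
pin/unpin API, something along these lines (simplified sketch, names illustrative):<br>
<br>
#include <linux/dma-buf.h><br>
#include <linux/dma-resv.h><br>
<br>
/* Called while preparing the plane fb: keep the exporter from moving<br>
 * the imported bo while the host may still be scanning out of it. */<br>
static int virtgpu_pin_imported_bo(struct dma_buf_attachment *attach)<br>
{<br>
        int ret;<br>
<br>
        dma_resv_lock(attach->dmabuf->resv, NULL);<br>
        ret = dma_buf_pin(attach);<br>
        dma_resv_unlock(attach->dmabuf->resv);<br>
        return ret;<br>
}<br>
<br>
/* Called once the host signals it is done (e.g. via the resource_flush<br>
 * fence): allow the exporter to migrate the buffer again. */<br>
static void virtgpu_unpin_imported_bo(struct dma_buf_attachment *attach)<br>
{<br>
        dma_resv_lock(attach->dmabuf->resv, NULL);<br>
        dma_buf_unpin(attach);<br>
        dma_resv_unlock(attach->dmabuf->resv);<br>
}<br>
<br>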
<br>
> <br>
> <br>
> ><br>
> > Only amdgpu calls dma_buf_move_notify(..), and you're probably<br>
> > testing on Intel only, so you may not be hitting that code path anyways.<br>
> I have tested with the Xe driver in the Guest, which also calls<br>
> dma_buf_move_notify(). But note that for dGPUs, both Xe and amdgpu<br>
> migrate the scanout buffer from vram to system memory as part of export,<br>
> because virtio-gpu is not P2P compatible. However, I am hoping to relax<br>
> this (p2p check against virtio-gpu) in the Xe driver if it detects that<br>
> it is running in VF mode, once the following patch series is merged:<br>
> <a href="https://lore.kernel.org/dri-devel/20240422063602.3690124-1-vivek.kasireddy@intel.com/" rel="noreferrer" target="_blank">https://lore.kernel.org/dri-devel/20240422063602.3690124-1-vivek.kasireddy@intel.com/</a><br>
> <br>
> > I forgot the exact reason, but apparently udmabuf may not work with<br>
> > amdgpu displays and it seems the virtualized iGPU + dGPU is the way<br>
> > to go for amdgpu anyways.<br>
> I would really like to know why udmabuf would not work with amdgpu?<br>
> <br>
> <br>
> <br>
> It's just a rumor I heard, but the idea is udmabuf would be imported into<br>
> AMDGPU_GEM_DOMAIN_CPU only.<br>
> <br>
> <a href="https://cgit.freedesktop.org/drm/drm-" rel="noreferrer" target="_blank">https://cgit.freedesktop.org/drm/drm-</a><br>
> misc/tree/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c#n333<br>
> <br>
> "AMDGPU_GEM_DOMAIN_CPU: System memory that is not GPU accessible.<br>
> Memory in this pool could be swapped out to disk if there is pressure."<br>
> <br>
> <a href="https://dri.freedesktop.org/docs/drm/gpu/amdgpu.html" rel="noreferrer" target="_blank">https://dri.freedesktop.org/docs/drm/gpu/amdgpu.html</a><br>
> <br>
> <br>
> Perhaps that limitation is artificial and unnecessary, and it may indeed work.<br>
> I don't think anybody has tried...<br>
Since the udmabuf driver properly pins the backing pages (from the memfd) for<br>
DMA, I don't see any reason why amdgpu would not be able to import.<br>
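For what it's worth, at the userspace level such an import is just the usual PRIME<br>
fd-to-handle step; whether the importing driver then places and uses the buffer<br>
sensibly is the open question. A purely illustrative sketch:<br>
<br>
#include <fcntl.h><br>
#include <stdint.h><br>
#include <stdio.h><br>
#include <xf86drm.h><br>
<br>
/* Import an existing dmabuf fd (e.g. one created via /dev/udmabuf)<br>
 * into a DRM device node such as amdgpu's /dev/dri/renderD128. */<br>
int import_dmabuf(const char *node, int dmabuf_fd)<br>
{<br>
        int drm_fd = open(node, O_RDWR | O_CLOEXEC);<br>
        uint32_t handle = 0;<br>
<br>
        if (drm_fd < 0 || drmPrimeFDToHandle(drm_fd, dmabuf_fd, &handle)) {<br>
                perror("import failed");<br>
                return -1;<br>
        }<br>
        printf("dmabuf imported as GEM handle %u\n", handle);<br>
        return drm_fd;<br>
}<br>
<br>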
<br>
Thanks,<br>
Vivek<br>
<br>
> <br>
> <br>
> <br>
> <br>
> > So I recommend just pinning the buffer for the lifetime of the<br>
> > import for simplicity and correctness.<br>
> Yeah, in this patch series, the dmabuf is indeed pinned, but only for a<br>
> short duration in the Guest – just until the Host is done using it<br>
> (blit or encode).<br>
> <br>
> Thanks,<br>
> Vivek<br>
> <br>
> ><br>
> ><br>
> > This series is available at:<br>
> > <a href="https://gitlab.freedesktop.org/Vivek/drm-tip/-/commits/virtgpu_import_rfc" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/Vivek/drm-tip/-/commits/virtgpu_import_rfc</a><br>
> ><br>
> > along with additional patches for Qemu and Spice here:<br>
> > <a href="https://gitlab.freedesktop.org/Vivek/qemu/-/commits/virtgpu_dmabuf_pcidev" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/Vivek/qemu/-/commits/virtgpu_dmabuf_pcidev</a><br>
> > <a href="https://gitlab.freedesktop.org/Vivek/spice/-/commits/encode_dmabuf_v4" rel="noreferrer" target="_blank">https://gitlab.freedesktop.org/Vivek/spice/-/commits/encode_dmabuf_v4</a><br>
> ><br>
> > Patchset overview:<br>
> ><br>
> > Patch 1:   Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd<br>
> > Patch 2-3: Helpers to initialize, import, free imported object<br>
> > Patch 4-5: Import and use buffers from other devices for scanout<br>
> > Patch 6-7: Have udmabuf driver create dmabuf from PCI bars for P2P DMA<br>
> ><br>
> > This series is tested using the following method:<br>
> > - Run Qemu with the following relevant options:<br>
> >   qemu-system-x86_64 -m 4096m ....<br>
> >   -device vfio-pci,host=0000:03:00.0<br>
> >   -device virtio-vga,max_outputs=1,blob=true,xres=1920,yres=1080<br>
> >   -spice port=3001,gl=on,disable-ticketing=on,preferred-codec=gstreamer:h264<br>
> >   -object memory-backend-memfd,id=mem1,size=4096M<br>
> >   -machine memory-backend=mem1 ...<br>
> > - Run upstream Weston with the following options in the Guest VM:<br>
> >   ./weston --drm-device=card1 --additional-devices=card0<br>
> ><br>
> > where card1 is a DG2 dGPU (passthrough'd and using the xe driver in<br>
> > the Guest VM), card0 is virtio-gpu, and the Host is using a RPL iGPU.<br>
> ><br>
> > Cc: Gerd Hoffmann <<a href="mailto:kraxel@redhat.com" target="_blank">kraxel@redhat.com</a>><br>
> > Cc: Dongwon Kim <<a href="mailto:dongwon.kim@intel.com" target="_blank">dongwon.kim@intel.com</a>><br>
> > Cc: Daniel Vetter <<a href="mailto:daniel.vetter@ffwll.ch" target="_blank">daniel.vetter@ffwll.ch</a>><br>
> > Cc: Christian Koenig <<a href="mailto:christian.koenig@amd.com" target="_blank">christian.koenig@amd.com</a>><br>
> > Cc: Dmitry Osipenko <<a href="mailto:dmitry.osipenko@collabora.com" target="_blank">dmitry.osipenko@collabora.com</a>><br>
> > Cc: Rob Clark <<a href="mailto:robdclark@chromium.org" target="_blank">robdclark@chromium.org</a>><br>
> > Cc: Thomas Hellström <<a href="mailto:thomas.hellstrom@linux.intel.com" target="_blank">thomas.hellstrom@linux.intel.com</a>><br>
> > Cc: Oded Gabbay <<a href="mailto:ogabbay@kernel.org" target="_blank">ogabbay@kernel.org</a>><br>
> > Cc: Michal Wajdeczko <<a href="mailto:michal.wajdeczko@intel.com" target="_blank">michal.wajdeczko@intel.com</a>><br>
> > Cc: Michael Tretter <<a href="mailto:m.tretter@pengutronix.de" target="_blank">m.tretter@pengutronix.de</a>><br>
> ><br>
> > Vivek Kasireddy (7):<br>
> >   drm/virtio: Implement VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING cmd<br>
> >   drm/virtio: Add a helper to map and note the dma addrs and lengths<br>
> >   drm/virtio: Add helpers to initialize and free the imported object<br>
> >   drm/virtio: Import prime buffers from other devices as guest blobs<br>
> >   drm/virtio: Ensure that bo's backing store is valid while updating plane<br>
> >   udmabuf/uapi: Add new ioctl to create a dmabuf from PCI bar regions<br>
> >   udmabuf: Implement UDMABUF_CREATE_LIST_FOR_PCIDEV ioctl<br>
> ><br>
> >  drivers/dma-buf/udmabuf.c              | 122 ++++++++++++++++--<br>
> >  drivers/gpu/drm/virtio/virtgpu_drv.h   |   8 ++<br>
> >  drivers/gpu/drm/virtio/virtgpu_plane.c |  56 ++++++++-<br>
> >  drivers/gpu/drm/virtio/virtgpu_prime.c | 167 ++++++++++++++++++++++++-<br>
> >  drivers/gpu/drm/virtio/virtgpu_vq.c    |  15 +++<br>
> >  include/uapi/linux/udmabuf.h           |  11 +-<br>
> >  6 files changed, 368 insertions(+), 11 deletions(-)<br>
> ><br>
> > --<br>
> > 2.43.0<br>
> ><br>
> ><br>
> <br>
> <br>
<br>
</blockquote></div></div>
</div>