[PATCH v4] drm/virtio: conditionally allocate virtio_gpu_fence
Gurchetan Singh
gurchetansingh at chromium.org
Fri Jul 7 21:31:24 UTC 2023
We don't want to create a fence for every command submission. It's
only necessary when userspace provides a waitable token for submission.
This could be:
1) bo_handles, to be used with VIRTGPU_WAIT
2) out_fence_fd, to be used with dma_fence apis
3) a ring_idx provided with VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK
+ DRM event API
4) syncobjs in the future
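For reference, here is a minimal userspace sketch (not part of this patch) of
case 2, using the existing execbuffer UAPI. The helper name
submit_with_out_fence(), cmd_buf and cmd_size are illustrative placeholders,
and the include paths may differ per libdrm install:

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>              /* drmIoctl() */
#include <libdrm/virtgpu_drm.h>   /* include path may vary */

/* Returns a sync_file fd (>= 0) on success, -errno on failure. */
static int submit_with_out_fence(int drm_fd, void *cmd_buf, uint32_t cmd_size)
{
	struct drm_virtgpu_execbuffer exbuf = {
		.flags = VIRTGPU_EXECBUF_FENCE_FD_OUT,  /* ask for a waitable token */
		.size = cmd_size,
		.command = (uintptr_t)cmd_buf,
		.fence_fd = -1,
	};

	if (drmIoctl(drm_fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &exbuf))
		return -errno;

	/* exbuf.fence_fd is now a sync_file usable with the dma_fence APIs,
	 * so the kernel had to allocate a virtio_gpu_fence for this submit. */
	return exbuf.fence_fd;
}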
The other use case is just submitting a command to the host and expecting
no response. For example, gfxstream has GFXSTREAM_CONTEXT_PING, which
just wakes up the host-side worker threads. There's also
CROSS_DOMAIN_CMD_SEND, which just sends data to the Wayland server.
Skipping fence allocation in that case removes the need to signal the
automatically created virtio_gpu_fence.
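A corresponding illustrative sketch of that fire-and-forget path (same
headers and placeholder names as the sketch above), where none of the
waitable tokens is requested:

static int submit_fire_and_forget(int drm_fd, void *cmd_buf, uint32_t cmd_size)
{
	struct drm_virtgpu_execbuffer exbuf = {
		.flags = 0,             /* no out-fence fd, no ring_idx flag */
		.size = cmd_size,
		.command = (uintptr_t)cmd_buf,
		.num_bo_handles = 0,    /* nothing to pass to VIRTGPU_WAIT */
		.fence_fd = -1,
	};

	/* With this patch, no virtio_gpu_fence is allocated for this path. */
	return drmIoctl(drm_fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &exbuf) ? -errno : 0;
}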
In addition, VIRTGPU_EXECBUF_RING_IDX is checked when creating a
DRM event object. VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK is
already defined in terms of per-context rings. It was theoretically
possible to create a DRM event on the global timeline (ring_idx == 0),
if the context enabled DRM event polling. However, that wouldn't work,
and userspace (Sommelier) doesn't rely on it. Explicitly disallow it for
clarity.
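For completeness, an illustrative sketch of the DRM-event path described
above: the ring is opted into polling at context init, and the submission
must name it via VIRTGPU_EXECBUF_RING_IDX. Helper names are made up, the
headers are the same as in the first sketch, and a real context would
normally also set VIRTGPU_CONTEXT_PARAM_CAPSET_ID (omitted for brevity):

static int init_ctx_with_poll_ring(int drm_fd, uint32_t ring_idx)
{
	struct drm_virtgpu_context_set_param params[] = {
		{ .param = VIRTGPU_CONTEXT_PARAM_NUM_RINGS,       .value = ring_idx + 1 },
		{ .param = VIRTGPU_CONTEXT_PARAM_POLL_RINGS_MASK, .value = 1ULL << ring_idx },
	};
	struct drm_virtgpu_context_init init = {
		.num_params = 2,
		.ctx_set_params = (uintptr_t)params,
	};

	return drmIoctl(drm_fd, DRM_IOCTL_VIRTGPU_CONTEXT_INIT, &init);
}

static int submit_on_poll_ring(int drm_fd, void *cmd_buf, uint32_t cmd_size,
			       uint32_t ring_idx)
{
	struct drm_virtgpu_execbuffer exbuf = {
		.flags = VIRTGPU_EXECBUF_RING_IDX,  /* required for a DRM fence event */
		.size = cmd_size,
		.command = (uintptr_t)cmd_buf,
		.ring_idx = ring_idx,
		.fence_fd = -1,
	};

	/* Completion arrives as a VIRTGPU_EVENT_FENCE_SIGNALED drm_event,
	 * read() from drm_fd once poll() reports it readable. */
	return drmIoctl(drm_fd, DRM_IOCTL_VIRTGPU_EXECBUFFER, &exbuf);
}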
Signed-off-by: Gurchetan Singh <gurchetansingh at chromium.org>
---
v2: Fix indent (Dmitry)
v3: Refactor drm fence event checks to avoid possible NULL deref (Dmitry)
v4: More detailed commit message about the additional drm fence event checks (Dmitry)
drivers/gpu/drm/virtio/virtgpu_submit.c | 28 +++++++++++++------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/virtio/virtgpu_submit.c b/drivers/gpu/drm/virtio/virtgpu_submit.c
index cf3c04b16a7a..004364cf86d7 100644
--- a/drivers/gpu/drm/virtio/virtgpu_submit.c
+++ b/drivers/gpu/drm/virtio/virtgpu_submit.c
@@ -64,13 +64,9 @@ static int virtio_gpu_fence_event_create(struct drm_device *dev,
 					 struct virtio_gpu_fence *fence,
 					 u32 ring_idx)
 {
-	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
 	struct virtio_gpu_fence_event *e = NULL;
 	int ret;
 
-	if (!(vfpriv->ring_idx_mask & BIT_ULL(ring_idx)))
-		return 0;
-
 	e = kzalloc(sizeof(*e), GFP_KERNEL);
 	if (!e)
 		return -ENOMEM;
@@ -161,21 +157,27 @@ static int virtio_gpu_init_submit(struct virtio_gpu_submit *submit,
 				  struct drm_file *file,
 				  u64 fence_ctx, u32 ring_idx)
 {
+	int err;
+	struct virtio_gpu_fence *out_fence;
 	struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
 	struct virtio_gpu_device *vgdev = dev->dev_private;
-	struct virtio_gpu_fence *out_fence;
-	int err;
+	bool drm_fence_event = (exbuf->flags & VIRTGPU_EXECBUF_RING_IDX) &&
+			       (vfpriv->ring_idx_mask & BIT_ULL(ring_idx));
 
 	memset(submit, 0, sizeof(*submit));
 
-	out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
-	if (!out_fence)
-		return -ENOMEM;
+	if ((exbuf->flags & VIRTGPU_EXECBUF_FENCE_FD_OUT) || drm_fence_event ||
+	    exbuf->num_bo_handles)
+		out_fence = virtio_gpu_fence_alloc(vgdev, fence_ctx, ring_idx);
+	else
+		out_fence = NULL;
 
-	err = virtio_gpu_fence_event_create(dev, file, out_fence, ring_idx);
-	if (err) {
-		dma_fence_put(&out_fence->f);
-		return err;
+	if (drm_fence_event) {
+		err = virtio_gpu_fence_event_create(dev, file, out_fence, ring_idx);
+		if (err) {
+			dma_fence_put(&out_fence->f);
+			return err;
+		}
 	}
 
 	submit->out_fence = out_fence;
--
2.41.0.255.g8b1d071c50-goog