On Tue, Mar 10, 2020 at 12:43 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> On Mon, Mar 09, 2020 at 06:08:10PM -0700, Gurchetan Singh wrote:
> > We don't want fences from different 3D contexts/processes (GL, VK) to
> > be on the same timeline. Sending this out as an RFC to solicit feedback
> > on the general approach.
>
> NACK.
>
> virtio fences are global, period. You can't simply change fence
> semantics like this. At least not without a virtio protocol update,
> because guest and host need to be on the same page here.

I should've been more clear -- this series is internal
cleanup/preparation, and the per-context changes are invisible to host
userspace.
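To make "per-context" concrete: the idea is that the guest driver
allocates a separate dma_fence timeline per 3D context (i.e. per open
of the DRM device), while the fence_id sent to the host stays one
global, monotonic sequence. A minimal sketch of that idea -- the type
and helper names below are made up for illustration, this is not the
actual patches:

/* Sketch only: one dma_fence timeline per 3D context, so fences
 * from different guest processes never land on the same timeline.
 * virtio_gpu_fence_ctx and its helper are hypothetical names. */
#include <linux/dma-fence.h>
#include <linux/slab.h>

struct virtio_gpu_fence_ctx {
	u64 context;	/* dma_fence timeline, one per 3D context */
	u64 seqno;	/* position on that timeline */
};

static struct virtio_gpu_fence_ctx *virtio_gpu_fence_ctx_create(void)
{
	struct virtio_gpu_fence_ctx *ctx;

	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	if (!ctx)
		return NULL;
	/* Fresh timeline for this context/process. */
	ctx->context = dma_fence_context_alloc(1);
	return ctx;
}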
> Just look at virgl_renderer_callbacks->write_fences() and how it
> doesn't consider contexts at all.
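Right -- for reference, the callback in virglrenderer.h today carries a
bare fence id and nothing identifying the context (abridged):

struct virgl_renderer_callbacks {
	int version;
	void (*write_fence)(void *cookie, uint32_t fence);
	/* ... GL context hooks, etc. ... */
};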
>
> So one way out would be to add a virtio feature flag for this, so guest
> & host can negotiate whether fences are global or per-context.

Yeah, we'll need something like VIRTIO_GPU_FLAG_FENCE_V2 eventually,
which means virgl_write_fence can no longer assume fence_id is the
global sequence number. It's a bit tricky, and we have to consider a
few cases:

1) Current kernel / current host userspace. Everything works.

2) Newer kernel (with this series) on current host userspace. For that,
fence_id needs to remain a monotonically increasing value, which it
does. I ran the standard test apps (Unigine Valley, dEQP, etc.) with
this change and things seem fine.

3) Current kernel on newer host userspace. The new host won't see
VIRTIO_GPU_FLAG_FENCE_V2, so everything should work as usual.

4) Newer kernel on newer host userspace. virgl_write_fence signals
fences only in a specific context (or possibly only one fence at a
time). Here the guest kernel processing fences based on {fence_id,
fence_context} makes a difference in a multi-process environment.

If I have things right (and again, it's a bit tricky), the virtio
protocol update will be required for (4). It would be nice to get the
refactorings in first, to avoid mega-changes, if we agree on the
general approach.

Side note:

Fences do have an associated context ID in virglrenderer [1], though we
don't pass down the correct ctx ID just yet [2].

[1] https://gitlab.freedesktop.org/virgl/virglrenderer/-/blob/master/src/virglrenderer.h#L204
[2] https://github.com/qemu/qemu/blob/master/hw/display/virtio-gpu-3d.c#L490
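To make [1] and [2] concrete: virglrenderer already accepts a ctx_id
when creating a fence, but QEMU currently fills that slot with the
command type (both snippets abridged):

/* [1] virglrenderer.h -- fence creation already takes a ctx_id: */
VIRGL_EXPORT int virgl_renderer_create_fence(int client_fence_id,
                                             uint32_t ctx_id);

/* [2] hw/display/virtio-gpu-3d.c -- the ctx_id argument gets the
 * command type, not the submitting context: */
virgl_renderer_create_fence(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);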
> Another approach would be to go multiqueue, with each virtqueue being
> roughly the same as a rendering pipeline on physical hardware, then have
> per-virtqueue fences.

Multi-queue sounds very interesting indeed, especially with VK
multi-threaded command submission. That to me is V3 rather than V2 --
let's start easy!
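For context, the guest driver currently sets up exactly two virtqueues,
roughly this fragment from virtio_gpu_init() in
drivers/gpu/drm/virtio/virtgpu_kms.c. A multiqueue design would grow
these arrays (one queue per "rendering pipeline") and would need new,
yet-to-be-defined protocol to negotiate the queue count:

static vq_callback_t *callbacks[] = {
	virtio_gpu_ctrl_ack, virtio_gpu_cursor_ack
};
static const char * const names[] = { "control", "cursor" };
struct virtqueue *vqs[2];

ret = virtio_find_vqs(vgdev->vdev, 2, vqs, callbacks, names, NULL);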
> When going multiqueue we might also rethink the cursor queue approach.
> I think it makes sense to simply allow sending cursor commands to any
> queue then. A separate cursor queue continues to be an option for the
> guest then, but I'm not sure how useful that actually is in practice
> given that cursor image updates are regular resource updates and have to
> go through the control queue, so virtio_gpu_cursor_plane_update() has to
> wait for the resource update to finish before it can queue the cursor
> command. I suspect the idea to fast-track cursor updates with a
> separate queue doesn't work that well in practice because of that.
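Yeah, the control queue dependency is visible right in the plane update
path. Simplified from drivers/gpu/drm/virtio/virtgpu_plane.c (error
handling and setup dropped):

static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
					   struct drm_plane_state *old_state)
{
	/* ... */
	if (plane->state->fb != old_state->fb) {
		/* New cursor image: the pixel data is a regular resource
		 * update and goes out on the *control* queue... */
		virtio_gpu_cmd_transfer_to_host_2d(vgdev, 0,
						   plane->state->crtc_w,
						   plane->state->crtc_h,
						   0, 0, objs, vgfb->fence);
		/* ...so we have to wait for it to complete before the
		 * cursor command can be queued. */
		dma_fence_wait(&vgfb->fence->f, true);
	}
	/* ... then send VIRTIO_GPU_CMD_UPDATE_CURSOR on the cursor queue. */
}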
> cheers,
>   Gerd