<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Feb 28, 2020 at 12:27 PM Chia-I Wu <<a href="mailto:olvaffe@gmail.com">olvaffe@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Fri, Feb 28, 2020 at 11:23 AM Frank Yang <<a href="mailto:lfy@google.com" target="_blank">lfy@google.com</a>> wrote:<br>
><br>
><br>
><br>
> On Fri, Feb 28, 2020 at 11:07 AM Chia-I Wu <<a href="mailto:olvaffe@gmail.com" target="_blank">olvaffe@gmail.com</a>> wrote:<br>
>><br>
>> On Thu, Feb 27, 2020 at 5:37 PM Dave Airlie <<a href="mailto:airlied@gmail.com" target="_blank">airlied@gmail.com</a>> wrote:<br>
>> ><br>
>> > On Fri, 28 Feb 2020 at 08:07, Chia-I Wu <<a href="mailto:olvaffe@gmail.com" target="_blank">olvaffe@gmail.com</a>> wrote:<br>
>> > ><br>
>> > > On Thu, Feb 27, 2020 at 11:45 AM Dave Airlie <<a href="mailto:airlied@gmail.com" target="_blank">airlied@gmail.com</a>> wrote:<br>
>> > > ><br>
>> > > > Realised you might not be reading the list, or I asked too hard a question :-P<br>
>> > > Sorry that I missed this.<br>
>> > > ><br>
>> > > > On Tue, 25 Feb 2020 at 12:59, Dave Airlie <<a href="mailto:airlied@gmail.com" target="_blank">airlied@gmail.com</a>> wrote:<br>
>> > > > ><br>
>> > > > > Okay, I think I'm following along with the multiprocess model and the object<br>
>> > > > > id stuff, and I'm mostly coming around to the ideas presented.<br>
>> > > > ><br>
>> > > > > One question I have is how do we envisage the userspace vulkan driver<br>
>> > > > > using things.<br>
>> > > > ><br>
>> > > > > I kinda feel I'm missing the difference between APIs that access<br>
>> > > > > things on the CPU side and commands for accessing things on the GPU<br>
>> > > > > side in the proposal. In the gallium world the "screen" allocates<br>
>> > > > > resources (memory + properties) synchronously on the API being<br>
>> > > > > accessed, the context is then for operating on GPU side things where<br>
>> > > > > we batch up a command stream and it is processed async.<br>
>> > > > ><br>
>> > > > > From the Vulkan API POV the application API is multi-thread safe, and<br>
>> > > > > we should avoid if we can taking too many locks under the covers, esp<br>
>> > > > > in common paths. Vulkan applications are also encouraged to allocate<br>
>> > > > > memory in large chunks and subdivide them between resources.<br>
>> > > > ><br>
>> > > > > I'm concerned that we are thinking of batching allocations in the<br>
>> > > > > userspace driver (or in the kernel) and how to flush those to the host<br>
>> > > > > side etc. If we have two threads in userspace allocate memory from the<br>
>> > > > > vulkan API, and one then does a transfer into the memory, how do we<br>
>> > > > > envisage that being flushed to the host side? Like if I allocate<br>
>> > > > > memory in one thread, then create images from that memory in another,<br>
>> > > > > how does that work out?<br>
>> > > > ><br>
>> > ><br>
>> > > The goal of encoding vkAllocateMemory in the execbuffer command stream<br>
>> > > is not for batching. It is to reuse the mechanism to send<br>
>> > > API-specific opaque alloc command to the host, and to allow<br>
>> > > allocations without resources (e.g., non-shareable allocations from a<br>
>> > > non-mappable heap do not need resources).<br>
>> > ><br>
>> > > In the current (but outdated) code[1], there is a per-VkInstance<br>
>> > > execbuffer command stream struct (struct vn_cs). Encoding to the<br>
>> > > vn_cs requires a per-instance lock to be taken. There is also a<br>
>> > > per-VkCommandBuffer vn_cs. Encoding to that vn_cs requires no<br>
>> > > locking. Multithreading is only beneficial when the app uses that<br>
>> > > to build its VkCommandBuffers.<br>
>> ><br>
>> > I'm gonna stop you there :-P, multithreaded vulkan apps are the normal<br>
>> > use case, not a special case. We do not design any vulkan things for<br>
>> > GL application ideas, Vulkan is different, multi-threaded command<br>
>> > buffer building is basic vulkan.<br>
>> That is what the current code looks like. It is very naive, and my<br>
>> focus was also on the vk.xml parser. I don't know if anyone has ever<br>
>> looked into the locking design (or command submission or sync<br>
>> primitives) more seriously. This can be a good chance to work out a<br>
>> design.<br>
>><br>
>><br>
>> ><br>
>> > Having a per-instance lock is bad if it's being taken across multiple<br>
>> > threads in normal use cases.<br>
>> ><br>
>> > Though it's quite likely due to VM design we have to take a lock at<br>
>> > some point on those paths, it would be good to be explicit in the<br>
>> > design about the impact of every lock. We will likely need locks in<br>
>> > the kernel submission paths anyways.<br>
>><br>
>> The current design essentially looks at the first parameter (the<br>
>> dispatchable object) of a function, and if it is not externally synced<br>
>> and the function needs to be executed by the host, a cs lock is<br>
>> grabbed to encode the function. We can add cs to more dispatchable<br>
>> objects. But I think we are looking for ways to handle (or batch)<br>
>> functions locally to minimize locking.<br>
>><br>
>> One idea is that, say given this sequence<br>
>><br>
>> {vkCreateImage, vkBindImageMemory, vkCmdCopyImage }<br>
>><br>
><br>
> Android Emulator Vulkan does something similar to this in certain cases, like translating guest vkCreateImage requests to APIs that extract requirements along with the image:<br>
><br>
> <a href="https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emugl/host/libs/libOpenglRender/vulkan-registry/xml/vk.xml#6351" rel="noreferrer" target="_blank">https://android.googlesource.com/platform/external/qemu/+/refs/heads/emu-master-dev/android/android-emugl/host/libs/libOpenglRender/vulkan-registry/xml/vk.xml#6351</a><br>
><br>
> However, this opens up the possibility of a lot of grungy manual work. The solution that I'm going for long term is to automatically optimize the command protocol itself via something similar to PGO.<br>
Hm, I think it is fine to manually code functions outside of vkCmd*.<br>
They vary too much to be generated.<br>
<br>
The proposed idea makes the driver more like a real driver. When an<br>
object is created, the driver builds the HW descriptor (the<br>
serialized function call) and embeds it in the object in system RAM.<br>
Only when the HW needs the descriptor does the driver emit it to the<br>
HW. That is the view from the guest driver's side. To the host, the<br>
guest driver sends reordered Vulkan calls.<br>
<br>
The question becomes how the driver minimizes emits (locking,<br>
copying descriptors into the CS, and flushing) while making sure the<br>
reordering is legit. I guess that is what Dave wanted to know from<br>
the questions he asked, to which I do not have an answer. There are<br>
also some cases, such as vkMapMemory or vkWaitForFences, that we<br>
must or want to handle in the guest.<br>
<br></blockquote><div><br></div><div>The information I'm thinking of would cover reorderings since it's information for codegenning valid + optimal protocol for all API calls; it would need to know when parameters and struct fields are created versus used versus destroyed, and when they could be destroyed early. It would also specify/abstract what should be handled on what side.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
><br>
>><br>
>> Instead of grabbing the per-instance (or per-device) lock twice<br>
>> to encode the first two functions separately, we can encode the<br>
>> first two functions lock-free into per-image storage first, and copy<br>
>> the contents into the cs at the last minute. vkCmdCopyImage is only<br>
>> shown as an example. We need to make sure the host sees the first<br>
>> two functions before it sees vkCmdCopyImage. That does not mean<br>
>> vkCmdCopyImage itself triggers the copying and flushing.<br>
>><br>
>> There are also cases where things can be handled inside the guest.<br>
>> For example, when a VkDeviceMemory is backed by guest shmem,<br>
>> vkMapMemory can be guest-only.<br>
>><br>
>><br>
>> ><br>
>> > > But vkAllocateMemory can be changed to use a local vn_cs or a local<br>
>> > > template to be lock-free. It would look like:<br>
>> > ><br>
>> > > mem->object_id = next_object_id();<br>
>> > ><br>
>> > > // fill in a pre-encoded vkAllocateMemory command template<br>
>> > > local_cmd_templ[ALLOCATION_SIZE] = info->allocationSize;<br>
>> > > local_cmd_templ[MEMORY_TYPE_INDEX] = info->memoryTypeIndex;<br>
>> > > local_cmd_templ[OBJECT_ID] = mem->object_id;<br>
>> > ><br>
>> > > // when a resource is needed; otherwise, use EXECBUFFER instead<br>
>> > > struct drm_virtgpu_resource_create_blob args = {<br>
>> > > .size = info->allocationSize,<br>
>> > > .flags = VIRTGPU_RESOURCE_FLAG_STORAGE_HOSTMEM,<br>
>> > > .cmd_size = sizeof(local_cmd_templ),<br>
>> > > .cmd = local_cmd_templ,<br>
>> > > .object_id = mem->object_id<br>
>> > > };<br>
>> > > drmIoctl(fd, DRM_IOCTL_VIRTIO_GPU_RESOURCE_CREATE_BLOB, &args);<br>
>> > ><br>
>> > > mem->resource_id = args.res_handle;<br>
>> > > mem->bo = args.bo_handle;<br>
>> > ><br>
>> > > I think Gurchetan's proposal will look similar, except that the<br>
>> > > command stream will be replaced by something more flexible such that<br>
>> > > object id is optional.<br>
>> > ><br>
>> > > In the current design (v2), the host will<br>
>> > ><br>
>> > > - allocate a VkDeviceMemory from the app's VkInstance<br>
>> ><br>
>> > VkDeviceMemory is tied to a VkDevice object, not a VkInstance,<br>
>> > though this makes sense either way.<br>
>><br>
>> Yeah, it is tied to VkDevice. I had one-instance-per-process model in<br>
>> mind and wanted to show the export/import part.<br>
>><br>
>> ><br>
>> > Okay I'm not entirely comfortable with this design yet, I probably<br>
>> > need to look at the code that's been done so far to get a better<br>
>> > feeling for it.<br>
>> Concern over resource allocation or the userspace driver? I hope it<br>
>> is mostly the latter...<br>
>><br>
>> ><br>
>> > With the instance_vn_cs, who flushes those to the host, how is that decided?<br>
>> The guest encodes functions in the order they are called (excluding<br>
>> vkCmd*). Flushes happen in vkGet*, vk*Wait*, vkAllocateMemory,<br>
>> vkQueueSubmit, vkEndCommandBuffer, and maybe some more. I don't<br>
>> think the exact flush points are very meaningful though.<br>
>><br>
>><br>
>> ><br>
>> > Dave.<br>
>> _______________________________________________<br>
>> virglrenderer-devel mailing list<br>
>> <a href="mailto:virglrenderer-devel@lists.freedesktop.org" target="_blank">virglrenderer-devel@lists.freedesktop.org</a><br>
>> <a href="https://lists.freedesktop.org/mailman/listinfo/virglrenderer-devel" rel="noreferrer" target="_blank">https://lists.freedesktop.org/mailman/listinfo/virglrenderer-devel</a><br>
><br>
</blockquote></div></div>