[PATCH 0/2] drm/virtio: introduce the HOST_PAGE_SIZE feature

Sergio Lopez Pascual slp at redhat.com
Thu Aug 22 15:29:16 UTC 2024


Gurchetan Singh <gurchetansingh at chromium.org> writes:

> On Thu, Aug 8, 2024 at 3:38 AM Sergio Lopez Pascual <slp at redhat.com> wrote:
>
>> Gurchetan Singh <gurchetansingh at chromium.org> writes:
>>
>> > On Tue, Aug 6, 2024 at 1:15 PM Rob Clark <robdclark at gmail.com> wrote:
>> >
>> >> On Tue, Aug 6, 2024 at 9:15 AM Gurchetan Singh
>> >> <gurchetansingh at chromium.org> wrote:
>> >> >
>> >> >
>> >> >
>> >> > On Mon, Aug 5, 2024 at 2:14 AM Sergio Lopez Pascual <slp at redhat.com>
>> >> > wrote:
>> >> >>
>> >> >> Dmitry Osipenko <dmitry.osipenko at collabora.com> writes:
>> >> >>
>> >> >> > On 7/23/24 14:49, Sergio Lopez wrote:
>> >> >> >> There's an increasing number of machines supporting multiple
>> >> >> >> page sizes, and on these machines the host and a guest can each
>> >> >> >> be running with a different page size.
>> >> >> >>
>> >> >> >> As far as virtio-gpu is concerned, this is not a problem if the
>> >> >> >> page size of the guest happens to be bigger than or equal to the
>> >> >> >> host's, but it will potentially lead to failures in memory
>> >> >> >> allocations and/or mappings otherwise.
>> >> >> >
>> >> >> > Please describe the concrete problem you're trying to solve. Guest
>> >> >> > memory allocation consists of guest pages; I don't see how
>> >> >> > knowledge of the host page size helps anything in userspace.
>> >> >> >
>> >> >> > I suspect you want this for host blobs, but then it should be
>> >> >> > virtio_gpu_vram_create() that should use max(host_page_sz,
>> >> >> > guest_page_size), AFAICT. It's the kernel that is responsible for
>> >> >> > memory management; userspace can't be trusted to do that.
>> >> >>
>> >> >> Mesa's Vulkan/Venus uses CREATE_BLOB to request that the host
>> >> >> create, and map into the guest, device-backed memory and shmem
>> >> >> regions. The CREATE_BLOB ioctl doesn't update
>> >> >> drm_virtgpu_resource_create->size, so the guest kernel (and, as a
>> >> >> consequence, the host kernel) can't override the user's request.
>> >> >>
>> >> >> I'd like Mesa's Vulkan/Venus in the guest to be able to obtain the
>> >> >> host page size to align the size of the CREATE_BLOB requests as
>> >> >> required.
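
(To illustrate, here's a rough userspace-side sketch of what Venus would do
with that information; this is not the actual Venus code, and
"host_page_size" is assumed to have been obtained from the host by whatever
mechanism we settle on:)

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include "virtgpu_drm.h"   /* kernel uapi header */

  /* Round the requested blob size up to the host page size before
   * issuing CREATE_BLOB. */
  static int create_host_blob(int drm_fd, uint64_t blob_id, uint64_t size,
                              uint64_t host_page_size)
  {
     struct drm_virtgpu_resource_create_blob args = {
        .blob_mem   = VIRTGPU_BLOB_MEM_HOST3D,
        .blob_flags = VIRTGPU_BLOB_FLAG_USE_MAPPABLE,
        .blob_id    = blob_id,
        /* align to the host page size (assumed here to be >= the guest's) */
        .size       = (size + host_page_size - 1) & ~(host_page_size - 1),
     };

     /* On success the kernel fills args.bo_handle / args.res_handle. */
     return ioctl(drm_fd, DRM_IOCTL_VIRTGPU_RESOURCE_CREATE_BLOB, &args);
  }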
>> >> >
>> >> >
>> >> > gfxstream solves this problem by putting the relevant information in
>> >> > the capabilities obtained from the host:
>> >> >
>> >> > https://android.googlesource.com/platform/hardware/google/gfxstream/+/refs/heads/main/host/virtio-gpu-gfxstream-renderer.cpp#1691
>> >> >
>> >> > If you want to be paranoid, you can also validate that
>> >> > ResourceCreateBlob::size is properly host-page aligned when that
>> >> > request reaches the host.
>> >> >
>> >> > So you can probably solve this problem using current interfaces.
>> >> > Whether it's cleaner for all context types to use the capabilities,
>> >> > or to have all VMMs expose VIRTIO_GPU_F_HOST_PAGE_SIZE, would be the
>> >> > cost/benefit tradeoff.
>> >> >
>> >>
>> >> I guess solving it in a context-type-specific way is possible. But I
>> >> think it is a relatively universal constraint. And maybe it makes
>> >> sense for the virtgpu guest kernel to enforce alignment (at least it
>> >> can return an error synchronously) in addition to the host.
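
(Purely illustrative, with made-up names: something as simple as the
following in the guest kernel's blob-create path would already give
userspace a synchronous error.)

  /* Hypothetical guest-side check; "host_page_size" would come from the
   * negotiated feature/config, and these names are not the actual
   * virtgpu ioctl code. */
  if (vgdev->host_page_size > PAGE_SIZE &&
      !IS_ALIGNED(rc_blob->size, vgdev->host_page_size))
          return -EINVAL;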
>> >>
>> >
>> > virtio-media may have support for VIRTIO_MEDIA_CMD_MMAP too, so could run
>> > into this issue.
>> >
>> >
>> > https://github.com/chromeos/virtio-media?tab=readme-ov-file#shared-memory-regions
>> >
>> > virtio-fs also has the DAX window which uses the same memory mapping
>> > mechanism.
>> >
>> > https://virtio-fs.gitlab.io/design.html
>> >
>> > Maybe this should not be a virtio-gpu thing, but a virtio thing?
>>
>> This is true, but finding a common place to put the page size is really
>> hard in practice. I don't think we can borrow space in the feature bits
>> for that (and that would probably be abusing their purpose quite a bit),
>> and extending the transport configuration registers is quite cumbersome
>> and, in general, undesirable.
>>
>> That leaves us with the device-specific config space, and that implies a
>> device-specific feature bit as it's implemented in this series.
>>
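
(Concretely, the shape of the device-specific approach in this series is
roughly the following; the field name and bit number here are placeholders,
please refer to the actual patches for the authoritative version:)

  #define VIRTIO_GPU_F_HOST_PAGE_SIZE  5   /* hypothetical feature bit */

  struct virtio_gpu_config {
          __le32 events_read;
          __le32 events_clear;
          __le32 num_scanouts;
          __le32 num_capsets;
          __le32 host_page_size;   /* new field; only valid when the
                                    * feature bit above is negotiated */
  };
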
>> The Shared Memory Regions section of the VIRTIO spec, while it doesn't
>> talk specifically about page size, also hints that this is the right
>> direction:
>>
>
> Can't we just modify the Shared Memory region PCI capability to include
> page size?  We can either:
>
> 1) keep the same size struct + header (VIRTIO_PCI_CAP_SHARED_MEMORY_CFG),
> and just hijack one of the padding fields. If the padding field is zero, we
> can just say it's 4096.

Yes, we can turn that padding into an "__le16 page_size_order" field
storing "PAGE_SIZE >> 12". That should give us enough future-proofing.
There's also some space in the "MMIO Device Register Layout" to store it
as a 16-bit or 32-bit value.
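
For reference, against the struct virtio_pci_cap layout from the spec
(which the shared memory capability embeds), that would look more or less
like this; the field would only be meaningful when cfg_type is
VIRTIO_PCI_CAP_SHARED_MEMORY_CFG, and the exact encoding/wording is
something the spec change would have to settle:

  struct virtio_pci_cap {
          u8 cap_vndr;            /* Generic PCI field: PCI_CAP_ID_VNDR */
          u8 cap_next;            /* Generic PCI field: next ptr. */
          u8 cap_len;             /* Generic PCI field: capability length */
          u8 cfg_type;            /* Identifies the structure. */
          u8 bar;                 /* Where to find it. */
          u8 id;                  /* Multiple capabilities of the same type */
          le16 page_size_order;   /* was: u8 padding[2]; 0 means 4096,
                                   * otherwise the encoding proposed above */
          le32 offset;            /* Offset within bar. */
          le32 length;            /* Length of the structure, in bytes. */
  };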

This would require proposing it as a change to the VIRTIO specs. Do you
want to do it yourself or should I take the initiative?

Thanks,
Sergio.


