[PATCH v2 0/7] Add virtio gpu userptr support
Huang, Honglei1
Honglei1.Huang at amd.com
Wed Apr 2 01:47:40 UTC 2025
On 2025/3/30 3:56, Demi Marie Obenour wrote:
> On 3/21/25 4:00 AM, Honglei Huang wrote:
>> From: Honglei Huang <Honglei1.Huang at amd.com>
>>
>> Hello,
>>
>> This series adds virtio gpu userptr support and a libhsakmt capset.
>> The userptr feature lets the host access guest user-space memory; it
>> targets the GPU compute use case, enabling the ROCm/OpenCL native
>> context. It should be pointed out that we are not implementing SVM
>> here; this is just a buffer-based userptr implementation.
>> The libhsakmt capset is used for the ROCm context; libhsakmt plays a
>> role for ROCm similar to the one libdrm plays for Mesa.
>>
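To make the buffer-based approach concrete, here is a minimal sketch of
the pinning path (illustrative names, not the actual patch code): the
user range is pinned up front and stays pinned for the lifetime of the
blob resource, which is also why no MMU notifier is needed.

#include <linux/mm.h>
#include <linux/slab.h>

/* Sketch only: pin a page-aligned user range so the host can access it. */
static int userptr_pin_range(unsigned long addr, unsigned long size,
			     struct page ***pages_out, int *npages_out)
{
	int npages = size >> PAGE_SHIFT;
	struct page **pages;
	int pinned;

	pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	/* FOLL_LONGTERM: the pages stay pinned (and unmovable) until the
	 * resource is destroyed. */
	pinned = pin_user_pages_fast(addr, npages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned != npages) {
		if (pinned > 0)
			unpin_user_pages(pages, pinned);
		kvfree(pages);
		return pinned < 0 ? pinned : -EFAULT;
	}

	*pages_out = pages;
	*npages_out = npages;
	return 0;
}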
>> Patches 1-2 add libhsakmt capset and userptr blob resource flag.
>
> libhsakmt and userptr are orthogonal to each other.
> Should the libhsakmt context be a separate patch series?
I will split the libhsakmt capset patch into a separate patch series.
>
>> Patches 3-5 implement the basic userptr feature; in some popular
>> benchmarks it reaches about 70% of bare-metal efficiency through the
>> OpenCL API.
>> Patch 6 adds an interval tree to manage userptrs and prevent duplicate
>> creation.
>>
>> V2: - Split adding the HSAKMT context and the blob userptr resource
>> into two patches.
>> - Remove the MMU notifier related patches, because using non-movable
>> user-space memory with an MMU notifier is not a good idea.
>> - Remove the HSAKMT context check at context creation, so that all
>> contexts support the userptr feature.
>> - Remove MMU notifier related content from the cover letter.
>> - Add more comments on patch 6 in the cover letter.
>
> I have not looked at the implementation, but thanks for removing the MMU
> notifier support. Should the interval tree be added before the feature
> is exposed to userspace? That would prevent users who are doing kernel
> bisects from temporarily exposing a buggy feature to userspace.
OK, I will add the interval tree patch before introducing the feature to
userspace in the next version. Thanks a lot for the review.
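For reference, here is roughly how the dedup could look with the kernel
interval tree (a minimal sketch with illustrative names, not the actual
patch 6 code): look up an overlap first and only create a new userptr
when none exists.

#include <linux/interval_tree.h>
#include <linux/kernel.h>

struct userptr_range {
	struct interval_tree_node it;	/* it.start/it.last are byte addresses */
	/* backing pages, refcount, etc. */
};

/* Return an existing range overlapping [start, last], or NULL. */
static struct userptr_range *
userptr_lookup(struct rb_root_cached *root, unsigned long start,
	       unsigned long last)
{
	struct interval_tree_node *node;

	node = interval_tree_iter_first(root, start, last);
	return node ? container_of(node, struct userptr_range, it) : NULL;
}

static void userptr_insert(struct rb_root_cached *root,
			   struct userptr_range *range,
			   unsigned long start, unsigned long last)
{
	range->it.start = start;
	range->it.last = last;	/* inclusive end address */
	interval_tree_insert(&range->it, root);
}

On creation, a hit in userptr_lookup() would take a reference on the
existing range instead of pinning the same pages a second time.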