[Intel-xe] [RFC PATCH 0/2] Implement vma madvise ioctl
Nirmoy Das
nirmoy.das at linux.intel.com
Tue May 30 15:42:40 UTC 2023
On 5/30/2023 5:37 PM, Thomas Hellström wrote:
> Hi, Nirmoy
>
> On 5/30/23 17:18, Nirmoy Das wrote:
>> On 5/30/2023 3:19 PM, Thomas Hellström wrote:
>>
>>>
>>> On 5/24/23 22:12, Nirmoy Das wrote:
>>>>
>>>> On 5/24/2023 8:30 PM, Matthew Brost wrote:
>>>>> On Wed, May 24, 2023 at 02:36:46PM +0200, Nirmoy Das wrote:
>>>>>> Sending this initial RFC patch series for a vma madvise ioctl
>>>>>> to gather feedback on whether this is the correct way to do it.
>>>>>>
>>>>>> I am adding two options for userspace to pass:
>>>>>>
>>>>>> DRM_XE_VMA_MADVISE_WILLNEED:
>>>>>> * Set ttm priority to normal/high (if the cap permits).
>>>>>> * Make sure VMAs are in an allowed placement and bound.
>>>>>>
>>>>>> DRM_XE_VMA_MADVISE_DONTNEED:
>>>>>> * Set ttm priority to low so the BO belonging to the vma
>>>>>> becomes an early target for eviction.
>>>>>> * Make sure VMAs are not bound.
>>>>>>
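For illustration, the new ioctl's argument could look roughly like the sketch below. The struct layout, field names and values are assumptions made for this discussion only, not the contents of the actual patches; only the two option names come from the cover letter above.

/* Illustrative sketch only -- layout and values are assumptions, not the RFC's uapi. */
#define DRM_XE_VMA_MADVISE_WILLNEED 0 /* raise priority, ensure VMAs are placed and bound */
#define DRM_XE_VMA_MADVISE_DONTNEED 1 /* lower priority, unbind so the BO is evicted early */

struct drm_xe_vma_madvise {
        /** @vm_id: VM containing the address range */
        __u32 vm_id;
        /** @advice: one of DRM_XE_VMA_MADVISE_* */
        __u32 advice;
        /** @addr: start address of the range */
        __u64 addr;
        /** @range: size of the range in bytes */
        __u64 range;
};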
>>>>>> Questions:
>>>>>> Should this be part of DRM_IOCTL_XE_VM_MADVISE rather than
>>>>>> creating a new ioctl?
>>>>>>
>>>>> Def not a new IOCTL. Let's take a step back: what are you trying to
>>>>> implement that the current DRM_IOCTL_XE_VM_MADVISE IOCTL / VM bind
>>>>> IOCTL
>>>>> does not support?
>>>>
>>>> AFAIU at this moment:
>>>>
>>>> MADVISE_WILLNEED == XE_VM_BIND_OP_PREFETCH +
>>>> DRM_XE_VM_MADVISE_PRIORITY
>>>> MADVISE_DONTNEED == XE_VM_BIND_OP_UNMAP + DRM_XE_VM_MADVISE_PRIORITY
>>>>
>>>> So unless UMD needs an explicit madvise ioctl or new vm_madvise
>>>> ioctl options,
>>>> I think we can achieve the madvise equivalent with the above vm
>>>> bind and vm madvise ioctls.
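To make that mapping concrete, here is a minimal userspace-side sketch. The wrapper helpers and the priority placeholders are hypothetical and not real libdrm or uapi symbols; only XE_VM_BIND_OP_PREFETCH, XE_VM_BIND_OP_UNMAP and DRM_XE_VM_MADVISE_PRIORITY come from the discussion above.

#include <stdint.h>
#include <drm/xe_drm.h>         /* assumed RFC-era header providing XE_VM_BIND_OP_* */

/* Hypothetical wrappers: assumed to fill in and issue the existing vm_bind
 * and vm_madvise ioctls via drmIoctl(); they are not real libdrm API. */
void xe_vm_bind_op(int fd, uint32_t vm_id, uint64_t addr, uint64_t range, uint32_t op);
void xe_vm_madvise_priority(int fd, uint32_t vm_id, uint64_t addr, uint64_t range, uint32_t prio);

/* Placeholder priority values; the real ones would come from the
 * DRM_XE_VM_MADVISE_PRIORITY property in xe_drm.h. */
#define PRIO_LOW  0
#define PRIO_HIGH 2

static void madvise_willneed(int fd, uint32_t vm_id, uint64_t addr, uint64_t range)
{
        /* Prefetch: bring the range into an allowed placement and bind it */
        xe_vm_bind_op(fd, vm_id, addr, range, XE_VM_BIND_OP_PREFETCH);
        /* Raise eviction priority through the existing madvise ioctl */
        xe_vm_madvise_priority(fd, vm_id, addr, range, PRIO_HIGH);
}

static void madvise_dontneed(int fd, uint32_t vm_id, uint64_t addr, uint64_t range)
{
        /* Unmap so the range no longer keeps the BO bound */
        xe_vm_bind_op(fd, vm_id, addr, range, XE_VM_BIND_OP_UNMAP);
        /* Lower eviction priority so the BO becomes an early eviction target */
        xe_vm_madvise_priority(fd, vm_id, addr, range, PRIO_LOW);
}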
>>>>
>>>> Hi Thomas, Joonas,
>>>>
>>>> What do you think ?
>>>
>>> Hi, Nirmoy,
>>>
>>> the functionality we need for DONTNEED and WILLNEED, based on
>>> i915 and the move to vma-based madvise, would IMO be something
>>> along the lines of:
>>>
>>> DONTNEED
>>> 1) If userptr, unbind (or perhaps -EINVAL)
>>> 2) If bo, mark the vma as dontneed. If all other vmas of the bo are
>>> marked dontneed, mark the bo as dontneed and adjust its priority.
>>> 3) If a dontneed bo is marked for eviction, unbind its vmas, kill
>>> its storage and mark it as purged. Don't put it on the rebind list.
>>>
>>> WILLNEED
>>> 1) If userptr, bind (or perhaps -EINVAL)
>>> 2) If bo, remove the dontneed marker. If the bo was purged, notify
>>> user-space. (Need feedback from UMD whether they want an error
>>> message or just a fresh backing store). Adjust its priority, put the
>>> vma on the rebind list.
>>>
>>> So in particular, if no eviction / shrinking happens between
>>> DONTNEED and WILLNEED, they will essentially be NOOPs.
>>>
>>> It sounds like this is a bit different from what can be achieved
>>> with prefetch/unmap/priority.
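A rough kernel-side sketch of the flow outlined above follows. The struct fields (vma->dontneed, bo->dontneed, bo->purged), the xe_bo_* helpers and the priority/error values are placeholders invented for illustration; only the overall sequence follows the two lists.

/* Placeholder sketch -- these fields and most helpers do not exist in xe. */

static int vma_madvise_dontneed(struct xe_vma *vma)
{
        struct xe_bo *bo;

        if (xe_vma_is_userptr(vma))
                return -EINVAL;         /* or unbind, per point 1 above */

        /* 2) Mark the vma; deprioritize the bo only once every vma agrees */
        vma->dontneed = true;
        bo = xe_vma_bo(vma);
        if (xe_bo_all_vmas_dontneed(bo)) {
                bo->dontneed = true;
                xe_bo_set_evict_priority(bo, XE_BO_PRIORITY_LOW);
        }
        return 0;
}

/* 3) Called when eviction picks a dontneed bo: drop the bindings and the
 * backing storage, remember the purge, and skip the rebind list. */
static void bo_evict_dontneed(struct xe_bo *bo)
{
        xe_bo_unbind_all_vmas(bo);
        xe_bo_release_backing(bo);
        bo->purged = true;
}

static int vma_madvise_willneed(struct xe_vma *vma)
{
        struct xe_bo *bo;
        int ret = 0;

        if (xe_vma_is_userptr(vma))
                return -EINVAL;         /* or bind, per point 1 above */

        /* 2) Clear the marker, restore priority, schedule a rebind */
        vma->dontneed = false;
        bo = xe_vma_bo(vma);
        bo->dontneed = false;
        xe_bo_set_evict_priority(bo, XE_BO_PRIORITY_NORMAL);

        /* Purged backing store must be reported to user-space -- whether as an
         * error or just fresh storage is the open UMD question above. */
        if (bo->purged)
                ret = -ENODATA;         /* placeholder errno */

        xe_vm_add_to_rebind_list(vma);
        return ret;
}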
>>
>> Hi Thomas,
>>
>> Thanks for your detailed response. This is indeed not completely
>> doable currently.
>>
>> Should we have a separate ioctl for this or two new options in
>> DRM_IOCTL_XE_VM_MADVISE?
>
>
> IMHO we should reuse the existing VM_MADVISE ioctl if possible.
Yes, makes sense!
Thanks,
Nirmoy
>
> /Thomas
>
>
>
>>
>>>
>>> /Thomas
>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Nirmoy
>>>>
>>>>>
>>>>> Matt
>>>>>
>>>>>> Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
>>>>>> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
>>>>>> Cc: Matthew Brost <matthew.brost at intel.com>
>>>>>>
>>>>>> Nirmoy Das (2):
>>>>>> drm/xe: Expose vma bind-unbind functions
>>>>>> drm/xe: Implement madvise ioctl for vma
>>>>>>
>>>>>>   drivers/gpu/drm/xe/Makefile         |   1 +
>>>>>>   drivers/gpu/drm/xe/xe_device.c      |   2 +
>>>>>>   drivers/gpu/drm/xe/xe_vm.c          |  52 +++----
>>>>>>   drivers/gpu/drm/xe/xe_vm.h          |   3 +
>>>>>>   drivers/gpu/drm/xe/xe_vma_madvise.c | 223 ++++++++++++++++++++++++++++
>>>>>>   drivers/gpu/drm/xe/xe_vma_madvise.h |  15 ++
>>>>>>   include/uapi/drm/xe_drm.h           |  28 ++++
>>>>>>   7 files changed, 296 insertions(+), 28 deletions(-)
>>>>>> create mode 100644 drivers/gpu/drm/xe/xe_vma_madvise.c
>>>>>> create mode 100644 drivers/gpu/drm/xe/xe_vma_madvise.h
>>>>>>
>>>>>> --
>>>>>> 2.39.0
>>>>>>