[PATCH 1/3] drm/xe: use devm instead of drmm for managed bo
Daniele Ceraolo Spurio
daniele.ceraolospurio at intel.com
Mon Aug 12 18:43:39 UTC 2024
On 8/12/2024 11:17 AM, Matthew Auld wrote:
> On 12/08/2024 17:38, Daniele Ceraolo Spurio wrote:
>>
>>
>> On 8/12/2024 3:41 AM, Matthew Auld wrote:
>>> On 10/08/2024 00:12, Daniele Ceraolo Spurio wrote:
>>>> The BO cleanup touches the GGTT and therefore requires the HW to be
>>>> available, so we need to use devm instead of drmm.
>>>
>>> In the BO ggtt cleanup we have drm_dev_enter() to mark the critical
>>> sections that need HW interaction vs the bits that just touch SW
>>> state, but that only works once we have marked the device as
>>> unplugged. If something blows up during probe, the mmio stuff is
>>> already unmapped and set to NULL (mmio_fini or something, IIRC),
>>> but drm_dev_enter() still sees the device as attached when the
>>> later drmm action runs, and we blow up.
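>>>
>>> For reference, the pattern looks roughly like this (just a sketch;
>>> xe_ggtt_clear_node() and the exact field layout are illustrative,
>>> not the real xe code):
>>>
>>>     static void xe_bo_ggtt_cleanup(struct xe_bo *bo)
>>>     {
>>>             struct drm_device *drm = bo->ttm.base.dev;
>>>             int idx;
>>>
>>>             if (drm_dev_enter(drm, &idx)) {
>>>                     /* device still attached: safe to touch the HW */
>>>                     xe_ggtt_clear_node(bo);
>>>                     drm_dev_exit(idx);
>>>             }
>>>
>>>             /* SW-only bookkeeping is always safe */
>>>             drm_mm_remove_node(&bo->ggtt_node);
>>>     }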
>>>
>>> It might make sense to tweak the driver to call drm_dev_unplug() in
>>> the error unwind during the probe sequence; that way
>>> drm_dev_enter() will catch this (I think). If we error out during
>>> probe, the device can be considered unplugged at the end. Or
>>> perhaps we should make this change regardless of this patch?
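>>>
>>> i.e. something along these lines in the probe unwind (hand-wavy,
>>> the label and call site are made up for illustration):
>>>
>>>             err = xe_device_probe(xe);
>>>             if (err)
>>>                     goto err_unplug;
>>>             /* ... */
>>>     err_unplug:
>>>             /* fail drm_dev_enter() from now on */
>>>             drm_dev_unplug(&xe->drm);
>>>             return err;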
>>>
>>> My thinking with not converting xe_managed_* over to devm was that
>>> we anyway have to deal with userspace objects existing after the HW
>>> is removed, and there we might also have to consider ggtt, like with
>>> display surfaces. Also, the BO is largely just software state and can
>>> be tied to the life cycle of the driver state, but I guess here it is
>>> internal and closely tied to the operation of the HW.
>>>
>>>>
>>>> Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1160
>>>> Signed-off-by: Daniele Ceraolo Spurio
>>>> <daniele.ceraolospurio at intel.com>
>>>> Cc: Lucas De Marchi <lucas.demarchi at intel.com>
>>>> Cc: Matthew Auld <matthew.auld at intel.com>
>>>
>>> If calling unplug doesn't make sense, or is considered orthogonal
>>> and only makes sense for other drmm users:
>>
>> I'm not familiar enough with this code to know what the better
>> choice is here. I didn't even know drm_dev_enter() existed before you
>> mentioned it, but that explains why we only see this problem on probe
>> abort and not on driver remove: we only call drm_dev_unplug() in
>> the latter case. Weirdly, drm_dev_unplug() is called as part of
>> xe_device_remove_display(), which makes it look like part of the
>> display cleanup instead of the more general one.
>>
>> IMO, using drmm for HW-accessing functions and relying on correctly
>> marking the HW-touching blocks with drm_dev_enter/exit seems more
>> error-prone than just using devm, so switching seems safer. Is there
>> any advantage to sticking with drmm instead of switching to devm?
>
> It's just that this is technically the GEM object put path, which is
> generic, and you can get here without drmm or devm, so I don't think
> we can really avoid drm_dev_enter() for these types of cases, where
> the same path can be hit with the device unplugged. Maybe we can
> avoid it for the ggtt thing, but not in general.
>
> Just to be clear, the hotunplug scenario which motivated the drmm vs
> devm stuff basically ends up calling into your pci remove callback
> even though there could still be multiple open driver fds, GEM
> objects, etc. for that drm_device. So there the object or other
> resources are released only when the user chooses to close everything,
> which can be long after devm fires and any other stuff happens in our
> remove callback. That seems to be part of the idea behind
> drm_dev_enter(): you have some generic path which can be triggered
> also after the unplug and doesn't fit neatly into the drmm/devm model
> (which only makes sense for driver init resources).
Ok, so it looks like we definitely need to review all cleanup paths that
can be triggered from file close to make sure they all have the
drm_dev_enter/exit calls. Exec_queue cleanup is the first that comes to
mind as something that might need extra checks. Also, some remove paths
assume that all fds have been closed already (e.g. the GuC code will
fire a warning if there are open contexts at remove time), so this
definitely needs some attention. I am going to review all the
uC-related paths, as that's my area of expertise; can you have a look at
the more generic MM paths? It's probably also going to be good to add a
call to drm_dev_unplug() in the abort path anyway, so we have an
additional safety net if we get something wrong.

BTW, do we have any tests that cover unplugging while objects are still
allocated? I checked core_hotunplug, but it doesn't seem to cover this
scenario.
>
>>
>> If we decide to stick with drmm, we'll need to review all callbacks
>> to make sure they have the enter/exit calls where needed. E.g., the
>> permanent exec_queue cleanup (called from both the migration
>> and the GSC drmm callbacks) does an unconditional
>> xe_pm_runtime_get/put, which seems wrong if it can be called after
>> the HW has been detached (and implies that the function can end up
>> accessing the HW).
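>>
>> Something like this is what I'd expect instead (only a sketch; the
>> function name and the teardown step are made up for illustration):
>>
>>     static void xe_exec_queue_fini_permanent(struct xe_device *xe, void *arg)
>>     {
>>             struct xe_exec_queue *q = arg;
>>             int idx;
>>
>>             if (drm_dev_enter(&xe->drm, &idx)) {
>>                     /* only wake the HW if the device is still there */
>>                     xe_pm_runtime_get(xe);
>>                     /* ... teardown that talks to the GuC ... */
>>                     xe_pm_runtime_put(xe);
>>                     drm_dev_exit(idx);
>>             }
>>
>>             /* SW state can always be freed */
>>             xe_exec_queue_put(q);
>>     }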
>
> Yes, if we wanted to do the full thing then there is still lots of
> stuff missing, in addition to all the test coverage that would need
> adding.
The series from Matt B with the error injection should help a bit with
the testing, but we'll definitely need more.
>
>>
>> Thoughts?
>
> I think your patch is fine, but maybe it also makes sense to set all
> the bo pointers to NULL? It would be easy for some user to try to
> access the bo pointer after removal, before the drm_device is finally
> closed. Up to you though; either way, r-b.
Those are kernel objects, so it shouldn't be possible for userspace to
access them directly. The only thing userspace could do is trigger a
kernel op that uses one of those objects, but those should all be
disabled by the remove.

Setting the pointers to NULL would require reworking the xe_managed_bo*
calls to actually pass in the pointer that is going to store the BO
address, which IMO is not worth the time given the above.
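
Just to illustrate what I mean, the rework would be something along
these lines (hypothetical signature, not what the code looks like
today):

    /* caller passes the slot holding the BO pointer, so that the devm
     * release action can also clear it on removal */
    int xe_managed_bo_create_pin_map(struct xe_device *xe, struct xe_tile *tile,
                                     size_t size, u32 flags, struct xe_bo **slot);

with the release action doing "*slot = NULL" after the unpin, which is
more churn than it's worth here.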
Daniele
>
>>
>> Daniele
>>
>>> Reviewed-by: Matthew Auld <matthew.auld at intel.com>
>>>
>>>> ---
>>>> drivers/gpu/drm/xe/xe_bo.c | 6 +++---
>>>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>>>> index 3295bc92d7aa..45652d7e6fa6 100644
>>>> --- a/drivers/gpu/drm/xe/xe_bo.c
>>>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>>>> @@ -1576,7 +1576,7 @@ struct xe_bo *xe_bo_create_from_data(struct xe_device *xe, struct xe_tile *tile,
>>>>  	return bo;
>>>>  }
>>>>
>>>> -static void __xe_bo_unpin_map_no_vm(struct drm_device *drm, void *arg)
>>>> +static void __xe_bo_unpin_map_no_vm(void *arg)
>>>>  {
>>>>  	xe_bo_unpin_map_no_vm(arg);
>>>>  }
>>>> @@ -1591,7 +1591,7 @@ struct xe_bo *xe_managed_bo_create_pin_map(struct xe_device *xe, struct xe_tile
>>>>  	if (IS_ERR(bo))
>>>>  		return bo;
>>>>
>>>> -	ret = drmm_add_action_or_reset(&xe->drm, __xe_bo_unpin_map_no_vm, bo);
>>>> +	ret = devm_add_action_or_reset(xe->drm.dev, __xe_bo_unpin_map_no_vm, bo);
>>>>  	if (ret)
>>>>  		return ERR_PTR(ret);
>>>> @@ -1639,7 +1639,7 @@ int xe_managed_bo_reinit_in_vram(struct xe_device *xe, struct xe_tile *tile, str
>>>>  	if (IS_ERR(bo))
>>>>  		return PTR_ERR(bo);
>>>>
>>>> -	drmm_release_action(&xe->drm, __xe_bo_unpin_map_no_vm, *src);
>>>> +	devm_release_action(xe->drm.dev, __xe_bo_unpin_map_no_vm, *src);
>>>>  	*src = bo;
>>>>  	return 0;
>>