[PATCH 1/2] drm/sched: add WARN_ON and BUG_ON to drm_sched_fini

Dave Airlie <airlied at gmail.com>
Fri Nov 8 04:00:04 UTC 2024


On Wed, 18 Sept 2024 at 23:48, Christian König
<ckoenig.leichtzumerken at gmail.com> wrote:
>
> Tearing down the scheduler with jobs still on the pending list can
> lead to use after free issues. Add a warning if drivers try to
> destroy a scheduler which still has work pushed to the HW.
>
> When there are still entities with jobs the situation is even worse,
> since the dma_fences for those jobs can never signal. We can then only
> choose between potentially locking up core memory management and
> random memory corruption. When drivers really mess it up that badly,
> let them run into a BUG_ON().

I've been talking a bit to Phillip about this offline.

I'm not sure we should ever BUG_ON here; it seems like an extreme
answer. Considering we are saying that blocking userspace to let
things finish is bad, I think hitting a BUG_ON would be much worse.

I'd rather we WARN_ON, and consider just saying screw it and blocking
userspace on close.
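
Concretely, a minimal sketch of that variant of the loop in
drm_sched_fini() (untested, same shape as the patch quoted below, just
with the fatal check downgraded to a warning):

	for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
		struct drm_sched_rq *rq = sched->sched_rq[i];

		spin_lock(&rq->lock);
		list_for_each_entry(s_entity, &rq->entities, list) {
			/*
			 * Leaked jobs leave their dma_fences unsignaled,
			 * which is bad, but warning and carrying on is
			 * still better than taking the whole machine down.
			 */
			WARN_ON(spsc_queue_count(&s_entity->job_queue));

			/* Prevent reinsertion and mark job_queue as idle. */
			s_entity->stopped = true;
		}
		spin_unlock(&rq->lock);
		kfree(sched->sched_rq[i]);
	}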

If we really want to avoid the hang-on-close possibility (though I'm
mostly fine with that), then I think Sima's option is better: just keep
a reference on the driver, let userspace close, and have the driver
clean things up later when the fences signal.
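
A rough sketch of that deferred-teardown idea. All the names here
(my_driver, my_job, my_driver_release, ...) are hypothetical, just to
show the refcounting shape; each in-flight job would take a reference
on submit and register the callback with dma_fence_add_callback():

	#include <linux/kref.h>
	#include <linux/slab.h>
	#include <linux/dma-fence.h>
	#include <drm/gpu_scheduler.h>

	/* Hypothetical wrapper: the driver object owns the scheduler. */
	struct my_driver {
		struct kref ref;
		struct drm_gpu_scheduler sched;
	};

	struct my_job {
		struct drm_sched_job base;
		struct dma_fence_cb cb;
		struct my_driver *drv;	/* each in-flight job holds a ref */
	};

	static void my_driver_release(struct kref *ref)
	{
		struct my_driver *drv =
			container_of(ref, struct my_driver, ref);

		/* The last pending fence has signaled; teardown is safe. */
		drm_sched_fini(&drv->sched);
		kfree(drv);
	}

	static void my_job_done(struct dma_fence *f, struct dma_fence_cb *cb)
	{
		struct my_job *job = container_of(cb, struct my_job, cb);

		/* Drop the reference this job held on the driver. */
		kref_put(&job->drv->ref, my_driver_release);
	}

The driver drops its own reference on file close; the object lives on
until the last fence callback fires, so the leak stays managed.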

I think this should at least be good for Rust lifetimes.

Having an explicit memory leak is bad; having a managed memory leak is
less bad, because at least all the memory is still pointed to by
something and managed. At a guess, we'd have to keep the whole driver
and scheduler around to avoid having to write special free functions.
Unless there is some concept like TTM ghost objects we could get away
with, but I think having to signal the fence means we should keep all
the pieces.

Dave.

>
> Signed-off-by: Christian König <christian.koenig at amd.com>
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index f093616fe53c..8a46fab5cdc8 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -1333,17 +1333,34 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
>
>         drm_sched_wqueue_stop(sched);
>
> +       /*
> +        * Tearing down the scheduler while there are still unprocessed jobs
> +        * can lead to use-after-free issues in the scheduler fence.
> +        */
> +       WARN_ON(!list_empty(&sched->pending_list));
> +
>         for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
>                 struct drm_sched_rq *rq = sched->sched_rq[i];
>
>                 spin_lock(&rq->lock);
> -               list_for_each_entry(s_entity, &rq->entities, list)
> +               list_for_each_entry(s_entity, &rq->entities, list) {
> +                       /*
> +                        * The justification for this BUG_ON() is that tearing
> +                        * down the scheduler while jobs are pending leaves
> +                        * dma_fences unsignaled. Since the core memory
> +                        * management depends on those dma_fences eventually
> +                        * signaling, this can trivially lead to a system-wide
> +                        * stop because of locked-up memory management.
> +                        */
> +                       BUG_ON(spsc_queue_count(&s_entity->job_queue));
> +
>                         /*
>                          * Prevents reinsertion and marks job_queue as idle,
>                          * it will be removed from rq in drm_sched_entity_fini
>                          * eventually
>                          */
>                         s_entity->stopped = true;
> +               }
>                 spin_unlock(&rq->lock);
>                 kfree(sched->sched_rq[i]);
>         }
> --
> 2.34.1
>
