[PATCH v3] drm/scheduler re-insert Bailing job to avoid memleak
Steven Price
steven.price at arm.com
Fri Mar 26 09:07:53 UTC 2021
On 26/03/2021 02:04, Zhang, Jack (Jian) wrote:
> [AMD Official Use Only - Internal Distribution Only]
>
> Hi, Steve,
>
> Thank you for your detailed comments.
>
> But currently the patch is not finalized.
> We found some potential race conditions even with this patch. The solution is under discussion and hopefully we can find an ideal one.
> After that, I will start to consider whether it influences other drm drivers (besides amdgpu).
No problem. Please keep me CC'd; the suggestion of using reference
counts may be beneficial for Panfrost, as we already maintain a
reference count on top of struct drm_sched_job. So there may be scope
for cleaning up Panfrost afterwards even if your work doesn't directly
affect it.
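
To illustrate what I mean, here is a rough sketch of that kind of
layering (the my_job names are made up for illustration rather than
being Panfrost's actual structures):

#include <linux/kref.h>
#include <linux/slab.h>
#include <drm/gpu_scheduler.h>

struct my_job {
	struct drm_sched_job base;
	struct kref refcount;
	/* driver-specific fences, BOs, etc. would live here */
};

/* Called when the last reference is dropped; frees the job exactly once. */
static void my_job_release(struct kref *ref)
{
	struct my_job *job = container_of(ref, struct my_job, refcount);

	drm_sched_job_cleanup(&job->base);
	kfree(job);
}

/* The allocation path does kref_init(&job->refcount) once; every other
 * path that can still touch the job (timeout handler, completion IRQ)
 * takes and drops its own reference with the helpers below, so whichever
 * path finishes last is the one that frees the job. */
static void my_job_get(struct my_job *job)
{
	kref_get(&job->refcount);
}

static void my_job_put(struct my_job *job)
{
	kref_put(&job->refcount, my_job_release);
}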
Thanks,
Steve
> Best,
> Jack
>
> -----Original Message-----
> From: Steven Price <steven.price at arm.com>
> Sent: Monday, March 22, 2021 11:29 PM
> To: Zhang, Jack (Jian) <Jack.Zhang1 at amd.com>; dri-devel at lists.freedesktop.org; amd-gfx at lists.freedesktop.org; Koenig, Christian <Christian.Koenig at amd.com>; Grodzovsky, Andrey <Andrey.Grodzovsky at amd.com>; Liu, Monk <Monk.Liu at amd.com>; Deng, Emily <Emily.Deng at amd.com>; Rob Herring <robh at kernel.org>; Tomeu Vizoso <tomeu.vizoso at collabora.com>
> Subject: Re: [PATCH v3] drm/scheduler re-insert Bailing job to avoid memleak
>
> On 15/03/2021 05:23, Zhang, Jack (Jian) wrote:
>> [AMD Public Use]
>>
>> Hi, Rob/Tomeu/Steven,
>>
>> Would you please help review this patch for the panfrost driver?
>>
>> Thanks,
>> Jack Zhang
>>
>> -----Original Message-----
>> From: Jack Zhang <Jack.Zhang1 at amd.com>
>> Sent: Monday, March 15, 2021 1:21 PM
>> To: dri-devel at lists.freedesktop.org; amd-gfx at lists.freedesktop.org;
>> Koenig, Christian <Christian.Koenig at amd.com>; Grodzovsky, Andrey
>> <Andrey.Grodzovsky at amd.com>; Liu, Monk <Monk.Liu at amd.com>; Deng, Emily
>> <Emily.Deng at amd.com>
>> Cc: Zhang, Jack (Jian) <Jack.Zhang1 at amd.com>
>> Subject: [PATCH v3] drm/scheduler re-insert Bailing job to avoid
>> memleak
>>
>> re-insert Bailing jobs to avoid memory leak.
>>
>> V2: move re-insert step to drm/scheduler logic
>> V3: add panfrost's return value for bailing jobs in case it hits the
>> memleak issue.
>
> This commit message could do with some work - it's really hard to decipher what the actual problem you're solving is.
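>
> Just as an illustration (my wording, not something that needs to be
> copied verbatim), a commit message along these lines would make the
> problem much easier to follow:
>
>     drm_sched_job_timedout() removes the bad job from the pending list
>     before calling the driver's timedout_job() callback. If the
>     callback bails out early (for example because the job already
>     completed, or because a reset is already in progress) and never
>     calls drm_sched_stop(), the job is no longer on any list and is
>     never freed. Add a DRM_GPU_SCHED_STAT_BAILING return status so the
>     scheduler can re-insert such a job and free it later.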
>
>>
>> Signed-off-by: Jack Zhang <Jack.Zhang1 at amd.com>
>> ---
>> drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 4 +++-
>> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 8 ++++++--
>> drivers/gpu/drm/panfrost/panfrost_job.c | 4 ++--
>> drivers/gpu/drm/scheduler/sched_main.c | 8 +++++++-
>> include/drm/gpu_scheduler.h | 1 +
>> 5 files changed, 19 insertions(+), 6 deletions(-)
>>
> [...]
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c
>> b/drivers/gpu/drm/panfrost/panfrost_job.c
>> index 6003cfeb1322..e2cb4f32dae1 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>> @@ -444,7 +444,7 @@ static enum drm_gpu_sched_stat panfrost_job_timedout(struct drm_sched_job
>>  	 * spurious. Bail out.
>>  	 */
>>  	if (dma_fence_is_signaled(job->done_fence))
>> -		return DRM_GPU_SCHED_STAT_NOMINAL;
>> +		return DRM_GPU_SCHED_STAT_BAILING;
>>
>>  	dev_err(pfdev->dev, "gpu sched timeout, js=%d, config=0x%x, status=0x%x, head=0x%x, tail=0x%x, sched_job=%p",
>>  		js,
>> @@ -456,7 +456,7 @@ static enum drm_gpu_sched_stat panfrost_job_timedout(struct drm_sched_job
>>
>>  	/* Scheduler is already stopped, nothing to do. */
>>  	if (!panfrost_scheduler_stop(&pfdev->js->queue[js], sched_job))
>> -		return DRM_GPU_SCHED_STAT_NOMINAL;
>> +		return DRM_GPU_SCHED_STAT_BAILING;
>>
>>  	/* Schedule a reset if there's no reset in progress. */
>>  	if (!atomic_xchg(&pfdev->reset.pending, 1))
>
> This looks correct to me - in these two cases drm_sched_stop() is not called on the sched_job, so it looks like currently the job will be leaked.
>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c
>> b/drivers/gpu/drm/scheduler/sched_main.c
>> index 92d8de24d0a1..a44f621fb5c4 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -314,6 +314,7 @@ static void drm_sched_job_timedout(struct work_struct *work)
>>  {
>>  	struct drm_gpu_scheduler *sched;
>>  	struct drm_sched_job *job;
>> +	int ret;
>>
>>  	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
>>
>> @@ -331,8 +332,13 @@ static void drm_sched_job_timedout(struct work_struct *work)
>>  		list_del_init(&job->list);
>>  		spin_unlock(&sched->job_list_lock);
>>
>> -		job->sched->ops->timedout_job(job);
>> +		ret = job->sched->ops->timedout_job(job);
>>
>> +		if (ret == DRM_GPU_SCHED_STAT_BAILING) {
>> +			spin_lock(&sched->job_list_lock);
>> +			list_add(&job->node, &sched->ring_mirror_list);
>> +			spin_unlock(&sched->job_list_lock);
>> +		}
>
> I think we could really do with a comment somewhere explaining what "bailing" means in this context (see the sketch after the two cases below). For Panfrost there are two cases:
>
> * The GPU job actually finished while the timeout code was running (done_fence is signalled).
>
> * The GPU is already in the process of being reset (Panfrost has multiple queues, so most likely a bad job in another queue).
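>
> Something along these lines on the new enum value, covering both of
> those cases, would already help (the wording is only a sketch):
>
>     /*
>      * DRM_GPU_SCHED_STAT_BAILING: the timeout handler returned without
>      * stopping the scheduler (e.g. the job completed while the timeout
>      * was being handled, or a reset was already in progress), so the
>      * scheduler itself must re-insert and eventually free the job.
>      */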
>
> I'm also not convinced that (for Panfrost) it makes sense to be adding the jobs back to the list. For the first case above clearly the job could just be freed (it's complete). The second case is more interesting and Panfrost currently doesn't handle this well. In theory the driver could try to rescue the job ('soft stop' in Mali language) so that it could be resubmitted. Panfrost doesn't currently support that, so attempting to resubmit the job is almost certainly going to fail.
>
> It's on my TODO list to look at improving Panfrost in this regard, but sadly still quite far down.
>
> Steve
>
>>  		/*
>>  		 * Guilty job did complete and hence needs to be manually removed
>>  		 * See drm_sched_stop doc.
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index 4ea8606d91fe..8093ac2427ef 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -210,6 +210,7 @@ enum drm_gpu_sched_stat {
>>  	DRM_GPU_SCHED_STAT_NONE, /* Reserve 0 */
>>  	DRM_GPU_SCHED_STAT_NOMINAL,
>>  	DRM_GPU_SCHED_STAT_ENODEV,
>> +	DRM_GPU_SCHED_STAT_BAILING,
>>  };
>>
>>  /**
>>
>