[PATCH v1 4/6] drm/lima: handle spurious timeouts due to high irq latency
Qiang Yu
yuq825 at gmail.com
Sun Jan 21 11:20:14 UTC 2024
On Sun, Jan 21, 2024 at 5:56 PM Hillf Danton <hdanton at sina.com> wrote:
>
> On Wed, 17 Jan 2024 04:12:10 +0100 Erico Nunes <nunes.erico at gmail.com>
> >
> > @@ -401,9 +399,33 @@ static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job
> > struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
> > struct lima_sched_task *task = to_lima_task(job);
> > struct lima_device *ldev = pipe->ldev;
> > + struct lima_ip *ip = pipe->processor[0];
> > +
> > + /*
> > + * If the GPU managed to complete this job's fence, the timeout is
> > + * spurious. Bail out.
> > + */
> > + if (dma_fence_is_signaled(task->done_fence)) {
> > + DRM_WARN("%s spurious timeout\n", lima_ip_name(ip));
> > + return DRM_GPU_SCHED_STAT_NOMINAL;
> > + }
>
> Given the 500ms in lima_sched_pipe_init(), no timeout is spurious by
> definition, so stop selling a bandaid like this when you have options
> like locating the reasons behind the timeout.
This change does look like it aims at 2 FPS apps. Maybe 500ms is too short
for weak mali4x0 GPUs (where 2 FPS apps are more likely to appear). AMD/NV
GPUs use a 10s timeout, so increasing the timeout seems like an equivalent
and better way?
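
For reference, a rough sketch of what that could look like in
lima_sched_pipe_init() (drivers/gpu/drm/lima/lima_sched.c). The
sched_timeout_ms module parameter handling is how I read the current
code; the 10000ms fallback is only an assumed value picked to mirror the
~10s used by the desktop drivers, not something decided in this thread:

	/*
	 * Sketch only: keep the sched_timeout_ms override but raise the
	 * built-in fallback from 500ms to an assumed 10000ms, so a slow
	 * frame on a weak mali4x0 is not reported as a hung job.
	 */
	unsigned int timeout = lima_sched_timeout_ms > 0 ?
			       lima_sched_timeout_ms : 10000;

	/*
	 * timeout is still converted with msecs_to_jiffies() and handed to
	 * drm_sched_init(), so genuinely stuck jobs are still caught,
	 * just with far more headroom for slow frames.
	 */

Anyone who prefers the old behavior could still force it back with
lima.sched_timeout_ms=500 on the kernel command line, if I read the
module parameter right.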
> > +
> > + /*
> > + * Lima IRQ handler may take a long time to process an interrupt
> > + * if there is another IRQ handler hogging the processing.
> > + * In order to catch such cases and not report spurious Lima job
> > + * timeouts, synchronize the IRQ handler and re-check the fence
> > + * status.
> > + */
> > + synchronize_irq(ip->irq);
> > +
> > + if (dma_fence_is_signaled(task->done_fence)) {
> > + DRM_WARN("%s unexpectedly high interrupt latency\n", lima_ip_name(ip));
> > + return DRM_GPU_SCHED_STAT_NOMINAL;
> > + }
> >
> > if (!pipe->error)
> > - DRM_ERROR("lima job timeout\n");
> > + DRM_ERROR("%s lima job timeout\n", lima_ip_name(ip));
> >
> > drm_sched_stop(&pipe->base, &task->base);
> >