[PATCH v1 1/6] drm/lima: fix devfreq refcount imbalance for job timeouts

Erico Nunes nunes.erico at gmail.com
Tue Jan 23 23:19:16 UTC 2024


On Fri, Jan 19, 2024 at 2:50 AM Qiang Yu <yuq825 at gmail.com> wrote:
>
> On Thu, Jan 18, 2024 at 7:14 PM Erico Nunes <nunes.erico at gmail.com> wrote:
> >
> > On Thu, Jan 18, 2024 at 2:36 AM Qiang Yu <yuq825 at gmail.com> wrote:
> > >
> > > So this is caused by the same job triggering both the done and the
> > > timeout handling? I think a better way to solve this is to make sure
> > > only one handler (done or timeout) processes the job, instead of
> > > just making lima_pm_idle() unique.
> >
> > It's not very clear to me how to best ensure that, with the drm_sched
> > software timeout and the irq happening potentially at the same time.
> This could be done by stopping the scheduler from running more jobs
> and disabling the GP/PP interrupt. Then, after synchronizing the irq,
> no new irq can come in while we handle the timeout.
>
> > I think patch 4 in this series describes and covers the most common
> > case that this would be hit. So maybe now this patch could be dropped
> > in favour of just that one.
> Yes.

While testing with this patch dropped in order to send v2, I managed to
reproduce the issue again.
Looking at a trace, this seems to be what actually happened with the
test workload:

lima_sched_timedout_job starts (the fence is not signaled yet, so the
new fence check passes) -> the irq arrives and preempts
lima_sched_timedout_job, the fence gets signaled and lima_pm_idle() is
called once from lima_sched_pipe_task_done -> lima_sched_timedout_job
resumes and calls lima_pm_idle() again
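
To make explicit what the patch is trying to guarantee: the idle
transition has to be single-shot per task, so that whichever of the two
paths loses the race does not drop the devfreq/runtime PM reference a
second time. As a rough illustration only (the "task_running" flag
below is made up for the example, this is not the literal diff), the
guard boils down to something like:

	/* Illustrative sketch, not the actual patch: let only the first
	 * of the done/timeout paths perform the idle transition.
	 * "task_running" is a hypothetical atomic flag set when the task
	 * is queued to the hardware.
	 */
	static void lima_sched_pipe_pm_idle_once(struct lima_sched_pipe *pipe)
	{
		/* atomic_cmpxchg() returns the old value, so only one
		 * caller can observe the 1 -> 0 transition and drop the
		 * reference.
		 */
		if (atomic_cmpxchg(&pipe->task_running, 1, 0) == 1)
			lima_pm_idle(pipe->ldev);
	}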

So I think we still need this patch to at least prevent the bug with
the current software timeout. If we move to the hardware watchdog
timeout later we might be able to remove it anyway, but that will
still require separate work and testing.
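
For completeness, I understand the timeout-side serialization suggested
above (stop the scheduler, mask the GP/PP irq, then look at the fence)
would be roughly along these lines. This is an untested sketch, and
pipe->irq is an assumed field here; the real irq lines live in the
GP/PP ip blocks:

	static enum drm_gpu_sched_stat lima_sched_timedout_job(struct drm_sched_job *job)
	{
		struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
		struct lima_sched_task *task = to_lima_task(job);

		/* Keep the scheduler from pushing more jobs at this pipe
		 * and mask the GP/PP interrupt. disable_irq() also waits
		 * for a handler that is already running, so from here on
		 * the done path cannot race with the timeout handling.
		 */
		drm_sched_stop(&pipe->base, job);
		disable_irq(pipe->irq);

		if (dma_fence_is_signaled(task->fence)) {
			/* The job actually completed and the done path
			 * already did the pm idle transition, so just
			 * resume normally.
			 */
			enable_irq(pipe->irq);
			drm_sched_start(&pipe->base, true);
			return DRM_GPU_SCHED_STAT_NOMINAL;
		}

		/* ... hard reset and recovery as in the current handler ... */

		enable_irq(pipe->irq);
		drm_sched_start(&pipe->base, true);
		return DRM_GPU_SCHED_STAT_NOMINAL;
	}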

Erico

