[PATCH v2 0/3] drm: commit_work scheduling

Ville Syrjälä ville.syrjala at linux.intel.com
Fri Oct 2 11:05:44 UTC 2020


On Fri, Oct 02, 2020 at 01:52:56PM +0300, Ville Syrjälä wrote:
> On Thu, Oct 01, 2020 at 05:25:55PM +0200, Daniel Vetter wrote:
> > On Thu, Oct 1, 2020 at 5:15 PM Rob Clark <robdclark at gmail.com> wrote:
> > >
> > > On Thu, Oct 1, 2020 at 12:25 AM Daniel Vetter <daniel at ffwll.ch> wrote:
> > > >
> > > > On Wed, Sep 30, 2020 at 11:16 PM Rob Clark <robdclark at gmail.com> wrote:
> > > > >
> > > > > From: Rob Clark <robdclark at chromium.org>
> > > > >
> > > > > The Android userspace treats the display pipeline as a realtime problem.
> > > > > And arguably, if your goal is to not miss frame deadlines (ie. vblank),
> > > > > it is.  (See https://lwn.net/Articles/809545/ for the best explanation
> > > > > that I found.)
> > > > >
> > > > > But this presents a problem with using workqueues for non-blocking
> > > > > atomic commit_work(), because the SCHED_FIFO userspace thread(s) can
> > > > > preempt the worker.  Which is not really the outcome you want: once
> > > > > the required fences are scheduled, you want to push the atomic commit
> > > > > down to hw ASAP.
> > > > >
> > > > > But the decision of whether commit_work should be RT or not really
> > > > > depends on what userspace is doing.  For a pure CFS userspace display
> > > > > pipeline, commit_work() should remain SCHED_NORMAL.
> > > > >
> > > > > To handle this, convert non-blocking commit_work() to use per-CRTC
> > > > > kthread workers, instead of system_unbound_wq.  Per-CRTC workers are
> > > > > used to avoid serializing commits when userspace is using a per-CRTC
> > > > > update loop.  And the last patch exposes the task id to userspace as
> > > > > a CRTC property, so that userspace can adjust the priority and sched
> > > > > policy to fit its needs.
> > > > >
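> > > > > A rough sketch of the idea (simplified; the crtc->worker field and
> > > > > the commit_work_fn name here are illustrative rather than exactly
> > > > > what the patches use, and commit_work becomes a struct kthread_work):
> > > > >
> > > > >   /* at CRTC init: one dedicated kthread worker per CRTC */
> > > > >   crtc->worker = kthread_create_worker(0, "card%d-crtc%d",
> > > > >                                        dev->primary->index,
> > > > >                                        drm_crtc_index(crtc));
> > > > >   if (IS_ERR(crtc->worker))
> > > > >           return PTR_ERR(crtc->worker);
> > > > >
> > > > >   /* nonblocking commit path: queue on the CRTC's worker
> > > > >    * instead of system_unbound_wq
> > > > >    */
> > > > >   kthread_init_work(&state->commit_work, commit_work_fn);
> > > > >   kthread_queue_work(crtc->worker, &state->commit_work);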
> > > > >
> > > > > v2: Drop client cap and in-kernel setting of priority/policy in
> > > > >     favor of exposing the kworker tid to userspace so that user-
> > > > >     space can set priority/policy.
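> > > > >
> > > > > With the tid exposed, userspace can then use the normal scheduler
> > > > > API (or simply chrt -f -p <prio> <tid> from a shell); e.g.
> > > > > something along these lines, with the tid read out of the new
> > > > > CRTC property and an arbitrary priority:
> > > > >
> > > > >   #include <sched.h>
> > > > >   #include <stdio.h>
> > > > >
> > > > >   struct sched_param param = { .sched_priority = 16 };
> > > > >
> > > > >   /* tid: the kworker task id read from the CRTC property */
> > > > >   if (sched_setscheduler(tid, SCHED_FIFO, &param))
> > > > >           perror("sched_setscheduler");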
> > > >
> > > > Yeah, I think this looks more reasonable. Still a somewhat irksome
> > > > interface, so I'd like to get some kworker/rt ack on this. Other
> > > > open questions:
> > > > - needs userspace, the usual drill
> > >
> > > fwiw, right now the userspace is "modetest + chrt".  *Probably* the
> > > userspace will become a standalone helper or daemon, mostly because
> > > the Chrome gpu-process sandbox does not allow setting SCHED_FIFO.  I'm
> > > still entertaining the possibility of switching between rt and cfs
> > > depending on what is in the foreground (ie. only use rt for Android
> > > apps).
> > >
> > > > - we also need this for vblank workers, otherwise this won't work
> > > > for drivers that need those because of another priority inversion.
> > >
> > > I have a thought on that, see below..
> > 
> > Hm, not seeing anything about vblank worker below?
> > 
> > > > - we probably want some indication of whether this actually does
> > > > something useful, since not all drivers use the atomic commit
> > > > helpers. Not sure how to do that.
> > >
> > > I'm leaning towards converting the other drivers over to use the
> > > per-crtc kthread worker, and then dropping 'commit_work' from the
> > > atomic state.  I can add a patch for that, but figured I could
> > > postpone that churn until there is some buy-in on this whole idea.
> > 
> > i915 has its own commit code; it's not even using the current commit
> > helpers (nor commit_work). Not sure how much other fun there is.
> 
> I don't think we want per-crtc threads for this in i915. It seems
> to me easier to guarantee atomicity across multiple crtcs if
> we just commit them from the same thread.

Oh, and we may have to commit things in a very specific order
to guarantee the hw doesn't fall over, so yeah, a per-crtc
thread is definitely a no-go.

I don't even understand the serialization argument. If the commits
are truly independent, then why isn't the unbound wq enough to avoid
the serialization? It should just spin up a new thread for each
commit, no?
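
For reference, the current nonblocking path in the helpers is roughly
this (paraphrasing drm_atomic_helper_commit()):

  INIT_WORK(&state->commit_work, commit_work);
  /* ... fence/dependency setup elided ... */
  if (nonblock)
          queue_work(system_unbound_wq, &state->commit_work);

and system_unbound_wq can run many work items concurrently, so
independent commits shouldn't be serializing on each other there in
the first place.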

-- 
Ville Syrjälä
Intel

