[Freedreno] [RFC 2/3] drm/msm: Rework get_comm_cmdline() helper
Daniel Vetter
daniel at ffwll.ch
Thu Apr 27 09:39:51 UTC 2023
On Fri, Apr 21, 2023 at 07:47:26AM -0700, Rob Clark wrote:
> On Fri, Apr 21, 2023 at 2:33 AM Emil Velikov <emil.l.velikov at gmail.com> wrote:
> >
> > Greetings all,
> >
> > Sorry for the delay - Easter Holidays, food coma and all that :-)
> >
> > On Tue, 18 Apr 2023 at 15:31, Rob Clark <robdclark at gmail.com> wrote:
> > >
> > > On Tue, Apr 18, 2023 at 1:34 AM Daniel Vetter <daniel at ffwll.ch> wrote:
> > > >
> > > > On Tue, Apr 18, 2023 at 09:27:49AM +0100, Tvrtko Ursulin wrote:
> > > > >
> > > > > On 17/04/2023 21:12, Rob Clark wrote:
> > > > > > From: Rob Clark <robdclark at chromium.org>
> > > > > >
> > > > > > Make it work in terms of ctx so that it can be re-used for fdinfo.
> > > > > >
> > > > > > Signed-off-by: Rob Clark <robdclark at chromium.org>
> > > > > > ---
> > > > > >  drivers/gpu/drm/msm/adreno/adreno_gpu.c |  4 ++--
> > > > > >  drivers/gpu/drm/msm/msm_drv.c           |  2 ++
> > > > > >  drivers/gpu/drm/msm/msm_gpu.c           | 13 ++++++-------
> > > > > >  drivers/gpu/drm/msm/msm_gpu.h           | 12 ++++++++++--
> > > > > >  drivers/gpu/drm/msm/msm_submitqueue.c   |  1 +
> > > > > >  5 files changed, 21 insertions(+), 11 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > > > > > index bb38e728864d..43c4e1fea83f 100644
> > > > > > --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > > > > > +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
> > > > > > @@ -412,7 +412,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
> > > > > >  	/* Ensure string is null terminated: */
> > > > > >  	str[len] = '\0';
> > > > > >  
> > > > > > -	mutex_lock(&gpu->lock);
> > > > > > +	mutex_lock(&ctx->lock);
> > > > > >  
> > > > > >  	if (param == MSM_PARAM_COMM) {
> > > > > >  		paramp = &ctx->comm;
> > > > > > @@ -423,7 +423,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
> > > > > >  	kfree(*paramp);
> > > > > >  	*paramp = str;
> > > > > >  
> > > > > > -	mutex_unlock(&gpu->lock);
> > > > > > +	mutex_unlock(&ctx->lock);
> > > > > >  
> > > > > >  	return 0;
> > > > > >  }
> > > > > > diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
> > > > > > index 3d73b98d6a9c..ca0e89e46e13 100644
> > > > > > --- a/drivers/gpu/drm/msm/msm_drv.c
> > > > > > +++ b/drivers/gpu/drm/msm/msm_drv.c
> > > > > > @@ -581,6 +581,8 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
> > > > > >  	rwlock_init(&ctx->queuelock);
> > > > > >  	kref_init(&ctx->ref);
> > > > > >  
> > > > > > +	ctx->pid = get_pid(task_pid(current));
> > > > >
> > > > > Would it simplify things for msm if DRM core had an up to date file->pid as
> > > > > proposed in
> > > > > https://patchwork.freedesktop.org/patch/526752/?series=109902&rev=4 ? It
> > > > > gets updated if ioctl issuer is different than fd opener and this being
> > > > > context_init here reminded me of it. Maybe you wouldn't have to track the
> > > > > pid in msm?
> > >
> > > The problem is that we also need this for gpu devcore dumps, which
> > > could happen after the drm_file is closed. The ctx can outlive the
> > > file.
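That lifetime issue is why the patch has the ctx take its own reference
on the pid. A sketch of the matching release side, based on msm's
existing kref release function (presumably the one-line
msm_submitqueue.c change in the diffstat above, though the details here
are guessed):

static void __msm_file_private_destroy(struct kref *kref)
{
	struct msm_file_private *ctx = container_of(kref,
		struct msm_file_private, ref);

	/* ... existing teardown of submitqueues, aspace, etc ... */

	/* Balances the get_pid() added in context_init() above: */
	put_pid(ctx->pid);
	kfree(ctx);
}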
> > >
> > I think we all kept forgetting about that. MSM has had support for
> > ages, while AMDGPU is only the second driver to land support - just
> > a release ago.
> >
> > > But the ctx->pid has the same problem as the existing file->pid when
> > > it comes to Xorg.. hopefully over time that problem just goes away.
> >
> > Out of curiosity: what do you mean with "when it comes to Xorg" - the
> > "was_master" handling or something else?
>
> The problem is that Xorg is the one to open the drm fd, and then
> passes the fd to the client.. so the pid of the drm_file is the Xorg
> pid, not the client's, making it not terribly informative.
>
> Tvrtko's patch linked above would address that for drm_file, but not
> for other driver-internal usages. Maybe it could be wired up as a
> helper so that drivers don't have to re-invent that dance. Idk, I
> have to think about it.
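For illustration, such a shared helper might look roughly like the
sketch below. The name drm_file_update_pid() and the RCU-managed
file->pid it assumes follow Tvrtko's proposed series, not the DRM core
as it exists today:

/* Sketch: re-record the pid if the ioctl issuer differs from the fd
 * opener, so file->pid points at the actual client rather than Xorg.
 */
void drm_file_update_pid(struct drm_file *file_priv)
{
	struct pid *pid, *old;

	/* Fast path: the issuer is still the opener, nothing to do. */
	if (task_tgid(current) == rcu_access_pointer(file_priv->pid))
		return;

	pid = get_pid(task_tgid(current));
	old = rcu_replace_pointer(file_priv->pid, pid, 1);

	/* Let lockless readers finish before dropping the old ref. */
	synchronize_rcu();
	put_pid(old);
}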
>
> Btw, with my WIP drm sched fence signalling patch, lockdep is unhappy
> when gpu devcore dumps are triggered. I'm still pondering how to
> rework the locking so that anything coming from fs (i.e.
> show_fdinfo()) is decoupled from anything that happens in the fence
> signalling path. But I will repost this series once I get that sorted
> out.
So the cleanest approach imo is to push most of the capturing into a
worker that's entirely decoupled. If you have terminal contexts (i.e. on
the first hang they stop all further cmd submission, which is anyway
what vk/arb_robustness want), then you don't have to capture at tdr
time, because there's no subsequent batch that will wreck the state.
But this only works if your gpu contexts don't have recoverable
semantics.
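A rough sketch of that shape, with made-up names (the faulted flag and
the crash-work struct are illustrative, not actual msm code):

/* Mark the context terminal in the TDR path, defer the capture: */
struct msm_gpu_crash_work {
	struct work_struct work;
	struct msm_gpu *gpu;
	struct msm_file_private *ctx;	/* holds its own ctx reference */
};

static void crash_capture_worker(struct work_struct *work)
{
	struct msm_gpu_crash_work *cw =
		container_of(work, struct msm_gpu_crash_work, work);

	/* Normal process context: gpu->lock, GFP_KERNEL and friends
	 * are all fine here, nothing can deadlock against fence
	 * signalling. Walk the hw state and hand it to dev_coredumpv().
	 */

	msm_file_private_put(cw->ctx);
	kfree(cw);
}

static void gpu_hang_notify(struct msm_gpu *gpu,
			    struct msm_file_private *ctx)
{
	struct msm_gpu_crash_work *cw;

	/* Terminal context: refuse all further submissions first, so
	 * no subsequent batch can wreck the state before the worker
	 * runs.
	 */
	WRITE_ONCE(ctx->faulted, true);

	cw = kzalloc(sizeof(*cw), GFP_ATOMIC);	/* TDR path, can't sleep */
	if (!cw)
		return;

	INIT_WORK(&cw->work, crash_capture_worker);
	cw->gpu = gpu;
	cw->ctx = msm_file_private_get(ctx);
	queue_work(system_unbound_wq, &cw->work);
}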
If you can't do that, it's a _lot_ of GFP_ATOMIC and trylock and bailing
out if any of them fails :-/
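Roughly this shape, i.e. a hand-wavy sketch of capturing directly in
the TDR path (the function is made up; only the pattern matters):

static void capture_in_tdr_path(struct msm_gpu *gpu)
{
	struct msm_gpu_state *state;

	/* Can't sleep: trylock only, skip the dump if contended. */
	if (!mutex_trylock(&gpu->lock))
		return;

	state = kzalloc(sizeof(*state), GFP_ATOMIC);
	if (!state)
		goto out_unlock;

	/* ... every further allocation is also GFP_ATOMIC, bail out
	 * (freeing what was captured so far) on the first failure,
	 * then hand the state off to the coredump machinery ...
	 */

out_unlock:
	mutex_unlock(&gpu->lock);
}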
-Daniel
>
> BR,
> -R
>
> >
> > > I guess I could do a similar dance to your patch to update the pid
> > > whenever (for example) a submitqueue is created.
> > >
> > > > Can we go one step further and let the drm fdinfo stuff print these new
> > > > additions? Consistency across drivers and all that.
> > >
> > > Hmm, I guess I could _also_ store the overridden comm/cmdline in
> > > drm_file. I still need to track it in ctx (msm_file_private) because
> > > I could need it after the file is closed.
> > >
> > > Maybe it could be useful to have a gl extension to let the app set
> > > a name on the context so that this is useful beyond native-ctx
> > > (i.e. maybe it would be nice to see that "chrome: lwn.net" is using
> > > less gpu memory than "chrome: phoronix.com", etc)
> > >
> >
> > /me awaits the series hitting the respective websites ;-)
> >
> > But seriously - the series from Tvrtko (thanks for the link, will
> > check in a moment) makes sense. Although given the lifespan issue
> > mentioned above, I don't think it's applicable here.
> >
> > So if it were me, I would consider the two orthogonal for the
> > short/mid term. Fwiw this and patch 1/3 are:
> > Reviewed-by: Emil Velikov <emil.l.velikov at gmail.com>
> >
> > HTH
> > -Emil
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch