[RFC PATCH v3 02/11] cgroup: Add mechanism to register DRM devices

Kenny Ho y2kenny at gmail.com
Wed Jun 26 21:58:19 UTC 2019


On Wed, Jun 26, 2019 at 5:04 PM Daniel Vetter <daniel at ffwll.ch> wrote:
> On Wed, Jun 26, 2019 at 10:37 PM Kenny Ho <y2kenny at gmail.com> wrote:
> > (sending again, I keep missing the reply-all in gmail.)
> You can make it the default somewhere in the gmail options.
Um... interesting, my option was actually not set (neither reply nor reply-all.)

> > On Wed, Jun 26, 2019 at 11:56 AM Daniel Vetter <daniel at ffwll.ch> wrote:
> > >
> > > Why the separate, explicit registration step? I think a simpler design for
> > > drivers would be that we set up cgroups if there's anything to be
> > > controlled, and then for GEM drivers the basic GEM stuff would be set up
> automatically (there's really no reason not to I think).
> >
> > Is this what you mean with the comment about drm_dev_register below?
> > I think I understand what you are saying but not super clear.  Are you
> > suggesting the use of driver feature bits (drm_core_check_feature,
> > etc.) similar to the way Brian Welty did in his proposal in May?
>
> Also not exactly a fan of driver feature bits tbh. What I had in mind was:
>
> - For stuff like the GEM accounting which we can do for all drivers
> easily (we can't do the enforcement, that needs a few changes), just
> roll it out for everyone. I.e. if you enable the DRMCG Kconfig, all
> DRIVER_GEM would get that basic gem cgroup accounting.
>
> - for other bits the driver just registers certain things, like "I can
> enforce gem limits" or "I have gpu memory regions vram, tt, and system
> and can enforce them" in their normal driver setup. Then at
> drm_dev_register time we register all these additional cgroups, like
> we today register all the other interfaces and pieces of a drm_device
> (drm_minor, drm_connectors, debugfs files, sysfs stuff, all these
> things).
>
> Since the concepts are still a bit in flux, let's take an example from
> the modeset side:
> - drivers call drm_connector_init() to create the connector object
> - drm_dev_register() also sets up all the public interfaces for that
> connector (debugfs, sysfs, ...)
>
> I think a similar setup would be good for cgroups here, you just
> register your special ttm_mem_reg or whatever, and the magic happens
> automatically.

Ok, I will look into those (I am not too familiar with them at this point.)
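
To check that I understand the shape of it, here is a rough sketch
(every name below is made up for illustration; none of this is code
from the patch series).  For the basic accounting, something keyed off
DRIVER_GEM in common code:

	/* in drm_gem_object_init(), with the DRMCG Kconfig enabled:
	 * charge the GEM allocation to the current task's drm cgroup.
	 * drmcg_charge() and task_drmcg() are hypothetical helpers. */
	if (drm_core_check_feature(dev, DRIVER_GEM))
		drmcg_charge(task_drmcg(current), dev, obj->size);

And for the enforcement bits, the driver declares what it can enforce
at init time, and drm_dev_register() publishes the cgroup files the
same way it registers the minor, sysfs, debugfs and connector
interfaces today:

	/* at driver init: */
	drmcg_declare_region(dev, "vram", vram_size);	/* hypothetical */
	drmcg_declare_region(dev, "tt", tt_size);

	/* later, in drm_dev_register(): */
	drmcg_register_device(dev);	/* hypothetical */

Is that the right idea?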

> > > I have no idea, but is this guaranteed to get them all?
> >
> > I believe so, based on my understanding of
> > css_for_each_descendant_pre and how I am starting from the root
> > cgroup.  Hopefully I didn't miss anything.
>
> Well it's rcu, so I expect it'll race with concurrent
> addition/removal. And the kerneldoc has some complicated-sounding
> comments about how to synchronize that with some locks that I don't
> fully understand, but I think you're not taking any additional
> locking, so I'm not sure this all works correctly ...
>
> Do we still need the init_drmcgrp stuff if we'd just embed? That would
> probably be the simplest way to solve this all :-)

I will need to dig into it a bit more to know for sure, but I think I
still need the init_drmcgrp stuff.  I implemented it like this because
the cgroup subsystem appears to be initialized before the drm
subsystem, so the root cgroup does not know about any drm devices and
the per-device default limits are not set.  In theory, I should only
need to set the root cgroup (so I wouldn't need
css_for_each_descendant_pre, which requires the rcu read lock.)  But I
am not 100% confident that no additional cgroup can be added to the
hierarchy between cgroup subsystem init and drm subsystem init.
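
For reference, the kind of walk I mean looks roughly like this
(simplified; drmcg_root, css_to_drmcg and drmcg_apply_default are
stand-in names, not the exact code in the patch):

	rcu_read_lock();
	css_for_each_descendant_pre(pos, &drmcg_root.css) {
		/* pos may be a css that is going offline; a safe
		 * version would probably need css_tryget_online(pos)
		 * here before touching it. */
		drmcg_apply_default(css_to_drmcg(pos), dev);
	}
	rcu_read_unlock();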

Alternatively, I can protect it with an additional mutex, but I am not
sure that's needed.
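
If it turns out to be needed, I am picturing something like this
(drmcg_mutex is a made-up name):

	static DEFINE_MUTEX(drmcg_mutex);

	/* in the css_alloc callback: */
	mutex_lock(&drmcg_mutex);
	/* ... inherit per-device defaults from the parent css ... */
	mutex_unlock(&drmcg_mutex);

	/* ... and around device registration: */
	mutex_lock(&drmcg_mutex);
	drmcg_register_device(dev);	/* hypothetical, as sketched above */
	mutex_unlock(&drmcg_mutex);

That would close the window between the descendant walk and a new
cgroup being created, at the cost of serializing css creation.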

Regards,
Kenny
