[RFC PATCH v3 04/11] drm, cgroup: Add total GEM buffer allocation limit

Daniel Vetter daniel at ffwll.ch
Thu Jun 27 05:43:20 UTC 2019


On Wed, Jun 26, 2019 at 06:41:32PM -0400, Kenny Ho wrote:
> On Wed, Jun 26, 2019 at 5:41 PM Daniel Vetter <daniel at ffwll.ch> wrote:
> > On Wed, Jun 26, 2019 at 05:27:48PM -0400, Kenny Ho wrote:
> > > On Wed, Jun 26, 2019 at 12:05 PM Daniel Vetter <daniel at ffwll.ch> wrote:
> > > > So what happens when you start a lot of threads all at the same time,
> > > > allocating gem bos? Also it would be nice if we could roll out at least
> > > > the accounting part of this cgroup to all GEM drivers.
> > >
> > > When there is a large number of allocations, they will be checked in
> > > sequence within a device (since I used a per-device mutex in the
> > > check.)  Are you suggesting the overhead here is significant enough
> > > to be a bottleneck?  The accounting part should be available to all
> > > GEM drivers (unless I missed something) since the chg and unchg
> > > functions are called via the generic drm_gem_private_object_init and
> > > drm_gem_object_release.
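
As a sketch, this is how I read the generic hooking (drmcg_chg()/drmcg_unchg()
and their exact placement here are my guesses for illustration; only
drm_gem_private_object_init() and drm_gem_object_release() are the real
generic entry points):

/*
 * Sketch: charge on generic init, uncharge on generic release.  The
 * drmcg_chg()/drmcg_unchg() helpers are invented names, not the ones
 * in the series.
 */
void drm_gem_private_object_init(struct drm_device *dev,
				 struct drm_gem_object *obj, size_t size)
{
	/* ... existing initialization of obj ... */

	drmcg_chg(dev, size);	/* account against the caller's cgroup */
}

void drm_gem_object_release(struct drm_gem_object *obj)
{
	drmcg_unchg(obj->dev, obj->size);

	/* ... existing teardown ... */
}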
> >
> > thread 1: checks limits, still under the total
> >
> > thread 2: checks limits, still under the total
> >
> > thread 1: allocates, still under
> >
> > thread 2: allocates, now over the limit
> >
> > I think the check and chg need to be one step, or this won't work. Or I'm
> > missing something somewhere.
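
Roughly what I mean, as a sketch only (drmcg_try_chg(), struct
drmcg_device_resource and the bo_total/bo_limit fields are invented names for
illustration, not your patch's API):

/*
 * Check and charge under the same lock, so two threads can't both pass
 * the limit check and then both charge past the limit.
 */
static int drmcg_try_chg(struct drmcg_device_resource *res, u64 size)
{
	int ret = 0;

	mutex_lock(&res->mutex);
	if (res->bo_total + size > res->bo_limit)
		ret = -ENOMEM;		/* would exceed the cgroup limit */
	else
		res->bo_total += size;	/* charge while still holding the lock */
	mutex_unlock(&res->mutex);

	return ret;
}

The uncharge on release is then just the subtraction under the same lock.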
> 
> Ok, I see what you are saying.
> 
> > Wrt rolling out the accounting for all drivers: Since you also roll out
> > enforcement in this patch I'm not sure whether the accounting part is
> > fully stand-alone. And as discussed a bit on an earlier patch, I think for
> > DRIVER_GEM we should set up the accounting cgroup automatically.
> I think I should be able to split the commit and restructure things a bit.
> 
> > > > What's the underlying technical reason for not allowing sharing across
> > > > cgroups?
> > > To be clear, sharing across cgroups is allowed; the buffer just needs
> > > to be allocated by a process that is parent to the cgroup.  So in the
> > > case of xorg allocating buffers for clients, xorg would be in the
> > > root cgroup and the buffer can be passed around by different clients
> > > (in the root or other cgroups.)  The idea here is to establish some
> > > form of ownership, otherwise there wouldn't be a way to account for
> > > or limit the usage.
> >
> > But why? What's the problem if I allocate something and then hand it to
> > someone else? E.g. one popular use of cgroups is to isolate clients, so
> > maybe you'd do a cgroup + namespace for each X11 client (ok, wayland;
> > with X11 this is probably pointless).
> >
> > But with your current limitation those clients can't pass buffers to the
> > compositor anymore, making cgroups useless. Your example here only works
> > if Xorg is in the root and allocates all the buffers. That's not even true
> > for DRI3 anymore.
> >
> > So pretty serious limitation on cgroups, and I'm not really understanding
> > why we need this. I think if we want to prevent buffer sharing, what we
> > need are some selinux hooks and stuff so you can prevent an import/access
> > by someone who's not allowed to touch a buffer. But that kind of access
> > right management should be separate from resource control imo.
> So without the sharing restriction and some kind of ownership
> structure, we will have to migrate/change the owner of the buffer when
> the cgroup that created the buffer dies before the receiving cgroup(s),
> and I am not sure how to do that properly at the moment.  1) Should
> each cgroup keep track of all the buffers that belong to it and
> migrate them?  (Is that efficient?)  2) Which cgroup should be the new
> owner (and therefore have the limit applied)?  Having the creator be
> the owner is kind of natural, but let's say the buffer is shared with
> 5 other cgroups; which of those 5 cgroups should be the new owner of
> the buffer?

Different answers:

- Do we care if we leak bos like this in a cgroup, if the cgroup
  disappears before all the bos are cleaned up?

- Just charge the bo to each cgroup it's in? Quite a bit more tracking
  would be needed to get that done (rough sketch below) ...

- Also, there's the legacy way of sharing a bo, with the FLINK and
  GEM_OPEN ioctls. We need to plug these holes too.
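
For the second point, a rough sketch of what I mean (struct drmcg,
drmcg_try_chg()/drmcg_unchg(), drmcg_charge_holder() and the per-bo charge
list are all invented for illustration); the same hook would sit behind
GEM_OPEN, the FLINK name lookup and prime import, so the legacy paths get
covered too:

/*
 * Idea: every cgroup that obtains a handle to the bo gets charged, and
 * the charge is dropped again when that handle goes away, so nothing
 * depends on which cgroup happened to allocate the bo.
 */
struct drmcg_bo_charge {
	struct list_head node;
	struct drmcg *cg;
};

static int drmcg_charge_holder(struct drm_gem_object *obj, struct drmcg *cg)
{
	struct drmcg_bo_charge *c;
	int ret;

	ret = drmcg_try_chg(cg, obj->size);
	if (ret)
		return ret;

	c = kzalloc(sizeof(*c), GFP_KERNEL);
	if (!c) {
		drmcg_unchg(cg, obj->size);
		return -ENOMEM;
	}

	c->cg = cg;
	/* invented per-bo list, walked and uncharged on handle close */
	list_add(&c->node, &obj->drmcg_charges);

	return 0;
}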

Just feels like your current solution is technically well-justified, but
it completely defeats the point of cgroups/containers and buffer sharing
...

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

