[PATCH RFC v4 07/16] drm, cgroup: Add total GEM buffer allocation limit
Michal Koutný
mkoutny at suse.com
Tue Oct 1 14:30:01 UTC 2019
Hello.
On Thu, Aug 29, 2019 at 02:05:24AM -0400, Kenny Ho <Kenny.Ho at amd.com> wrote:
> drm.buffer.default
> A read-only flat-keyed file which exists on the root cgroup.
> Each entry is keyed by the drm device's major:minor.
>
> Default limits on the total GEM buffer allocation in bytes.
What is the purpose of this attribute (and its counterparts for the
other resources)? I can't see it ever being set to anything but
S64_MAX in drmcg_device_early_init.
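For reference, the only initialization I could find boils down to the
following (a sketch of my reading of the patch, not verbatim code; the
struct layout is my guess):

    void drmcg_device_early_init(struct drm_device *dev)
    {
            struct drmcg_props *props = &dev->drmcg_props;

            /* the per-device default is only ever "no limit" */
            props->bo_limits_total_allocated_default = S64_MAX;
    }

So the attribute seems to carry no information beyond S64_MAX.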
> +static ssize_t drmcg_limit_write(struct kernfs_open_file *of, char *buf,
> [...]
> +	switch (type) {
> +	case DRMCG_TYPE_BO_TOTAL:
> +		p_max = parent == NULL ? S64_MAX :
> +			parent->dev_resources[minor]->
> +			bo_limits_total_allocated;
> +
> +		rc = drmcg_process_limit_s64_val(sattr, true,
> +				props->bo_limits_total_allocated_default,
> +				p_max,
> +				&val);
IIUC, this allows initializing the particular limit value from either
the parent's value or the per-device default. This is, alas, rather an
antipattern. The most stringent limit on the path from a cgroup to the
root should be applied at charging time; the child should not inherit
a verbatim copy of the parent's value (the copy races with concurrent
writes to the parent and won't be updated when the parent's limit
later changes).
You already do the appropriate hierarchical check in
drmcg_try_chg_bo_alloc, so the parent propagation could simply be
dropped, if I'm not mistaken.
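To illustrate: with a charge-time walk like the one below, any value
copied from the parent at write time is redundant (a sketch only;
drmcg_parent() and the stats field name are my placeholders, not
necessarily the patch's identifiers):

    static bool drmcg_bo_try_charge(struct drmcg *drmcg, int minor, s64 size)
    {
            struct drmcg *pos;

            /* Walk from the charged cgroup up to the root; the most
             * stringent limit on the path wins, so nothing has to be
             * propagated into the children when a parent limit changes. */
            for (pos = drmcg; pos != NULL; pos = drmcg_parent(pos)) {
                    struct drmcg_device_resource *ddr =
                            pos->dev_resources[minor];

                    if (ddr->bo_stats_total_allocated + size >
                                    ddr->bo_limits_total_allocated)
                            return false;
            }
            return true;
    }

A child can then start at the per-device default (or plain S64_MAX)
and stay correct regardless of later changes to the parent's limit.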
Also, I can't find how the read of
parent->dev_resources[minor]->bo_limits_total_allocated and a
concurrent update to it are synchronized (e.g. someone writing
buffer.total.max for parent and child in parallel). (It may just be my
oversight.)
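Concretely, the interleaving I'm worried about (a pseudo-trace, the
paths are illustrative):

    /* task A: write to parent/buffer.total.max
     * task B: write to child/buffer.total.max
     *
     * B reads the parent's limit in order to clamp its own value:
     */
    p_max = parent->dev_resources[minor]->bo_limits_total_allocated;
    /* ...A's write lands here, lowering the parent's limit... */
    /* B clamps against the stale p_max and can store a child limit
     * above the parent's new limit. */

Unless an existing lock already covers the whole read-clamp-store
sequence (I may have missed it), a per-device mutex around the body of
drmcg_limit_write would close this.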
I'm posting this on the buffer knobs patch, but similar concerns apply
to the lgpu resource controls as well.
HTH,
Michal