[PATCH 0/7] Per client engine busyness

Christian König christian.koenig at amd.com
Mon May 17 19:16:30 UTC 2021


Am 17.05.21 um 16:30 schrieb Daniel Vetter:
> [SNIP]
>>>> Could be that i915 has some special code for that, but on my laptop
>>>> I only see the X server under the "clients" debugfs file.
>>> Yes, we have special code in i915 for this. It is part of the series we
>>> are discussing here.
>> Ah, yeah, you should mention that. Could we please separate that into common
>> code instead? Because I really see that as a bug in the current handling,
>> independent of the discussion here.
>>
>> As far as I know, all IOCTLs go through some common place in DRM anyway.
> Yeah, might be good to fix that confusion in debugfs. But since that's
> non-uapi, I guess no one ever cared (enough).

Well, we cared; the problem is that we didn't know how to fix it properly and 
pretty much duplicated it in the VM code :)

>>> For the use case of knowing which DRM file is using how much GPU time on
>>> engine X, we do not need to walk all open files with either my sysfs
>>> approach or the proc approach from Chris. (In the former case we
>>> optionally aggregate by PID at presentation time; in the latter case the
>>> aggregation is implicit.)
>> I'm unsure whether we should go with the sysfs approach, the proc approach,
>> or something completely different.
>>
>> In general it would be nice to have a way to find all the fd references for
>> an open inode.
> Yeah, but that maybe needs to be an ioctl or syscall or something on the
> inode that gives you a list of (procfd, fd_nr) pairs pointing back at all
> open files? That is, if this really is a real-world problem; given that
> top/lsof and everyone else haven't asked for it yet, maybe it's not.
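
(For context: today an lsof-style tool has to answer that question from
userspace by scanning every /proc/<pid>/fd directory and stat()ing each
entry. A rough, untested sketch of that walk, matching by device and inode
number:)

/* Untested illustration only: find which processes have a given inode
 * open by scanning /proc/<pid>/fd, the way lsof-style tools do today. */
#include <dirent.h>
#include <stdio.h>
#include <sys/stat.h>

static void find_inode_users(dev_t dev, ino_t ino)
{
        DIR *proc = opendir("/proc");
        struct dirent *p, *f;

        while (proc && (p = readdir(proc))) {
                char path[512];
                DIR *fds;

                if (p->d_name[0] < '0' || p->d_name[0] > '9')
                        continue;       /* not a <pid> directory */

                snprintf(path, sizeof(path), "/proc/%s/fd", p->d_name);
                fds = opendir(path);
                if (!fds)
                        continue;       /* process gone or no permission */

                while ((f = readdir(fds))) {
                        struct stat st;

                        if (f->d_name[0] == '.')
                                continue;
                        snprintf(path, sizeof(path), "/proc/%s/fd/%s",
                                 p->d_name, f->d_name);
                        /* stat() follows the fd symlink to the real file */
                        if (!stat(path, &st) &&
                            st.st_dev == dev && st.st_ino == ino)
                                printf("pid %s fd %s\n",
                                       p->d_name, f->d_name);
                }
                closedir(fds);
        }
        if (proc)
                closedir(proc);
}

That per-process, per-fd walk from userspace is exactly the cost we are
talking about here.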

Well, has anybody actually measured how much overhead it would add to 
iterate over the relevant data structures in the kernel instead of in 
userspace?

I mean, we don't really need the extra tracking if a couple of hundred fd 
tables can be processed in just a few ms thanks to lockless RCU protection.
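
Very rough idea of what I mean, as an untested sketch (is_drm_file() is just
a placeholder for whatever check the DRM core would export):

#include <linux/fdtable.h>
#include <linux/printk.h>
#include <linux/rcupdate.h>
#include <linux/sched/signal.h>
#include <linux/sched/task.h>

/* Untested sketch: walk every task's fd table and count DRM files,
 * relying on the RCU protection of the fdtable for the lookups.
 * is_drm_file() is a made-up placeholder. */
static void count_drm_fds(void)
{
        struct task_struct *task;

        rcu_read_lock();
        for_each_process(task) {
                struct files_struct *files;
                struct fdtable *fdt;
                unsigned int fd, count = 0;

                task_lock(task);        /* stabilizes task->files */
                files = task->files;
                if (files) {
                        fdt = files_fdtable(files);
                        for (fd = 0; fd < fdt->max_fds; fd++) {
                                /* lockless RCU lookup (formerly fcheck_files()) */
                                struct file *file = files_lookup_fd_rcu(files, fd);

                                if (file && is_drm_file(file))
                                        count++;
                        }
                }
                task_unlock(task);

                if (count)
                        pr_info("%s[%d]: %u DRM fds\n",
                                task->comm, task->pid, count);
        }
        rcu_read_unlock();
}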

> Also, as I replied in some other thread, I really like the fdinfo stuff, and I
> think trying to somewhat standardize this across drivers would be neat.
> Especially since i915 is going to adopt drm/scheduler for front-end
> scheduling too, at least some of this should be fairly easy to share.

Yeah, that sounds like a good idea to me as well.
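
Just to illustrate what I have in mind (completely made-up key names and
helpers, not taken from the series): a common fdinfo hook in the DRM core
could boil down to something like the sketch below, which each driver then
only needs to plug into its file_operations.

#include <drm/drm_device.h>
#include <drm/drm_drv.h>
#include <drm/drm_file.h>
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/seq_file.h>

/* Untested sketch of a shared show_fdinfo implementation.  The
 * "drm-engine-*" keys and the NUM_ENGINES/busy_time_ns() helpers are
 * placeholders for whatever we would actually standardize on. */
static void drm_show_fdinfo(struct seq_file *m, struct file *f)
{
        struct drm_file *file_priv = f->private_data;
        unsigned int i;

        seq_printf(m, "drm-driver:\t%s\n",
                   file_priv->minor->dev->driver->name);

        /* One line per engine/scheduler instance, busy time in ns. */
        for (i = 0; i < NUM_ENGINES; i++)
                seq_printf(m, "drm-engine-%u:\t%llu ns\n", i,
                           (unsigned long long)busy_time_ns(file_priv, i));
}

/* ...which a driver would hook up via its fops: */
static const struct file_operations example_fops = {
        .owner          = THIS_MODULE,
        .show_fdinfo    = drm_show_fdinfo,
        /* open/release/unlocked_ioctl/mmap as usual */
};

A gputop-style tool could then parse /proc/<pid>/fdinfo/<fd> without any
driver-specific knowledge.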

Regards,
Christian.

>
> Cheers, Daniel
