[Intel-gfx] [PATCH v6 0/5] drm/i915: Expose more GPU properties through sysfs

Daniel Vetter daniel at ffwll.ch
Tue Dec 12 15:18:12 UTC 2017


On Tue, Dec 12, 2017 at 11:19:47AM +0000, Tvrtko Ursulin wrote:
> 
> On 11/12/2017 21:05, Daniel Vetter wrote:
> > On Mon, Dec 11, 2017 at 02:38:53PM +0000, Tvrtko Ursulin wrote:
> > > On 11/12/2017 10:50, Joonas Lahtinen wrote:
> > > > + Daniel, Chris
> > > > 
> > > > On Thu, 2017-12-07 at 09:21 +0000, Tvrtko Ursulin wrote:
> > > > > On 04/12/2017 15:02, Lionel Landwerlin wrote:
> > > > > > Hi,
> > > > > > 
> > > > > > After discussion with Chris, Joonas & Tvrtko, this series adds an
> > > > > > additional commit to link the render node back to the card through a
> > > > > > symlink, making it obvious for an application using a render node
> > > > > > where to find the information it needs.
> > > > > 
> > > > > An important thing to mention as well is that it is trivial to get
> > > > > from the master drm fd to the sysfs root, via fstat and opendir of
> > > > > /sys/dev/char/<major>:<minor>. With the addition of the card symlink
> > > > > to render nodes it is trivial for a render node fd as well.
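
For illustration, a minimal sketch of that lookup (error handling trimmed,
helper name made up):

  #include <stdio.h>
  #include <sys/stat.h>
  #include <sys/sysmacros.h>

  /* Map an open drm fd to its sysfs directory via the chardev numbers. */
  static int drm_fd_to_sysfs_path(int fd, char *buf, size_t len)
  {
          struct stat st;

          if (fstat(fd, &st) < 0 || !S_ISCHR(st.st_mode))
                  return -1;

          snprintf(buf, len, "/sys/dev/char/%u:%u",
                   major(st.st_rdev), minor(st.st_rdev));
          return 0;
  }
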
> > > > > 
> > > > > I am happy with this approach - it is extensible, flexible and avoids
> > > > > issues with ioctl versioning or whatnot. With one value per file it is
> > > > > trivial for userspace to access.
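
And reading one such attribute is correspondingly short, something like this
(sketch only; path and helper name are placeholders):

  #include <stdio.h>

  /* Read a single decimal value from a one-value-per-file sysfs attribute. */
  static long read_sysfs_value(const char *path)
  {
          long value = -1;
          FILE *f = fopen(path, "r");

          if (f) {
                  if (fscanf(f, "%ld", &value) != 1)
                          value = -1;
                  fclose(f);
          }
          return value;
  }
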
> > > > > 
> > > > > As far as I'm concerned, given how gputop would use all of this and
> > > > > so be the userspace consumer, if everyone else is happy, I think we
> > > > > could do a detailed review and perhaps also think about including
> > > > > gputop in some distribution to make the case 100% straightforward.
> > > > 
> > > > For the GPU topology I agree this is the right choice, it's going to be
> > > > about topology after all, and directory tree is the perfect candidate.
> > > > And if a new platform appears, then it's a new platform and may change
> > > > the topology; after all, the hardware topology itself has changed.
> > > > 
> > > > For the engine enumeration, I'm not equally sold on sysfs exposing it.
> > > > A "linear list of engine instances with flags" is how userspace is
> > > > going to be looking at them. And it's also information about what to
> > > > pass to an IOCTL as arguments after the decision has been made, and
> > > > then you already have the FD you know you'll be dealing with at hand.
> > > > So another IOCTL for that seems more convenient.
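
Purely to illustrate that shape, a hypothetical payload for such a query
ioctl might look like this (all names and fields are invented here, not an
actual uAPI proposal):

  #include <stdint.h>

  /* Hypothetical engine enumeration payload: a linear list of engine
   * instances with flags, as described above. */
  struct hypo_engine_info {
          uint16_t engine_class;    /* e.g. render, copy, video decode */
          uint16_t engine_instance; /* instance number within the class */
          uint32_t flags;           /* per-engine capability bits */
  };

  struct hypo_query_engines {
          uint32_t num_engines;     /* filled in by the kernel */
          uint32_t pad;
          struct hypo_engine_info engines[]; /* num_engines entries */
  };
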
> > > 
> > > Apart from being more flexible and easier to extend, sysfs might be a better
> > > fit for applications which do not otherwise need a drm fd. Say a top-like
> > > tool which shows engine utilization, or those patches I RFC-ed recently
> > > which do the same but per DRM client.
> > > 
> > > Okay, these stats are now also available via the PMU, so the argument is
> > > not the strongest, I admit, but I still find it quite neat. It also might
> > > allow us to define our own policy with regards to the privilege needed to
> > > access these stats, and not be governed by the perf API rules.
> > 
> > How exactly is sysfs easier to extend than ioctl? There's lots of
> 
> Easier as in no need to version, add has_this/has_that markers, try to guess
> today how big a field for something might be needed in the future and
> similar.

You can e.g. return a per-generation structure easily from an ioctl too, if
that's what you want. And sysfs also needs some kind of versioning, it's
just much more implicit: Instead of has_foo you check for the existence of
the file. But fundamentally it's all the same, and all the same fun with
backwards compatibility, forever, applies.
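
Concretely, the implicit sysfs flavour of has_foo is just an existence
probe, something like this (sketch; the attribute name "foo" and the sysfs
root argument are placeholders):

  #include <stdbool.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Implicit sysfs "versioning": detect a feature by file existence. */
  static bool sysfs_has_foo(const char *sysfs_root)
  {
          char path[512];

          snprintf(path, sizeof(path), "%s/foo", sysfs_root);
          return access(path, R_OK) == 0;
  }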

> > serializing and deserializing going on, ime that's more boilerplate. Imo
> > the only reason for sysfs is when you _must_ access it without having an
> > fd to the gpu. The inverse is generally not true (i.e. using sysfs when
> > you have the fd already), and as soon as you add a writeable field here
> > you're screwed (because sysfs can't be passed to anyone else but root,
> > compared to drm fd - viz the backlight fiasco).
> 
> I would perhaps expand the "must access without having a drm fd" to "more
> convenient to access without a drm fd". My use case here was the per-client
> engine usage stats. An equivalent of /proc/<pid>/stat, or even /proc/stat if
> you want. So I was interested in creating a foothold in sysfs for that
> purpose.
> 
> No writable fields were envisaged in any of these two or three use cases.

Yeah that's definitely a requirement for sysfs.

> > But even without writeable fields: Think of highly contained
> > containers/clients which only get the drm fd to render. If sysfs is gone,
> > will they fall on their faces? Atm "drm fd is all you need" is very deeply
> > ingrained into our various OS stacks.
> 
> Okay, this is something which was mentioned but I think the answer was
> unclear. If current stacks do work without sysfs then this is a good
> argument to keep that ability.
> 
> As I said I am OK to drop the engine info bits from this series. Question
> for Lionel, gpu-top and Mesa then is whether sysfs works for them, for the
> remaining topology information. The attractiveness of sysfs there was that
> it looked easier to keep future proof for more or less hypothetical future
> topologies.

Still not sure whether sysfs is actually easier, or just deceptively looks
easier than ioctls. But in the end this still looks reasonable overall, so
it's up to you all.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

