<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<br>
<div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);">
Parsing fdinfo for over 550 processes takes between 40 and 100 ms single-threaded on a 2 GHz Skylake (IBRS) inside a VM, using simple string comparisons and dirent parsing. And that is pretty much the worst-case scenario, even with some of the more optimized implementations.</div>
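As a point of reference, a minimal sketch of the kind of pass being timed here: walk every `/proc/&lt;pid&gt;/fdinfo/` directory, open each file, and scan it with plain string comparisons. The `drm-driver:` key is used as a hypothetical stand-in for whatever field a real tool would match on; the numbers will obviously differ per machine.

```python
# Hypothetical sketch of the benchmarked walk: one single-threaded pass
# over /proc, opening every fdinfo file and scanning it line by line.
import os
import time

def walk_fdinfo():
    """Return (files_scanned, elapsed_ms) for one pass over /proc."""
    start = time.monotonic()
    scanned = 0
    if not os.path.isdir("/proc"):   # non-Linux fallback for illustration
        return 0, 0.0
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        fdinfo_dir = f"/proc/{pid}/fdinfo"
        try:
            fds = os.listdir(fdinfo_dir)
        except OSError:              # process exited or permission denied
            continue
        for fd in fds:
            try:
                with open(f"{fdinfo_dir}/{fd}") as f:
                    for line in f:
                        # simple string comparison, as in the measurement;
                        # "drm-driver:" is an assumed key for illustration
                        if line.startswith("drm-driver:"):
                            break
            except OSError:
                continue
            scanned += 1
    return scanned, (time.monotonic() - start) * 1000.0

files, ms = walk_fdinfo()
```

The per-file open/read/close is what dominates; the walk itself scales with the total fd count on the system, not with the number of GPU clients.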
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);">
<br>
</div>
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);">
David</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Daniel Vetter <daniel@ffwll.ch><br>
<b>Sent:</b> Wednesday, May 19, 2021 11:23 AM<br>
<b>To:</b> Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com><br>
<b>Cc:</b> Daniel Stone <daniel@fooishbar.org>; jhubbard@nvidia.com <jhubbard@nvidia.com>; nouveau@lists.freedesktop.org <nouveau@lists.freedesktop.org>; Intel Graphics Development <Intel-gfx@lists.freedesktop.org>; Maling list - DRI developers <dri-devel@lists.freedesktop.org>;
Simon Ser <contact@emersion.fr>; Koenig, Christian <Christian.Koenig@amd.com>; aritger@nvidia.com <aritger@nvidia.com>; Nieto, David M <David.Nieto@amd.com><br>
<b>Subject:</b> Re: [Intel-gfx] [PATCH 0/7] Per client engine busyness</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">On Wed, May 19, 2021 at 6:16 PM Tvrtko Ursulin<br>
<tvrtko.ursulin@linux.intel.com> wrote:<br>
><br>
><br>
> On 18/05/2021 10:40, Tvrtko Ursulin wrote:<br>
> ><br>
> > On 18/05/2021 10:16, Daniel Stone wrote:<br>
> >> Hi,<br>
> >><br>
> >> On Tue, 18 May 2021 at 10:09, Tvrtko Ursulin<br>
> >> <tvrtko.ursulin@linux.intel.com> wrote:<br>
> >>> I was just wondering if stat(2) and a chrdev major check would be a<br>
> >>> solid criteria to more efficiently (compared to parsing the text<br>
> >>> content) detect drm files while walking procfs.<br>
> >><br>
> >> Maybe I'm missing something, but is the per-PID walk actually a<br>
> >> measurable performance issue rather than just a bit unpleasant?<br>
> ><br>
> > Per pid and per each open fd.<br>
> ><br>
> > As said in the other thread what bothers me a bit in this scheme is that<br>
> > the cost of obtaining GPU usage scales based on non-GPU criteria.<br>
> ><br>
> > For the use case of a top-like tool which shows all processes this is a<br>
> > smaller additional cost, but then for a gpu-top like tool it is somewhat<br>
> > higher.<br>
><br>
> To further expand, not only would the cost scale as pids multiplied by open<br>
> fds, but to detect which of the fds are DRM I see these three options:<br>
><br>
> 1) Open and parse fdinfo.<br>
> 2) Name based matching ie /dev/dri/.. something.<br>
> 3) Stat the symlink target and check for DRM major.<br>
<br>
stat with symlink following should be plenty fast.<br>
<br>
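A minimal sketch of option 3, assuming the standard Linux DRM character-device major (226): stat the `/proc/&lt;pid&gt;/fd/&lt;fd&gt;` symlink (which follows it to the target) and compare the device major.

```python
# Sketch of option 3: stat(2) the /proc/<pid>/fd/<fd> symlink target and
# check for a character device with the DRM major.
import os
import stat

DRM_MAJOR = 226  # chrdev major registered by drivers/gpu/drm on Linux

def is_drm_fd(pid, fd):
    """True if the given fd of pid refers to a DRM character device."""
    try:
        st = os.stat(f"/proc/{pid}/fd/{fd}")  # follows the symlink
    except OSError:                            # fd closed, process gone, EPERM
        return False
    return stat.S_ISCHR(st.st_mode) and os.major(st.st_rdev) == DRM_MAJOR
```

Note that this does not answer the dup(2) question: duplicated fds stat to the same device node, so both would match and a naive walk would count the client twice; telling them apart would need something like comparing inode numbers or kcmp(2).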
> All sound quite sub-optimal to me.<br>
><br>
> Name based matching is probably the least evil on system resource usage<br>
> (Keeping the dentry cache too hot? Too many syscalls?), even though<br>
> fundamentally I don't think it is the right approach.<br>
><br>
> What happens with dup(2) is another question.<br>
<br>
We need benchmark numbers showing that on anything remotely realistic<br>
it's an actual problem. Until we've demonstrated it's a real problem<br>
we don't need to solve it.<br>
<br>
E.g. top with any sorting enabled also parses way more than it<br>
displays on every update. It seems to be doing Just Fine (tm).<br>
<br>
> Does anyone have any feedback on the /proc/<pid>/gpu idea at all?<br>
<br>
When we know we have a problem to solve we can take a look at solutions.<br>
-Daniel<br>
-- <br>
Daniel Vetter<br>
Software Engineer, Intel Corporation<br>
<a href="http://blog.ffwll.ch/">http://blog.ffwll.ch/</a><br>
</div>
</span></font></div>
</div>
</body>
</html>