[Freedreno] [PATCH v3 4/7] drm/i915: Switch to fdinfo helper

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Wed Apr 12 15:12:41 UTC 2023


On 12/04/2023 14:51, Daniel Vetter wrote:
> On Wed, Apr 12, 2023 at 01:32:43PM +0100, Tvrtko Ursulin wrote:
>>
>> On 11/04/2023 23:56, Rob Clark wrote:
>>> From: Rob Clark <robdclark at chromium.org>
>>>
>>> Signed-off-by: Rob Clark <robdclark at chromium.org>
>>> ---
>>>    drivers/gpu/drm/i915/i915_driver.c     |  3 ++-
>>>    drivers/gpu/drm/i915/i915_drm_client.c | 18 +++++-------------
>>>    drivers/gpu/drm/i915/i915_drm_client.h |  2 +-
>>>    3 files changed, 8 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_driver.c b/drivers/gpu/drm/i915/i915_driver.c
>>> index db7a86def7e2..37eacaa3064b 100644
>>> --- a/drivers/gpu/drm/i915/i915_driver.c
>>> +++ b/drivers/gpu/drm/i915/i915_driver.c
>>> @@ -1696,7 +1696,7 @@ static const struct file_operations i915_driver_fops = {
>>>    	.compat_ioctl = i915_ioc32_compat_ioctl,
>>>    	.llseek = noop_llseek,
>>>    #ifdef CONFIG_PROC_FS
>>> -	.show_fdinfo = i915_drm_client_fdinfo,
>>> +	.show_fdinfo = drm_fop_show_fdinfo,
>>>    #endif
>>>    };
>>> @@ -1796,6 +1796,7 @@ static const struct drm_driver i915_drm_driver = {
>>>    	.open = i915_driver_open,
>>>    	.lastclose = i915_driver_lastclose,
>>>    	.postclose = i915_driver_postclose,
>>> +	.show_fdinfo = i915_drm_client_fdinfo,
>>>    	.prime_handle_to_fd = drm_gem_prime_handle_to_fd,
>>>    	.prime_fd_to_handle = drm_gem_prime_fd_to_handle,
>>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c b/drivers/gpu/drm/i915/i915_drm_client.c
>>> index b09d1d386574..4a77e5e47f79 100644
>>> --- a/drivers/gpu/drm/i915/i915_drm_client.c
>>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>>> @@ -101,7 +101,7 @@ static u64 busy_add(struct i915_gem_context *ctx, unsigned int class)
>>>    }
>>>    static void
>>> -show_client_class(struct seq_file *m,
>>> +show_client_class(struct drm_printer *p,
>>>    		  struct i915_drm_client *client,
>>>    		  unsigned int class)
>>>    {
>>> @@ -117,22 +117,20 @@ show_client_class(struct seq_file *m,
>>>    	rcu_read_unlock();
>>>    	if (capacity)
>>> -		seq_printf(m, "drm-engine-%s:\t%llu ns\n",
>>> +		drm_printf(p, "drm-engine-%s:\t%llu ns\n",
>>>    			   uabi_class_names[class], total);
>>>    	if (capacity > 1)
>>> -		seq_printf(m, "drm-engine-capacity-%s:\t%u\n",
>>> +		drm_printf(p, "drm-engine-capacity-%s:\t%u\n",
>>>    			   uabi_class_names[class],
>>>    			   capacity);
>>>    }
>>> -void i915_drm_client_fdinfo(struct seq_file *m, struct file *f)
>>> +void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
>>>    {
>>> -	struct drm_file *file = f->private_data;
>>>    	struct drm_i915_file_private *file_priv = file->driver_priv;
>>>    	struct drm_i915_private *i915 = file_priv->dev_priv;
>>>    	struct i915_drm_client *client = file_priv->client;
>>> -	struct pci_dev *pdev = to_pci_dev(i915->drm.dev);
>>>    	unsigned int i;
>>>    	/*
>>> @@ -141,12 +139,6 @@ void i915_drm_client_fdinfo(struct seq_file *m, struct file *f)
>>>    	 * ******************************************************************
>>>    	 */
>>> -	seq_printf(m, "drm-driver:\t%s\n", i915->drm.driver->name);
>>> -	seq_printf(m, "drm-pdev:\t%04x:%02x:%02x.%d\n",
>>> -		   pci_domain_nr(pdev->bus), pdev->bus->number,
>>> -		   PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
>>> -	seq_printf(m, "drm-client-id:\t%u\n", client->id);
>>
>> As mentioned in my reply to the cover letter, I think the i915
>> implementation is the right one. At least the semantics of it.
>>
>> Granted it is a super set of the minimum required as documented by
>> drm-usage-stats.rst - not only 1:1 to current instances of struct file, but
>> also avoids immediate id recycling.
>>
>> Former could perhaps be achieved with a simple pointer hash, but latter
>> helps userspace detect when a client has exited and id re-allocated to a new
>> client within a single scanning period.
>>
>> Without this I don't think userspace can implement a fail safe method of
>> detecting which clients are new ones and so wouldn't be able to track
>> history correctly.
>>
>> I think we should rather extend the documented contract to include the
>> cyclical property than settle for a weaker common implementation.
> 
> atomic64_t never wraps, so you don't have any recycling issues?

Okay yes, with 64 bits there aren't any practical recycling issues.
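
For reference, I assume the common helper then boils down to something 
like the below (hypothetical sketch only, not the actual drm core 
code):

	static atomic64_t next_client_id;

	static u64 alloc_client_id(void)
	{
		/*
		 * Monotonically increasing 64-bit ids - even at one
		 * allocation per nanosecond it would take ~584 years
		 * to wrap, so an id cannot be re-used within any
		 * realistic scanning period.
		 */
		return atomic64_inc_return(&next_client_id);
	}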

> The other piece and imo much more important is that I really don't want
> the i915_drm_client design to spread, it conceptually makes no sense.
> drm_file is the uapi object, once that's gone userspace will never be able
> to look at anything, having a separate free-standing object that's
> essentially always dead is backwards.
> 
> I went a bit more in-depth in a different thread on scheduler fd_info
> stats, but essentially fd_info needs to pull stats, you should never push
> stats towards the drm_file (or i915_drm_client). That avoids all the
> refcounting issues and rcu needs and everything else like that.
> 
> Maybe you want to jump into that thread:
> https://lore.kernel.org/dri-devel/CAKMK7uE=m3sSTQrLCeDg0vG8viODOecUsYDK1oC++f5pQi0e8Q@mail.gmail.com/
> 
> So retiring i915_drm_client infrastructure is the right direction I think.

Hmmm.. it is a _mostly_ pull model that we have in i915, i.e. data is 
pulled on fdinfo queries.

_Mostly_ because it cannot be fully pull based when you look at some 
internal flows. We have to save some data at runtime, at points which 
are not driven by the fdinfo queries.

For instance, context close needs to record the GPU utilisation 
against the client so that it is not lost. Also, in the execlists 
backend we must transfer the hardware tracked runtime into the 
software state when hw contexts are switched out.
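
To illustrate, roughly this kind of bookkeeping has to happen outside 
of the fdinfo query path (sketch only, assuming a per-class 
past_runtime accumulator on the client; names are illustrative, not 
the exact i915 code):

	/* On GEM context close, fold its GPU time into the client. */
	static void client_record_context(struct i915_drm_client *client,
					  struct i915_gem_context *ctx)
	{
		unsigned int class;

		for (class = 0; class < ARRAY_SIZE(client->past_runtime); class++)
			atomic64_add(busy_add(ctx, class),
				     &client->past_runtime[class]);
	}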

The fact that i915_drm_client is detached from file_priv is a 
consequence of the fact that i915 GEM contexts can outlive drm_file, 
and that when such contexts are closed, we need to record their 
runtimes.

So I think there are three options: how it is now, a fully krefed 
drm_file, or prohibiting persistent contexts. The last one I don't 
think we can do due to ABI, and the second felt heavy handed, so I 
chose the lightweight i915_drm_client option.

Maybe there is a fourth option of somehow detecting during context 
destruction that the drm_file is gone and skipping the runtime 
recording, but avoiding the races and all that did not make me want to 
entertain it much. Is this actually what you are proposing?

Regards,

Tvrtko

