[Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
Chris Wilson
chris at chris-wilson.co.uk
Tue Mar 10 00:13:52 UTC 2020
Quoting Tvrtko Ursulin (2020-03-09 23:26:34)
>
> On 09/03/2020 21:34, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
> >> +struct i915_drm_client *
> >> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
> >> +{
> >> +	struct i915_drm_client *client;
> >> +	int ret;
> >> +
> >> +	client = kzalloc(sizeof(*client), GFP_KERNEL);
> >> +	if (!client)
> >> +		return ERR_PTR(-ENOMEM);
> >> +
> >> +	kref_init(&client->kref);
> >> +	client->clients = clients;
> >> +
> >> +	ret = mutex_lock_interruptible(&clients->lock);
> >> +	if (ret)
> >> +		goto err_id;
> >> +	ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
> >> +			      xa_limit_32b, &clients->next_id, GFP_KERNEL);
> >
> > So what's next_id used for that explains having the over-arching mutex?
>
> It's to give out client ids "cyclically" - although it turns out I had
> misunderstood what xa_alloc_cyclic is supposed to do - I thought that
> after giving out id 1 it would give out id 2 next, even if 1 had been
> returned to the pool in the meantime. But it doesn't do that on its own;
> I need to track the start point for the next search with the external
> "next" counter.
Ok. A requirement of the API for the external counter.
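
As an aside, a minimal sketch of how the caller-supplied cursor drives
xa_alloc_cyclic() - the names example_xa/example_next/example_alloc are
made up for illustration, not taken from the patch:

  #include <linux/xarray.h>

  static DEFINE_XARRAY_ALLOC(example_xa);
  static u32 example_next;	/* external cursor advanced by xa_alloc_cyclic() */

  static int example_alloc(void *entry, u32 *id)
  {
  	/* GFP_KERNEL is just an assumption for this sketch. */
  	return xa_alloc_cyclic(&example_xa, id, entry, xa_limit_32b,
  			       &example_next, GFP_KERNEL);
  }

If an id is allocated and then xa_erase()d, the next example_alloc() does
not hand it out again immediately: the search restarts from *next, so old
ids only reappear once the limit wraps.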
> I want this to make intel_gpu_top's life easier, so it doesn't have to
> deal with id recycling for all practical purposes.
Fair enough. I only worry about the radix nodes and sparse ids :)
> And a peek into the xa implementation told me the internal lock is not
> protecting "next".
See xa_alloc_cyclic(); it seems to cover __xa_alloc_cyclic() (where *next
is manipulated) under the xa_lock.
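
For reference, the wrapper (paraphrased here from include/linux/xarray.h,
so treat the exact shape as approximate) takes the xa_lock around the point
where *next is read and advanced:

  static inline int xa_alloc_cyclic(struct xarray *xa, u32 *id, void *entry,
  		struct xa_limit limit, u32 *next, gfp_t gfp)
  {
  	int err;

  	/* The array's own spinlock covers the *next update inside. */
  	xa_lock(xa);
  	err = __xa_alloc_cyclic(xa, id, entry, limit, next, gfp);
  	xa_unlock(xa);

  	return err;
  }

i.e. it looks like the xa_lock, not an outer mutex, is what serialises the
external counter.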
-Chris