[Intel-gfx] [RFC 01/12] drm/i915: Expose list of clients in sysfs
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Mon Mar 9 23:26:34 UTC 2020
On 09/03/2020 21:34, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:18)
>> +struct i915_drm_client *
>> +i915_drm_client_add(struct i915_drm_clients *clients, struct task_struct *task)
>> +{
>> +	struct i915_drm_client *client;
>> +	int ret;
>> +
>> +	client = kzalloc(sizeof(*client), GFP_KERNEL);
>> +	if (!client)
>> +		return ERR_PTR(-ENOMEM);
>> +
>> +	kref_init(&client->kref);
>> +	client->clients = clients;
>> +
>> +	ret = mutex_lock_interruptible(&clients->lock);
>> +	if (ret)
>> +		goto err_id;
>> +	ret = xa_alloc_cyclic(&clients->xarray, &client->id, client,
>> +			      xa_limit_32b, &clients->next_id, GFP_KERNEL);
>
> So what's next_id used for that explains having the over-arching mutex?
It's to give out client ids "cyclically" - I had apparently
misunderstood what xa_alloc_cyclic does on its own. I thought that
after giving out id 1 it would give out 2 next, even if 1 was
returned to the pool in the meantime. But it doesn't track that
internally, so I need to keep the start point for the next search in
the caller-provided "next" cursor (clients->next_id here).
I want this to make intel_gpu_top's life easier, so that for all
practical purposes it doesn't have to deal with id recycling.
And a peek into the xa implementation told me the internal lock is
not protecting "next".
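For illustration, a minimal sketch of the allocation side, assuming
that reasoning holds; the example_* names are made up here and are
not from the patch:

#include <linux/mutex.h>
#include <linux/xarray.h>

static DEFINE_XARRAY_ALLOC(example_xa);	/* XA_FLAGS_ALLOC set */
static DEFINE_MUTEX(example_lock);
static u32 example_next;	/* caller-held cyclic cursor */

static int example_get_id(void *entry, u32 *id)
{
	int ret;

	ret = mutex_lock_interruptible(&example_lock);
	if (ret)
		return ret;

	/*
	 * The search for a free id starts at example_next, so a just
	 * released low id is not immediately handed out again;
	 * returns 1 (still success) if the id space wrapped around.
	 */
	ret = xa_alloc_cyclic(&example_xa, id, entry, xa_limit_32b,
			      &example_next, GFP_KERNEL);
	mutex_unlock(&example_lock);

	return ret < 0 ? ret : 0;
}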
I could stick with one lock and not use the internal one if I used it
on the release path as well.
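Continuing the same sketch, taking that one outer lock on the release
path instead of relying on the xarray internal one might look roughly
like this:

static void example_put_id(u32 id)
{
	/* One outer lock covers both the cursor and the erase. */
	mutex_lock(&example_lock);
	xa_erase(&example_xa, id);
	mutex_unlock(&example_lock);
}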
Regards,
Tvrtko