[Intel-gfx] [PATCH v3] drm/i915: Ensure associated VMAs are inactive when contexts are destroyed

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Tue Nov 17 08:54:50 PST 2015


On 17/11/15 16:39, Daniel Vetter wrote:
> On Tue, Nov 17, 2015 at 04:27:12PM +0000, Tvrtko Ursulin wrote:
>> From: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>>
>> In the following commit:
>>
>>      commit e9f24d5fb7cf3628b195b18ff3ac4e37937ceeae
>>      Author: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>>      Date:   Mon Oct 5 13:26:36 2015 +0100
>>
>>          drm/i915: Clean up associated VMAs on context destruction
>>
>> I added a WARN_ON assertion that the VM's active list must be
>> empty at the time the owning context is freed, but that turned
>> out to be a wrong assumption.
>>
>> Due to the ordering of operations in i915_gem_object_retire__read,
>> where the context is unreferenced before the VMAs are moved to the
>> inactive list, the described situation can in fact happen.
>>
>> It feels wrong to do things in that order, so this fix makes sure
>> a reference to the context is held until the move to the inactive
>> list has completed.
>>
>> v2: Rather than hold a temporary context reference, move the
>>      request unreference to be the last operation. (Daniel Vetter)
>>
>> v3: Fix use after free. (Chris Wilson)
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=92638
>> Cc: Michel Thierry <michel.thierry at intel.com>
>> Cc: Chris Wilson <chris at chris-wilson.co.uk>
>> Cc: Daniel Vetter <daniel.vetter at ffwll.ch>
>> ---
>>   drivers/gpu/drm/i915/i915_gem.c | 33 ++++++++++++++++++---------------
>>   1 file changed, 18 insertions(+), 15 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>> index 98c83286ab68..094ac17a712d 100644
>> --- a/drivers/gpu/drm/i915/i915_gem.c
>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>> @@ -2404,29 +2404,32 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
>>   	RQ_BUG_ON(!(obj->active & (1 << ring)));
>>
>>   	list_del_init(&obj->ring_list[ring]);
>> -	i915_gem_request_assign(&obj->last_read_req[ring], NULL);
>>
>>   	if (obj->last_write_req && obj->last_write_req->ring->id == ring)
>>   		i915_gem_object_retire__write(obj);
>>
>>   	obj->active &= ~(1 << ring);
>> -	if (obj->active)
>> -		return;
>
> 	if (obj->active) {
> 		i915_gem_request_assign(&obj->last_read_req[ring], NULL);
> 		return;
> 	}
>
> Would result in less churn in the code and drop the unnecessary
> indent level. Also a comment is missing as to why we need to do
> things in a specific order.

Actually I think I changed my mind and that v1 is the way to go.

Just re-ordering the code here still makes it possible, I think, for 
the context destructor to run with VMAs on the active list.

If we hold a reference to the context then it is 100% clear that 
cannot happen.
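
Roughly what I have in mind, as a sketch only (not the actual v1 
diff; helper and field names quoted from memory, e.g. req->ctx and 
i915_gem_context_reference/unreference):

	struct intel_context *ctx = obj->last_read_req[ring]->ctx;

	/* Keep the context alive across the final request unreference. */
	i915_gem_context_reference(ctx);

	i915_gem_request_assign(&obj->last_read_req[ring], NULL);

	/* ... retire the write request, clear the ring bit in
	 * obj->active and move the object's VMAs to their VM
	 * inactive lists, as the function does today ...
	 */

	/* Only now can the context destructor run, and by this point
	 * the VMAs are off the active list, so the WARN_ON in the
	 * context/VM cleanup path cannot fire.
	 */
	i915_gem_context_unreference(ctx);

That way the lifetime rule is explicit in this function rather than 
relying on the implicit ordering of the unreference calls.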

Regards,

Tvrtko
