[Intel-gfx] [PATCH] drm/i915: Infrastructure for supporting different GGTT views per object

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Mon Dec 1 09:50:24 PST 2014


On 12/01/2014 05:19 PM, Daniel Vetter wrote:
> On Mon, Dec 01, 2014 at 04:34:16PM +0000, Tvrtko Ursulin wrote:
>>
>> On 12/01/2014 04:07 PM, Daniel Vetter wrote:
>>> On Mon, Dec 01, 2014 at 11:32:42AM +0000, Tvrtko Ursulin wrote:
>>>> On 11/28/2014 05:31 PM, Daniel Vetter wrote:
>>>>> On Thu, Nov 27, 2014 at 02:52:44PM +0000, Tvrtko Ursulin wrote:
>>>>>> From: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>>>>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>>>>>> index 86cf428..6213c07 100644
>>>>>> --- a/drivers/gpu/drm/i915/i915_gem.c
>>>>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>>>>>> @@ -2090,8 +2090,7 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
>>>>>>   			/* For the unbound phase, this should be a no-op! */
>>>>>>   			list_for_each_entry_safe(vma, v,
>>>>>>   						 &obj->vma_list, vma_link)
>>>>>> -				if (i915_vma_unbind(vma))
>>>>>> -					break;
>>>>>> +				i915_vma_unbind(vma);
>>>>>
>>>>> Why drop the early break if a vma_unbind fails? Looks like a superfluous
>>>>> hunk to me.
>>>>
>>>> I wasn't sure about this. (Does it make sense to try and unbind other VMAs
>>>> if one couldn't be unbound?)
>>>>
>>>> In fact, looking at it now, I am not sure about the unbind flow
>>>> (i915_vma_unbind). Won't i915_gem_object_retire move all VMAs to the
>>>> inactive list on the first VMA unbind? Shouldn't it retire only when the
>>>> last VMA goes away?
>>>
>>> Yeah only the first vma_unbind might fail with the current code. The
>>> problem though is that you ignore all failures.
>>
>> I am not sure what you mean. Why only the first unbind can fail?
>>
>> The part I was unsure about was this break removal in the shrinker. Does it
>> make sense to go through all VMAs even if one of them failed to unbind? Is
>> there any space to be gained by doing that?
>>
>> Alternatively, I also looked at it as: if it doesn't make sense to go
>> through all of them, then what to do if the first unbind succeeds and some
>> other fails? The end result sounds the same as trying to unbind as much as
>> possible, so I opted for doing that.
>>
>> My second concern is the object retire on the first VMA unbind. Should that
>> only be done when the last VMA is going away?
>>
>> As it stands (in my v2 patch) it can move all VMAs onto the inactive list
>> when the first one is unbound, which looks wrong.
>
> Well, I started this discussion by simply asking why we need this. I
> think both versions are correct.

OK, I'll try to reverse engineer this some more before v3 and establish the 
answer to the retire question.
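
On the break removal itself, the alternative to silently ignoring failures 
would be to keep walking the list but remember the first error, roughly like 
this (untested sketch; first_err is a hypothetical local which 
i915_gem_shrink() does not currently have):

	/* For the unbound phase, this should be a no-op! */
	list_for_each_entry_safe(vma, v, &obj->vma_list, vma_link) {
		int err = i915_vma_unbind(vma);

		/*
		 * Keep trying the remaining VMAs so the shrinker can
		 * reclaim as much as possible, but do not drop the
		 * first failure on the floor.
		 */
		if (err && !first_err)
			first_err = err;
	}

Whether the shrinker then has anything sensible to do with first_err is of 
course a separate question.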

>>>>>> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
>>>>>> index 89a2f3d..77f1bdc 100644
>>>>>> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
>>>>>> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
>>>>>> @@ -717,10 +717,8 @@ static u32 capture_pinned_bo(struct drm_i915_error_buffer *err,
>>>>>>   			break;
>>>>>>
>>>>>>   		list_for_each_entry(vma, &obj->vma_list, vma_link)
>>>>>> -			if (vma->vm == vm && vma->pin_count > 0) {
>>>>>> +			if (vma->vm == vm && vma->pin_count > 0)
>>>>>>   				capture_bo(err++, vma);
>>>>>> -				break;
>>>>>
>>>>> Not fully sure about this one, but can't hurt I guess.
>>>>
>>>> Not sure if it is useful at the moment, or at all?
>>>
>>> Probably not useful right now. Otoh if we ever wire up the display fault
>>> registers on modern platforms this might become useful to cross-check that
>>> the current display plane register settings match up with the
>>> corresponding buffer. Won't hurt either though.
>>>
>>> If you feel like it, make it a separate patch perhaps.
>>
>> I don't know, it sounds like overkill to do that for such a short hunk, so I
>> would prefer to leave it in if you don't mind.
>
> Unfortunately "let's just leave this slightly unrelated hunk in the patch
> because too much work to split it out" has bitten me countless times in
> gem. So if it's really just cleanup (we seem to agree that both old&new
> work) but not cleanup enough to justify its own patch then I'd like to
> drop it. Not least because churn for churn's sake just makes everyone's
> lives more painful (especially backporters).

Overkill wasn't the right word choice - I did not mean it is too much work 
to split it out, if you are implying laziness. I really saw it as more than 
just cleanup. And since you said "if you feel like it", I didn't bother 
explaining in detail why I think it makes sense for it to stay together.

It will "work" with or without, correct. Just won't capture more than 
one VMA in the error state, after the patch adding support for multiple 
VMAs. So it kind of does and doesn't work, depends how you look at it.

I did not think a separate patch was worth it for the sake of a trivial 
hunk like the one above. But OK, I can split it out, no big deal.

Regards,

Tvrtko


