[Intel-gfx] [PATCH v4 2/3] drm/i915: refactor duplicate object vmap functions (reworked)

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Tue Feb 23 10:31:08 UTC 2016


On 23/02/16 10:06, Chris Wilson wrote:
> On Mon, Feb 22, 2016 at 04:06:39PM +0000, Tvrtko Ursulin wrote:
>>
>> [Cc Chris as the author of the idea.]
>>
>> Hi,
>>
>> On 22/02/16 15:18, Dave Gordon wrote:
>>> This is essentially Chris Wilson's patch of a similar name, reworked on
>>> top of Alex Dai's recent patch:
>>>    drm/i915: Add i915_gem_object_vmap to map GEM object to virtual space
>>> Chris' original commentary said:
>>>
>>>    We now have two implementations for vmapping a whole object, one for
>>>    dma-buf and one for the ringbuffer. If we couple the vmapping into
>>>    the obj->pages lifetime, then we can reuse an obj->vmapping for both
>>>    and at the same time couple it into the shrinker.
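To make the lifecycle concrete, the reuse described above could be modelled roughly as follows. This is a hedged userspace sketch with illustrative names (`gem_object`, `object_pin_vmap`, `object_unpin_vmap`), not the actual i915 implementation: the mapping is created on first pin, shared by later pins, and torn down once the pin count drops to zero (in the real proposal it would instead linger until the shrinker reclaims it).

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Userspace model of the proposed obj->vmapping lifecycle; all names
 * are illustrative, not real i915 code.  malloc()/free() stand in for
 * vmap()/vunmap().
 */
struct gem_object {
	void *vmapping;		/* cached mapping, like obj->vmapping */
	int vmap_pin_count;	/* pins keeping the mapping alive */
	size_t size;
};

void *object_pin_vmap(struct gem_object *obj)
{
	if (!obj->vmapping) {
		obj->vmapping = malloc(obj->size);	/* stands in for vmap() */
		if (!obj->vmapping)
			return NULL;
	}
	obj->vmap_pin_count++;
	return obj->vmapping;
}

void object_unpin_vmap(struct gem_object *obj)
{
	if (--obj->vmap_pin_count == 0) {
		free(obj->vmapping);	/* stands in for vunmap() */
		obj->vmapping = NULL;
	}
}
```

With this shape the dma-buf and ringbuffer paths both reduce to a pin/unpin pair instead of each carrying its own vmap()/vunmap() code.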
>>
>> As a general concept, my worry is that this approach ties two
>> unrelated concepts together.
>>
>> The shrinker is about backing storage (used/free pages in the
>> machine), while vmap is about kernel address space. And then on
>> 32-bit, with its limited vmap space (128MiB, right?), the latter can
>> become exhausted much sooner than the shrinker would be triggered.
>> And we would now rely on the shrinker running to free up address
>> space as well, right?
>
> Yes, we use the shrinker to free address space.
>
>> So unless I am missing something, that doesn't fit well.
>
> The opposite. Even more reason for the shrinker to be able to recover
> vmap space on 32bit systems (for external users, not just ourselves).

How? I don't see that a failed vmapping will trigger shrinking. What
will prevent i915 from accumulating objects whose vmaps potentially
stick around for too long?
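To make the concern concrete, here is a hypothetical userspace model (nothing in this patch does this, and all names are invented for illustration) of what wiring a failed vmap to the shrinker would have to look like: treat the vmap arena as a fixed budget, and on allocation failure drop unpinned cached mappings before retrying once.

```c
#include <assert.h>

/*
 * Hypothetical model, not i915 code: vmap address space as a fixed
 * budget, with a shrink-and-retry step on exhaustion.  Without that
 * step (the situation questioned above), a full arena simply fails
 * while cached-but-unpinned mappings keep the space tied up.
 */
#define VMAP_BUDGET 2		/* pretend only two mappings fit */

static int mapped;		/* mappings currently consuming the budget */
static int cached_unpinned;	/* unpinned mappings a shrinker could drop */

static int try_vmap(void)
{
	if (mapped >= VMAP_BUDGET)
		return 0;	/* address space exhausted */
	mapped++;
	return 1;
}

static void shrink_cached_vmappings(void)
{
	mapped -= cached_unpinned;	/* vunmap() everything unpinned */
	cached_unpinned = 0;
}

int vmap_with_retry(void)
{
	if (try_vmap())
		return 1;
	shrink_cached_vmappings();	/* reclaim address space... */
	return try_vmap();		/* ...then retry exactly once */
}
```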

>>> diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
>>> index 434a452..47f186e 100644
>>> --- a/drivers/gpu/drm/i915/intel_ringbuffer.c
>>> +++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
>>> @@ -2056,7 +2056,7 @@ static int init_phys_status_page(struct intel_engine_cs *ring)
>>>   void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
>>>   {
>>>   	if (HAS_LLC(ringbuf->obj->base.dev) && !ringbuf->obj->stolen)
>>> -		vunmap(ringbuf->virtual_start);
>>> +		i915_gem_object_unpin_vmap(ringbuf->obj);
>>>   	else
>>>   		iounmap(ringbuf->virtual_start);
>>>   	ringbuf->virtual_start = NULL;
>>> @@ -2077,16 +2077,14 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
>>>   			return ret;
>>>
>>>   		ret = i915_gem_object_set_to_cpu_domain(obj, true);
>>> -		if (ret) {
>>> -			i915_gem_object_ggtt_unpin(obj);
>>> -			return ret;
>>> -		}
>>> +		if (ret)
>>> +			goto unpin;
>>>
>>> -		ringbuf->virtual_start = i915_gem_object_vmap_range(obj, 0,
>>> -						ringbuf->size >> PAGE_SHIFT);
>>> -		if (ringbuf->virtual_start == NULL) {
>>> -			i915_gem_object_ggtt_unpin(obj);
>>> -			return -ENOMEM;
>>> +		ringbuf->virtual_start = i915_gem_object_pin_vmap(obj);
>>> +		if (IS_ERR(ringbuf->virtual_start)) {
>>> +			ret = PTR_ERR(ringbuf->virtual_start);
>>> +			ringbuf->virtual_start = NULL;
>>> +			goto unpin;
>>>   		}
>>>   	} else {
>>>   		ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, PIN_MAPPABLE);
>>> @@ -2094,10 +2092,8 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
>>>   			return ret;
>>>
>>>   		ret = i915_gem_object_set_to_gtt_domain(obj, true);
>>> -		if (ret) {
>>> -			i915_gem_object_ggtt_unpin(obj);
>>> -			return ret;
>>> -		}
>>> +		if (ret)
>>> +			goto unpin;
>>>
>>>   		/* Access through the GTT requires the device to be awake. */
>>>   		assert_rpm_wakelock_held(dev_priv);
>>> @@ -2105,14 +2101,18 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
>>>   		ringbuf->virtual_start = ioremap_wc(dev_priv->gtt.mappable_base +
>>>   						    i915_gem_obj_ggtt_offset(obj), ringbuf->size);
>>>   		if (ringbuf->virtual_start == NULL) {
>>> -			i915_gem_object_ggtt_unpin(obj);
>>> -			return -EINVAL;
>>> +			ret = -ENOMEM;
>>> +			goto unpin;
>>
>> Another refactoring not really belonging to this patch. I am not
>> sure it is that good to share the cleanup path from the two
>> logically split branches. May be fragile in the future. But it is
>> short enough so OK.
>>
>> But as a related question, I wonder why the LRC doesn't require the
>> same !HAS_LLC approach when mapping as the ring buffer does here?
>
> We don't try to use stolen for the LRC. The main difficulty lies in
> deciding how to map the state object: stolen forces us to use an
> ioremapping through the GTT, which is only suitable for write-only
> mappings. However, we could be using the per-context HWS, for which we
> want a CPU-accessible, direct pointer.

I wasn't asking about stolen but about the !HAS_LLC path. Even
non-stolen ring buffers will be mapped via the aperture on !HAS_LLC
platforms. That implies it is about cache coherency, yet we don't have
the same treatment for the LRC page.

Until your vma->iomap prototype, that is, which added the same uncached
access to the LRC as well.

So my question was: do we need this for cache considerations today,
irrespective of caching the pointer in the VMA?

Regards,

Tvrtko

