[Intel-gfx] [PATCH 17/41] drm/i915: Pass around sg_table to get_pages/put_pages backend

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Mon Oct 17 13:51:01 UTC 2016


On 17/10/2016 12:31, Chris Wilson wrote:
> On Mon, Oct 17, 2016 at 11:55:54AM +0100, Tvrtko Ursulin wrote:
>> On 14/10/2016 13:18, Chris Wilson wrote:
>>>   static void
>>> -i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj)
>>> +i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj,
>>> +			      struct sg_table *pages)
>>>   {
>>>   	struct sgt_iter sgt_iter;
>>>   	struct page *page;
>>> -	int ret;
>>> -	GEM_BUG_ON(obj->mm.madv == __I915_MADV_PURGED);
>>> -
>>> -	ret = i915_gem_object_set_to_cpu_domain(obj, true);
>>> -	if (WARN_ON(ret)) {
>>> -		/* In the event of a disaster, abandon all caches and
>>> -		 * hope for the best.
>>> -		 */
>>> -		i915_gem_clflush_object(obj, true);
>>> -		obj->base.read_domains = obj->base.write_domain = I915_GEM_DOMAIN_CPU;
>>> -	}
>>> +	__i915_gem_object_release_shmem(obj);
>> Waiting for the object to become inactive is now gone. I did not
>> spot where that changed in this patch. Did it?
> There's no wait here. We have BUG_ONs today (that remain in place)
> to catch trying to free pages that are in use by the GPU. This is just a
> change of domains (and I wanted to keep the set-to-cpu-domain asserts,
> and avoid the other side effects).

Oh right, makes sense.
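
For my own notes, this is roughly what I'd expect 
__i915_gem_object_release_shmem to boil down to -- a sketch inferred 
from the removed code above and your explanation, not copied from the 
patch, so the real body may well differ:

static void
__i915_gem_object_release_shmem(struct drm_i915_gem_object *obj)
{
	/* Don't bother keeping dirty data around for pages userspace
	 * has already told us it doesn't need.
	 */
	if (obj->mm.madv == I915_MADV_DONTNEED)
		obj->mm.dirty = false;

	/* The "change of domains" without the wait: flush the CPU
	 * caches if required and mark the object CPU coherent,
	 * preserving what the old set-to-cpu-domain path asserted
	 * while avoiding its other side effects.
	 */
	if ((obj->base.read_domains & I915_GEM_DOMAIN_CPU) == 0)
		i915_gem_clflush_object(obj, true);

	obj->base.read_domains = obj->base.write_domain = I915_GEM_DOMAIN_CPU;
}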

>   
>>>   int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
>>>   {
>>> -	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
>>> -	const struct drm_i915_gem_object_ops *ops = obj->ops;
>>> -	int ret;
>>> +	int err;
>>>   	lockdep_assert_held(&obj->base.dev->struct_mutex);
>>>   	if (obj->mm.pages)
>>>   		return 0;
>>> -	if (obj->mm.madv != I915_MADV_WILLNEED) {
>>> -		DRM_DEBUG("Attempting to obtain a purgeable object\n");
>>> -		__i915_gem_object_unpin_pages(obj);
>>> -		return -EFAULT;
>>> -	}
>>> -
>>> -	ret = ops->get_pages(obj);
>>> -	if (ret) {
>>> +	err = ____i915_gem_object_get_pages(obj);
>>> +	if (err)
>>>   		__i915_gem_object_unpin_pages(obj);
>>> -		return ret;
>>> -	}
>>> -	list_add_tail(&obj->global_list, &dev_priv->mm.unbound_list);
>> Objects will no longer appear on the unbound list when they have
>> backing store?
> They still do. We don't hold the struct_mutex here, so we defer the
> global list tracking to the domain management, which happens slightly
> later.

Later, as in when the VMA gets pinned?...

> However, there is still a window of opportunity where we have pages
> pinned for our use but not yet visible to the shrinker. A bit of
> rejigging should mean we only need to move the object to the unbound
> list when pages_pin_count reaches 0. That will require a spinlock, but
> may actually work out to be less code. It won't yet reduce the
> struct_mutex hold time in the shrinker though, so I'm not seeing the
> upside yet.

... in which case userspace just creating objects and not touching them 
avoids putting them on the bound/unbound lists?
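
To make sure I understand the rejigging you mean, would it look 
something like the below? Entirely hypothetical -- the obj_lock 
spinlock is my invention, and this ignores the bound list and assumes 
pages_pin_count is otherwise serialised by the callers:

static void __i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
{
	struct drm_i915_private *i915 = to_i915(obj->base.dev);

	/* First pin: the pages are now in use, so hide the object
	 * from the shrinker without needing struct_mutex.
	 */
	if (obj->mm.pages_pin_count++ == 0) {
		spin_lock(&i915->mm.obj_lock);
		list_del_init(&obj->global_list);
		spin_unlock(&i915->mm.obj_lock);
	}
}

static void __i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
{
	struct drm_i915_private *i915 = to_i915(obj->base.dev);

	/* Last unpin: the object becomes a shrinker candidate again. */
	if (--obj->mm.pages_pin_count == 0) {
		spin_lock(&i915->mm.obj_lock);
		list_add_tail(&obj->global_list, &i915->mm.unbound_list);
		spin_unlock(&i915->mm.obj_lock);
	}
}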

>>> +static bool unsafe_drop_pages(struct drm_i915_gem_object *obj)
>>> +{
>>> +	if (i915_gem_object_unbind(obj) == 0)
>>> +		__i915_gem_object_put_pages(obj);
>>> +	return !obj->mm.pages;
>>> +}
>> Can we have some comments with this function?
It may or may not result in the pages being freed, and it may or may
not result in the object being unreferenced. There used to be a
function called drop_pages (with the same caveats) that got misused, so
I removed it and wanted to make sure those mistakes weren't repeated.

Hm OK, not that important in this case.
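
For reference, this is the kind of comment I was after, with the 
caveats from your reply folded in (wording mine):

/* Attempt to release the object's backing pages for the shrinker.
 * Caveats, per the discussion above: this may or may not actually
 * free the pages (they are only freed once the last pin is dropped),
 * and the unbind may itself drop object references, so the caller
 * must hold its own reference and only trust the return value.
 */
static bool unsafe_drop_pages(struct drm_i915_gem_object *obj)
{
	if (i915_gem_object_unbind(obj) == 0)
		__i915_gem_object_put_pages(obj);

	return !obj->mm.pages;
}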

Regards,

Tvrtko
