[Intel-gfx] [PATCH 43/55] drm/i915: Refactor activity tracking for requests

Chris Wilson chris at chris-wilson.co.uk
Wed Jul 27 07:57:03 UTC 2016


On Wed, Jul 27, 2016 at 10:40:14AM +0300, Joonas Lahtinen wrote:
> On ma, 2016-07-25 at 18:32 +0100, Chris Wilson wrote:
> >  static void
> > -i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
> > +i915_gem_object_retire__fence(struct i915_gem_active *active,
> > +			      struct drm_i915_gem_request *req)
> >  {
> > -	GEM_BUG_ON(!__i915_gem_active_is_busy(&obj->last_write));
> > -	GEM_BUG_ON(!(obj->active &
> > -		     intel_engine_flag(i915_gem_active_get_engine(&obj->last_write,
> > -								  &obj->base.dev->struct_mutex))));
> > +}
> >  
> 
> An empty function? It could at least have a comment explaining why it
> is currently empty.

To avoid the branch inside the retirement loop for an infrequent call.
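That is, with the callback mandatory the loop can invoke it
unconditionally; a rough sketch of the two alternatives (decoupling of
the node elided):

	/* mandatory callback: just the indirect call, no test per node */
	list_for_each_entry_safe(active, next, &request->active_list, link)
		active->retire(active, request); /* may be a no-op */

	/* optional callback: an extra conditional on every iteration */
	list_for_each_entry_safe(active, next, &request->active_list, link)
		if (active->retire)
			active->retire(active, request);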

> 
> > -	i915_gem_active_set(&obj->last_write, NULL);
> > -	intel_fb_obj_flush(obj, true, ORIGIN_CS);
> > +static void
> > +i915_gem_object_retire__write(struct i915_gem_active *active,
> > +			      struct drm_i915_gem_request *request)
> > +{
> > +	intel_fb_obj_flush(container_of(active,
> > +					struct drm_i915_gem_object,
> > +					last_write),
> 
> Add a helper function, manual container_of()s are horrible. And do it
> at the beginning of the function, as a separate line, too.
> 
> > +			   true,
> > +			   ORIGIN_CS);
> >  }
> >  
> >  static void
> > -i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int idx)
> > +i915_gem_object_retire__read(struct i915_gem_active *active,
> > +			     struct drm_i915_gem_request *request)
> >  {
> > -	struct intel_engine_cs *engine;
> > +	int idx = request->engine->id;
> > +	struct drm_i915_gem_object *obj =
> > +		container_of(active, struct drm_i915_gem_object, last_read[idx]);
> 
> Ditto.

No.
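
(For the indexed tracker a wrapper would still need the engine id, so
it saves nothing over the open-coded form; a hypothetical helper:

	static struct drm_i915_gem_object *
	active_to_obj(struct i915_gem_active *active, int idx)
	{
		return container_of(active, struct drm_i915_gem_object,
				    last_read[idx]);
	}

and every caller has to compute idx from the request first anyway.)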

> 
> >  	struct i915_vma *vma;
> >  
> > -	GEM_BUG_ON(!__i915_gem_active_is_busy(&obj->last_read[idx]));
> > -	GEM_BUG_ON(!(obj->active & (1 << idx)));
> > -
> > -	list_del_init(&obj->engine_list[idx]);
> > -	i915_gem_active_set(&obj->last_read[idx], NULL);
> > -
> > -	engine = i915_gem_active_get_engine(&obj->last_write,
> > -					    &obj->base.dev->struct_mutex);
> > -	if (engine && engine->id == idx)
> > -		i915_gem_object_retire__write(obj);
> > +	GEM_BUG_ON((obj->active & (1 << idx)) == 0);
> 
> BIT() or maybe even ENGINE_MASK(), now that we have such a beauty. Or
> do you intend to make this about something other than engines
> eventually?

Yes. You will get to those patches eventually.
> 
> >  
> >  	obj->active &= ~(1 << idx);
> >  	if (obj->active)
> > @@ -2419,15 +2384,13 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int idx)
> >  	 * so that we don't steal from recently used but inactive objects
> >  	 * (unless we are forced to ofc!)
> >  	 */
> > -	list_move_tail(&obj->global_list,
> > -		       &to_i915(obj->base.dev)->mm.bound_list);
> > +	list_move_tail(&obj->global_list, &request->i915->mm.bound_list);
> 
> As a follow-up s/global_list/global_link/?
> 
> >  void i915_gem_retire_requests(struct drm_i915_private *dev_priv)
> > @@ -2818,8 +2742,7 @@ out:
> >  }
> >  
> >  static int
> > -__i915_gem_object_sync(struct drm_i915_gem_object *obj,
> > -		       struct drm_i915_gem_request *to,
> > +__i915_gem_object_sync(struct drm_i915_gem_request *to,
> >  		       struct drm_i915_gem_request *from)
> >  {
> >  	int ret;
> > @@ -2827,9 +2750,6 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
> >  	if (to->engine == from->engine)
> >  		return 0;
> >  
> > -	if (i915_gem_request_completed(from))
> > -		return 0;
> > -
> 
> Why remove the early exit?

I made it redundant. There is also a small improvement from using the
semaphore-seqno tracking (which actually applies equally to the bdw
!semaphore path; more on that later).
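
The completed check has effectively moved into the accessor (see the
i915_gem_active_peek() hunk below), so by the time
__i915_gem_object_sync() sees a request it is known to be incomplete.
Roughly, assuming the caller fetches the request through the active
tracker:

	from = i915_gem_active_peek(&obj->last_write,
				    &obj->base.dev->struct_mutex);
	if (!from) /* NULL for both idle and already-completed */
		return 0;

	return __i915_gem_object_sync(to, from);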

> > @@ -172,6 +176,24 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
> >  	 */
> >  	request->ring->last_retired_head = request->postfix;
> >  
> > +	/* Walk through the active list, calling retire on each. This allows
> > +	 * objects to track their GPU activity and mark themselves as idle
> > +	 * when their *last* active request is completed (updating state
> > +	 * tracking lists for eviction, active references for GEM, etc).
> > +	 *
> > +	 * As the ->retire() may free the node, we decouple it first and
> > +	 * pass along the auxiliary information (to avoid dereferencing
> > +	 * the node after the callback).
> > +	 */
> > +	list_for_each_entry_safe(active, next, &request->active_list, link) {
> > +		prefetchw(next);
> 
> Wouldn't this be an improvement to fold into
> list_for_each_entry{,_safe} itself, rather?

It's been tried before. It's not a universal improvement. This loop is
one of the top functions in the profiles when handling large request
objects, with perf implying that the memory loads are where our time
goes.
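
For reference, prefetchw() hints the CPU to pull the next node's
cacheline in writable state while the current callback runs; the very
next iteration writes to that node:

	list_for_each_entry_safe(active, next, &request->active_list, link) {
		prefetchw(next); /* warm next->link for the coming writes */

		INIT_LIST_HEAD(&active->link); /* ...the writes it warms */
		active->__request = NULL;

		active->retire(active, request);
	}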

> > +
> > +		INIT_LIST_HEAD(&active->link);
> > +		active->__request = NULL;
> > +
> > +		active->retire(active, request);
> > +	}
> > +
> >  	i915_gem_request_remove_from_client(request);
> >  
> >  	if (request->previous_context) {
> >  
> > @@ -705,10 +723,13 @@ int i915_wait_request(struct drm_i915_gem_request *req)
> >  {
> >  	int ret;
> >  
> > -	GEM_BUG_ON(!req);
> >  	lockdep_assert_held(&req->i915->drm.struct_mutex);
> > +	GEM_BUG_ON(list_empty(&req->link));
> 
> Hmm, why is there no waiting on requests without the tracker object?
> Or does one then need to use __i915_wait_request()? Kerneldoc might be
> useful.

Not sure I follow, but to explain the two changes:

-GEM_BUG_ON(!req): we declare the parameter as non-NULL, and a NULL
pointer dereference oops isn't much harder to pinpoint than a BUG().

+GEM_BUG_ON(list_empty(&req->link)): an assertion that we are not
waiting on an already-retired request.
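
Retirement empties the link, so the assertion fires exactly for
retired requests. The invariant, assuming retire performs a
list_del_init() on the link:

	/* in i915_gem_request_retire() (sketch): */
	list_del_init(&request->link); /* request->link is now empty */

	/* hence in i915_wait_request(): */
	GEM_BUG_ON(list_empty(&req->link)); /* waiting on a retired request */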

> >  i915_gem_active_peek(const struct i915_gem_active *active, struct mutex *mutex)
> >  {
> > -	return active->__request;
> > +	struct drm_i915_gem_request *request;
> > +
> > +	request = active->__request;
> > +	if (!request || i915_gem_request_completed(request))
> > +		return NULL;
> 
> I see early exit was kinda migrated here.

Kinda, but it is also a significant difference in behaviour: a
completed request now reads back as idle (NULL) even before it is
retired.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

