[Intel-gfx] [PATCH 01/22] drm/i915: Combine loops within i915_gem_evict_something

Chris Wilson chris at chris-wilson.co.uk
Fri Jul 29 06:31:43 UTC 2016


On Fri, Jul 29, 2016 at 09:17:00AM +0300, Joonas Lahtinen wrote:
> On ke, 2016-07-27 at 12:14 +0100, Chris Wilson wrote:
> > --- a/drivers/gpu/drm/i915/i915_gem_evict.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_evict.c
> > @@ -34,6 +34,19 @@
> >  #include "i915_trace.h"
> >  
> >  static bool
> > +gpu_is_idle(struct drm_i915_private *dev_priv)
> > +{
> > +	struct intel_engine_cs *engine;
> > +
> > +	for_each_engine(engine, dev_priv) {
> > +		if (!list_empty(&engine->request_list))
> > +			return false;
> > +	}
> 
> Braces are not necessary here.
>  
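For reference, the braceless form of that loop would be:

	for_each_engine(engine, dev_priv)
		if (!list_empty(&engine->request_list))
			return false;
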
> >  	/*
> >  	 * The goal is to evict objects and amalgamate space in LRU order.
> >  	 * The oldest idle objects reside on the inactive list, which is in
> > -	 * retirement order. The next objects to retire are those on the (per
> > -	 * ring) active list that do not have an outstanding flush. Once the
> > -	 * hardware reports completion (the seqno is updated after the
> > -	 * batchbuffer has been finished) the clean buffer objects would
> > -	 * be retired to the inactive list. Any dirty objects would be added
> > -	 * to the tail of the flushing list. So after processing the clean
> > -	 * active objects we need to emit a MI_FLUSH to retire the flushing
> > -	 * list, hence the retirement order of the flushing list is in
> > -	 * advance of the dirty objects on the active lists.
> > +	 * retirement order. The next objects to retire are those in flight,
> > +	 * on the active list, again in retirement order.
> >  	 *
> >  	 * The retirement sequence is thus:
> >  	 *   1. Inactive objects (already retired)
> > -	 *   2. Clean active objects
> > -	 *   3. Flushing list
> > -	 *   4. Dirty active objects.
> > +	 *   2. Active objects (will stall on unbinding)
> 
> Not quite sure how good a sequence list is for two phases :)
>  
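In the patch itself the two phases do collapse into a single scan over
both lists; roughly (a sketch only, reusing the existing mark_free()
helper and the vm->inactive_list/vm->active_list names from
i915_gem_evict.c):

	struct list_head *phases[] = {
		&vm->inactive_list,
		&vm->active_list,
		NULL,
	}, **phase;

	phase = phases;
	do {
		list_for_each_entry(vma, *phase, vm_link)
			if (mark_free(vma, &eviction_list))
				goto found;
	} while (*++phase);
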
> >  found:
> >  	/* drm_mm doesn't allow any other operations while
> > -	 * scanning, therefore store to be evicted objects on a
> > -	 * temporary list. */
> > -	INIT_LIST_HEAD(&eviction_list);
> > -	while (!list_empty(&unwind_list)) {
> > -		vma = list_first_entry(&unwind_list,
> > -				       struct i915_vma,
> > -				       exec_list);
> > -		if (drm_mm_scan_remove_block(&vma->node)) {
> > +	 * scanning, therefore store to-be-evicted objects on a
> > +	 * temporary list and take a reference for all before
> > +	 * calling unbind (which may remove the active reference
> > +	 * of any of our objects, thus corrupting the list).
> > +	 */
> > +	list_for_each_entry_safe(vma, next, &eviction_list, exec_list) {
> 
> s/exec_list/exec_link/ at some point in future.

Look ahead, it becomes evict_link.

> > +		if (drm_mm_scan_remove_block(&vma->node))
> >  			vma->pin_count++;
> > -			list_move(&vma->exec_list, &eviction_list);
> > -			continue;
> > -		}
> > -		list_del_init(&vma->exec_list);
> > +		else
> > +			list_del_init(&vma->exec_list);
> 
> Current behaviour is not changed, but gotta ask: why not put the vma
> back on the list it originated from?

It's never moved off the vma's own lists; exec_list is a slot reserved
for linking into two particular short-lived temporary lists (exec and
evict). In the nearish future, I propose we stop using exec_list as the
unique identifier for an execobject and instead have separate
exec_link/evict_link members, so that both lists can be kept alive
concurrently.
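
As a rough sketch of that split (member names as proposed above, the
rest of the struct elided):

	struct i915_vma {
		/* ... */
		struct list_head exec_link;	/* execbuf reservation list */
		struct list_head evict_link;	/* eviction temporary list */
		/* ... */
	};

The eviction loop above would then walk eviction_list via evict_link
while execbuf keeps exec_link for its own reservation list.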

Trying to avoid allocating more temporary storage inside execbuf is a
pain. But using the vma itself as temporary storage for execbuf has to
die, because we need to allow concurrency.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre

