[Intel-gfx] [RFC 1/4] drm/i915: Implement a framework for batch buffer pools
Volkin, Bradley D
bradley.d.volkin at intel.com
Fri Jun 20 18:06:06 CEST 2014
On Fri, Jun 20, 2014 at 08:41:08AM -0700, Tvrtko Ursulin wrote:
>
> On 06/20/2014 04:30 PM, Volkin, Bradley D wrote:
> > On Fri, Jun 20, 2014 at 06:25:56AM -0700, Tvrtko Ursulin wrote:
> >>
> >> On 06/19/2014 06:35 PM, Volkin, Bradley D wrote:
> >>> On Thu, Jun 19, 2014 at 02:48:29AM -0700, Tvrtko Ursulin wrote:
> >>>>
> >>>> Hi Brad,
> >>>>
> >>>> On 06/18/2014 05:36 PM, bradley.d.volkin at intel.com wrote:
> >> Cap or no cap (I am for no cap), but the pool is still "grow only" at
> >> the moment, no? So after one allocation storm, the objects on the pool's
> >> inactive list end up wasting memory forever.
> >
> > Oh, so what happens is that when you put() an object back in the pool, we
> > set obj->madv = I915_MADV_DONTNEED, which should tell the shrinker that it
> > can drop the backing storage for the object if we need space. When you get()
> > an object, we set obj->madv = I915_MADV_WILLNEED and get new backing pages.
> > So the number of objects grows (capped or not), but the memory used can be
> > controlled.
>
> Education time for me I see. :)
>
> So the object is in pool->inactive_list _and_ on some other list so
> the shrinker can find it?
Yes. In fact, they are on several other lists. Here's my understanding:

The dev_priv->mm struct has bound_list and unbound_list, which track which
objects are or are not bound in some gtt. The shrinker operates on these
lists to drop backing storage when we need physical space. A pool object
lands on bound_list when we explicitly ggtt_pin() it and goes back to
unbound_list when the object eventually gets an unbind().

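To make the madvise interplay concrete, the get()/put() paths described
above boil down to something like the following (a condensed sketch rather
than the literal patch code; locking, error handling, and the pool's own
active-list bookkeeping are omitted):

struct drm_i915_gem_object *
batch_pool_get(struct i915_gem_batch_pool *pool, size_t size)
{
        struct drm_i915_gem_object *obj, *next;

        list_for_each_entry_safe(obj, next, &pool->inactive_list,
                                 batch_pool_list) {
                if (obj->base.size >= size) {
                        list_del_init(&obj->batch_pool_list);
                        /*
                         * Mark the pages wanted again; the caller's
                         * subsequent pin will reallocate backing storage
                         * if the shrinker dropped it in the meantime.
                         */
                        obj->madv = I915_MADV_WILLNEED;
                        return obj;
                }
        }

        /* Nothing suitable in the pool: allocate a fresh object. */
        return i915_gem_alloc_object(pool->dev, size);
}

void batch_pool_put(struct i915_gem_batch_pool *pool,
                    struct drm_i915_gem_object *obj)
{
        /*
         * Back on the pool's inactive list, with the backing pages
         * marked discardable so the shrinker can reclaim them under
         * memory pressure.
         */
        list_add_tail(&obj->batch_pool_list, &pool->inactive_list);
        obj->madv = I915_MADV_DONTNEED;
}
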
The ring structs have an active_list, which tracks objects used by work
still in progress on that ring. The i915_address_space structs have an
inactive_list, which contains vmas that are bound in that address space but
are not used by work still in progress. The driver uses these to evict
objects in a given gtt/ppgtt when we need GPU virtual address space. We
explicitly put a pool object's vma on the active_list with a
move_to_active() call at the end of do_execbuffer(), and retiring the
request moves it to the appropriate i915_address_space inactive_list.
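Condensed, a pool buffer's trip through one execbuffer is roughly this
(an outline, not the literal code; the real paths are
i915_gem_do_execbuffer() and the request retirement code, with all
locking and refcounting omitted):

        /*
         * 1. End of do_execbuffer(): the pool object's vma joins the
         *    ring's active_list, so the eviction code leaves it alone
         *    while the GPU may still be reading it.
         */
        i915_vma_move_to_active(vma, ring);

        /*
         * 2. Request retirement: once the GPU is done with the work,
         *    the vma moves to the inactive_list of the address space
         *    it is bound in, where it becomes an eviction candidate
         *    the next time we need GPU virtual address space.
         */
        list_move_tail(&vma->mm_list, &vma->vm->inactive_list);
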
Brad
>
> Thanks,
>
> Tvrtko