[Intel-gfx] [PATCH v3 2/5] drm/i915: Use batch pools with the command parser
Volkin, Bradley D
bradley.d.volkin at intel.com
Wed Nov 5 23:42:00 CET 2014
[snip]
On Wed, Nov 05, 2014 at 01:50:24AM -0800, Daniel Vetter wrote:
> On Tue, Nov 04, 2014 at 08:35:00AM -0800, Volkin, Bradley D wrote:
> > On Tue, Nov 04, 2014 at 02:17:59AM -0800, Daniel Vetter wrote:
> > > On Mon, Nov 03, 2014 at 11:19:42AM -0800, bradley.d.volkin at intel.com wrote:
> > > > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > > > index 4dbd7b9..c59100d 100644
> > > > --- a/drivers/gpu/drm/i915/i915_gem.c
> > > > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > > > @@ -2500,6 +2500,14 @@ static void i915_gem_free_request(struct drm_i915_gem_request *request)
> > > >  	if (request->ctx)
> > > >  		i915_gem_context_unreference(request->ctx);
> > > > 
> > > > +	if (request->batch_obj &&
> > > > +	    !list_empty(&request->batch_obj->batch_pool_list)) {
> > > > +		struct drm_i915_private *dev_priv = to_i915(request->ring->dev);
> > > > +
> > > > +		i915_gem_batch_pool_put(&dev_priv->mm.batch_pool,
> > > > +					request->batch_obj);
> > > > +	}
> > >
> > > This looks a bit like reinvented active tracking. If we copy the libdrm
> > > cache design a bit more the cache would simply walk the active list when
> > > there's nothing in the inactive list and shovel all the objects with
> > > ->active.
> > >
> > > The advantage would be that we wouldn't leak special batch_pool_put code
> > > all over the place, keeping things nice&tidy.
> >
> > Chris had also suggested looking at the libdrm cache with respect to this,
> > but I was concerned about the impact of the scheduler reordering requests.
> > I'd have to go back and look at libdrm, but it sounded like it was relying
> > on the FIFO nature of submissions. Could be wrong though.
>
> It just walks the list and moves all the non-active stuff over. Not
> terribly efficient in the face of a scheduler (or multiple rings even),
> but I think we could start to care about this if it turns out to be a real
> problem.
I'm testing patches with the majority of the changes you've requested, so I
should have a v4 tonight/tomorrow.
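
Just to make sure I'm reading the libdrm-style scheme the same way you are,
the get path would do something roughly like this (sketch only; the
active_list/inactive_list names are placeholders, not what's actually in the
patches):

	/* in i915_gem_batch_pool_get(), before looking for a reusable buffer */
	if (list_empty(&pool->inactive_list)) {
		struct drm_i915_gem_object *obj, *next;

		/* libdrm-style lazy reap: shovel everything that has gone
		 * idle over to the inactive list and mark it purgeable */
		list_for_each_entry_safe(obj, next, &pool->active_list,
					 batch_pool_list) {
			if (obj->active)
				continue;

			list_move_tail(&obj->batch_pool_list,
				       &pool->inactive_list);
			obj->madv = I915_MADV_DONTNEED;
		}
	}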
For this part, I've got an implementation that works OK, but one difference is
that if we stop submitting batches, and therefore stop calling batch_pool_get,
we also stop moving buffers to the batch pool's inactive list. This means some
buffers never get marked purgeable even once they've gone idle. The solution I
see is to add a function that does the batch pool's active -> inactive work and
call it from the appropriate place(s), but that seems to defeat the purpose of
the proposed change. Suggestions?
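
For reference, what I'm describing is something like the below (the function
name is made up), i.e. the walk from the _get() sketch above pulled out on its
own:

static void i915_gem_batch_pool_reap(struct i915_gem_batch_pool *pool)
{
	struct drm_i915_gem_object *obj, *next;

	/* move everything that has gone idle to the inactive list and
	 * mark it purgeable, independent of new batch submissions */
	list_for_each_entry_safe(obj, next, &pool->active_list,
				 batch_pool_list) {
		if (obj->active)
			continue;

		list_move_tail(&obj->batch_pool_list, &pool->inactive_list);
		obj->madv = I915_MADV_DONTNEED;
	}
}

with a call from somewhere like i915_gem_retire_requests(), so idle pool
buffers still become purgeable when no new batches are coming in.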
Brad