[Freedreno] [PATCH 10/13] drm/msm: Support multiple ringbuffers

Daniel Vetter daniel at ffwll.ch
Wed May 31 07:21:41 UTC 2017


On Tue, May 30, 2017 at 12:34:34PM -0400, Alex Deucher wrote:
> On Tue, May 30, 2017 at 12:20 PM, Jordan Crouse <jcrouse at codeaurora.org> wrote:
> > On Sun, May 28, 2017 at 09:43:35AM -0400, Rob Clark wrote:
> >> On Mon, May 8, 2017 at 4:35 PM, Jordan Crouse <jcrouse at codeaurora.org> wrote:
> >> > Add the infrastructure to support the idea of multiple ringbuffers.
> >> > Assign each ringbuffer an id and use that as an index for the various
> >> > ring specific operations.
> >> >
> >> > The biggest delta is to support legacy fences. Each fence gets its own
> >> > sequence number, but the legacy functions expect a unique integer. To
> >> > handle this we return a unique identifier for each submission but map
> >> > it to a specific ring/sequence under the covers. Newer users use a
> >> > dma_fence pointer anyway, so they don't care about the actual sequence
> >> > ID or ring.
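
A minimal sketch of what that unique-id-to-(ring, seqno) mapping could
look like, using an idr; the struct and function names here are
illustrative, not the actual patch code:

    #include <linux/idr.h>
    #include <linux/slab.h>

    struct msm_fence_entry {
            struct msm_ringbuffer *ring;    /* ring the submit went to */
            uint32_t seqno;                 /* per-ring sequence number */
    };

    static DEFINE_IDR(fence_idr);           /* protected by a driver lock */

    /* Allocate a driver-unique id for a submission and remember which
     * ring/seqno it maps to, so a legacy WAIT_FENCE can resolve it. */
    static int msm_fence_map(struct msm_ringbuffer *ring, uint32_t seqno)
    {
            struct msm_fence_entry *e = kmalloc(sizeof(*e), GFP_KERNEL);

            if (!e)
                    return -ENOMEM;

            e->ring = ring;
            e->seqno = seqno;

            /* the returned id is what legacy userspace sees as its fence */
            return idr_alloc(&fence_idr, e, 1, 0, GFP_KERNEL);
    }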
> >>
> >> So, WAIT_FENCE is alive and well, and useful since it avoids the
> >> overhead of creating a 'struct file', but it is only used within a
> >> single pipe_context (or at least in situations where we know which ctx
> >> the seqno fence applies to).  It seems like it would be simpler if we
> >> just introduced a ctx-id in all the ioctls (SUBMIT and WAIT_FENCE)
> >> that take a uint fence.  Then I think we don't need the hashtable
> >> fanciness.
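
A rough sketch of the kind of UABI addition being suggested; the field
names are illustrative and not the actual msm_drm.h layout:

    /* hypothetical: WAIT_FENCE learns which context the seqno belongs to */
    struct drm_msm_wait_fence {
            __u32 fence;                      /* in: seqno fence to wait on */
            __u32 ctx_id;                     /* in: context that issued it */
            struct drm_msm_timespec timeout;  /* in: absolute timeout */
    };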
> >>
> >> Also, one thing I was thinking of is that someday we might want to
> >> make SUBMIT non-blocking when there is a dependency on a fence from a
> >> different ring (ie. queue it up but don't write cmds into the rb yet),
> >> which means we'd need multiple fence timelines per priority-level rb.
> >> That brings me back to wanting a CREATE_CTX type of ioctl (and I
> >> guess DESTROY_CTX).  We could make these simple stubs for now, ie.
> >> CREATE_CTX just returns the priority level back, without any separate
> >> "context" object on the kernel side.  This wouldn't change the
> >> implementation much from what you have, but I think it gives us the
> >> flexibility to later let multiple contexts at a given priority level
> >> avoid blocking each other on submits that are still pending on some
> >> fence, without another UABI change.
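
A stub along those lines could be as small as this sketch (the struct
layout, the NR_RINGS bound, and the function name are all hypothetical):

    /* hypothetical stub: no kernel-side context object yet, the id is
     * simply the validated priority level echoed back */
    struct drm_msm_ctx_create {
            __u32 prio;     /* in: requested priority level / ring */
            __u32 ctx_id;   /* out: context id (== prio for now) */
    };

    static int msm_ioctl_ctx_create(struct drm_device *dev, void *data,
                                    struct drm_file *file)
    {
            struct drm_msm_ctx_create *args = data;

            if (args->prio >= NR_RINGS)     /* NR_RINGS: assumed ring count */
                    return -EINVAL;

            args->ctx_id = args->prio;
            return 0;
    }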
> >
> > Sure. My motivation here was mostly to avoid making that decision, because I
> > know from experience that once we start down that path we end up using the
> > context ID for everything and re-spinning a bunch of APIs.
> >
> > But I agree that the context concept is our inevitable future - I've already
> > posted one set of patches for "draw queues" (which will soon be bravely renamed
> > to submit queues). I think that's the way we want to go because, as you said,
> > there is a 100% chance we'll go for asynchronous submissions in the very near
> > future.
> >
> > That said, there is a bit of added complexity for per-queue fences - namely,
> > we need to move the per-ring fence value in the memptrs to a per-queue value.
> > This probably isn't a huge deal (an extra page of memory would give us up to
> > 1024 queues to work with globally) but I get itchy every time an arbitrary
> > limit is introduced no matter how reasonable it might be.
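
For reference, the arithmetic behind that figure, assuming 4K pages and
a 32-bit fence value per queue (the names here are illustrative):

    /* one page of fence slots: 4096 bytes / 4 bytes per u32 == 1024 queues */
    #define MAX_QUEUES (PAGE_SIZE / sizeof(uint32_t))

    struct msm_memptrs {
            volatile uint32_t fence[MAX_QUEUES];    /* one slot per queue */
    };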
> >
> 
> FWIW, we have contexts in amdgpu and it makes a lot of things easier
> when dealing with dependencies.  Feel free to browse our
> implementation for ideas.

Same on i915: we use contexts (not batches) as the scheduling entity.
Think of them like threads on a CPU, at least in our case. And we can
dynamically allocate as many as we need (well, until we run out of memory
of course); we can even swap them in/out :-)
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

