[Intel-gfx] [PATCH 06/50] drm/i915: s/intel_ring_buffer/intel_engine
Mateo Lozano, Oscar
oscar.mateo at intel.com
Mon May 19 15:41:48 CEST 2014
> -----Original Message-----
> From: Daniel Vetter [mailto:daniel.vetter at ffwll.ch] On Behalf Of Daniel Vetter
> Sent: Monday, May 19, 2014 1:21 PM
> To: Mateo Lozano, Oscar
> Cc: Daniel Vetter; Lespiau, Damien; intel-gfx at lists.freedesktop.org
> Subject: Re: [Intel-gfx] [PATCH 06/50] drm/i915:
> s/intel_ring_buffer/intel_engine
>
> On Mon, May 19, 2014 at 10:02:07AM +0000, Mateo Lozano, Oscar wrote:
> > Hi Daniel,
> >
> > > -----Original Message-----
> > > From: Daniel Vetter [mailto:daniel.vetter at ffwll.ch] On Behalf Of
> > > Daniel Vetter
> > > Sent: Thursday, May 15, 2014 9:52 PM
> > > To: Mateo Lozano, Oscar
> > > Cc: Lespiau, Damien; Daniel Vetter; intel-gfx at lists.freedesktop.org
> > > Subject: Re: [Intel-gfx] [PATCH 06/50] drm/i915:
> > > s/intel_ring_buffer/intel_engine
> > >
> > > On Thu, May 15, 2014 at 02:17:23PM +0000, Mateo Lozano, Oscar wrote:
> > > > > -----Original Message-----
> > > > > From: Lespiau, Damien
> > > > > Sent: Wednesday, May 14, 2014 2:26 PM
> > > > > To: Daniel Vetter
> > > > > Cc: Mateo Lozano, Oscar; intel-gfx at lists.freedesktop.org
> > > > > Subject: Re: [Intel-gfx] [PATCH 06/50] drm/i915:
> > > > > s/intel_ring_buffer/intel_engine
> > > > >
> > > > > On Tue, May 13, 2014 at 03:28:27PM +0200, Daniel Vetter wrote:
> > > > > > On Fri, May 09, 2014 at 01:08:36PM +0100, oscar.mateo at intel.com wrote:
> > > > > > > From: Oscar Mateo <oscar.mateo at intel.com>
> > > > > > >
> > > > > > > In the upcoming patches, we plan to break the correlation
> > > > > > > between engines (a.k.a. rings) and ringbuffers, so it makes
> > > > > > > sense to refactor the code and make the change obvious.
> > > > > > >
> > > > > > > No functional changes.
> > > > > > >
> > > > > > > Signed-off-by: Oscar Mateo <oscar.mateo at intel.com>
> > > > > >
> > > > > > If we rename stuff I'd vote for something close to Bspec
> > > > > > language, like CS. So maybe intel_cs_engine?
> > > >
> > > > Bikeshedding much, are we? :)
> > > > If we want to get closer to bspecish, intel_engine_cs would be better.
> > >
> > > I'm ok with that too ;-)
> > >
> > > > > Also, can we have such patches (and the likes of "drm/i915:
> > > > > for_each_ring") pushed early when everyone is happy with them?
> > > > > They cause constant rebasing pain.
> > > >
> > > > I second that motion!
> > >
> > > Fully agreed - as soon as we have a rough sketch of where we want to
> > > go to I'll pull in the rename. Aside I highly suggest to do the
> > > rename with coccinelle and regenerate it on rebases - that's much less
> > > error-prone than doing it by hand.
> > > -Daniel
> >
> > I propose the following code refactoring at a minimum. Even if I
> > abstract away all the "i915_gem_context.c" and "intel_ringbuffer.c"
> > functionality, and part of "i915_gem_execbuffer.c", to keep changes to
> > legacy code to a minimum, I still think the following changes are good
> > for the overall code:
> >
> > 1) s/intel_ring_buffer/intel_engine_cs
> >
> > Straight renaming: if the actual ring buffers can live either inside
> > the engine/ring (legacy ringbuffer submission) or inside the context
> > (execlists), it doesn't make sense that the engine/ring is called
> > "intel_ring_buffer".
>
> Ack. Like I've said, this is probably best done with coccinelle to cut down
> on potential mistakes. One thing this provokes is the obj->ring pointers we
> use all over the place. But I guess we can fix those up later on once this
> all has settled.
ACK, FIN
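(For reference, the obj->ring fallout Daniel mentions; a sketch with the field list heavily abbreviated, so take the exact layout with a grain of salt:)

struct drm_i915_gem_object {
	/* ... */
	/* After the rename this points at an intel_engine_cs, so the
	 * field name "ring" becomes misleading - a candidate for a later
	 * s/ring/engine/ pass once everything has settled. */
	struct intel_engine_cs *ring;
};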
> > 2) Split the ringbuffers and the rings
> >
> > New struct:
> >
> > +struct intel_ringbuffer {
> > + struct drm_i915_gem_object *obj;
> > + void __iomem *virtual_start;
> > +
> > + u32 head;
> > + u32 tail;
> > + int space;
> > + int size;
> > + int effective_size;
> > +
> > + /** We track the position of the requests in the ring buffer, and
> > + * when each is retired we increment last_retired_head as the GPU
> > + * must have finished processing the request and so we know we
> > + * can advance the ringbuffer up to that position.
> > + *
> > + * last_retired_head is set to -1 after the value is consumed so
> > + * we can detect new retirements.
> > + */
> > + u32 last_retired_head;
> > +};
> >
> > And "struct intel_engine_cs" now groups all these elements into "buffer":
> >
> > - void __iomem *virtual_start;
> > - struct drm_i915_gem_object *obj;
> > - u32 head;
> > - u32 tail;
> > - int space;
> > - int size;
> > - int effective_size;
> > - u32 last_retired_head;
> > + struct intel_ringbuffer buffer;
>
> Is the idea to embed this new intel_ringbuffer struct into the lrc context
> structure so that we can share a bit of the low-level frobbing?
Yes, that is the idea.
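Something like this is what I have in mind, i.e. the same intel_ringbuffer struct can live in either owner (the execlists-side struct name below is made up for illustration, not from a posted patch):

/* Legacy ringbuffer submission: the engine owns its single ring. */
struct intel_engine_cs {
	/* ... */
	struct intel_ringbuffer buffer;	/* or a pointer, see below */
};

/* Execlists: each context carries one ring per engine, so both paths
 * can share the low-level head/tail/space handling. */
struct intel_lrc_engine_state {			/* hypothetical name */
	struct drm_i915_gem_object *ctx_obj;	/* context backing object */
	struct intel_ringbuffer ringbuf;	/* per-context ring buffer */
};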
> I wonder whether we should switch right away to a pointer and leave the
> engine_cs->ring pointer NULL for lrc. That would be good for catching bugs
> where we accidentally mix up old and new styles. If you agree that an
> engine_cs->ring pointer would fit your design this is acked.
> Btw for such code design issues I usually refer to Rusty's api design
> manifesto:
>
> http://sweng.the-davies.net/Home/rustys-api-design-manifesto
>
> Oopsing at runtime is still considered a bad level, but much better than
> silently failing.
>
> Again I recommend to look into coccinelle for the sed part of this.
Ok, "struct intel_ringbuffer *buffer;" it is, then (NULL when we use LRCs).
> > 3) Introduce one context backing object per engine
> >
> > - struct drm_i915_gem_object *obj;
> > + struct {
> > + struct drm_i915_gem_object *obj;
> > + } engine[I915_NUM_RINGS];
> >
> > Legacy code only ever uses engine[RCS], so I can use it everywhere in the
> > existing code.
>
> Unsure about this. If I understand this correctly we only need to be able to
> support multiple backing objects for the same logical ring object (i.e.
> what is tracked by struct i915_hw_context) for the implicit per-filp context 0.
> But our handling of context id 0 is already magical, so we might as well add a
> bit more magic and shovel this array into the filp data structure:
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 108e1ec2fa4b..e34db43dead3 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -1825,7 +1825,9 @@ struct drm_i915_file_private {
> } mm;
> struct idr context_idr;
>
> - struct i915_hw_context *private_default_ctx;
> + /* default context for each ring, NULL if hw doesn't support hw
> contexts
> + * (or fancy new lrcs) on that ring. */
> + struct i915_hw_context *private_default_ctx[I915_NUM_RINGS];
> atomic_t rps_wait_boost;
> };
>
> Of course we need to add an i915_hw_context->engine_cs pointer and we need
> to check that at execbuf to make sure we don't run contexts on the wrong
> engine.
> If we later on decide that we want to expose multiple hw contexts for !RCS to
> userspace we can easily add a bunch of ring flags to the context create ioctl. So
> this doesn't restrict us at all in the features we can support without jumping
> through hoops.
>
> Also if we'd shovel all per-ring lrcs into the same i915_hw_context structure
> then we'd need to rename that and drop the _hw part - it's no longer a 1:1
> correspondence to an actual hw ring/context/lrc/whatever wizzbang thingy.
Ok, so we create I915_NUM_RINGS contexts for the global default contexts, plus I915_NUM_RINGS contexts for every filp, and one render context for every create ioctl.
But the magic stuff is going to pop out in many more places: I cannot idr_alloc/idr_find the per-filp contexts, because they cannot all have ctx->id = DEFAULT_CONTEXT_ID at the same time (I'll have to special-case them by using file_priv->private_default_ctx[RING] to find them). Of course, if you prefer, I can abstract away most of the functionality in i915_gem_context.c and make sure this kind of magic is only done for the LRC path (similar to what you propose to do with intel_ringbuffer.c).
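For illustration, the per-filp lookup would end up roughly like this (helper name is mine, not from a patch):

static struct i915_hw_context *
lookup_context(struct drm_i915_file_private *file_priv,
	       struct intel_engine_cs *engine, u32 ctx_id)
{
	/* Every per-filp default context shares DEFAULT_CONTEXT_ID, so
	 * they cannot all sit in the idr; pick by engine instead. */
	if (ctx_id == DEFAULT_CONTEXT_ID)
		return file_priv->private_default_ctx[engine->id];

	/* Contexts from the create ioctl (render-only) keep using the idr. */
	return idr_find(&file_priv->context_idr, ctx_id);
}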
-- Oscar