[Intel-gfx] [PATCH 09/10] drm/i915/guc: Preemption! With GuC.
Michał Winiarski
michal.winiarski at intel.com
Thu Oct 5 15:42:35 UTC 2017
On Thu, Oct 05, 2017 at 03:00:31PM +0000, Michal Wajdeczko wrote:
> On Thu, 05 Oct 2017 11:20:04 +0200, Michał Winiarski
> <michal.winiarski at intel.com> wrote:
>
> > Pretty similar to what we have on execlists.
> > We're reusing most of the GEM code, but due to GuC quirks we need a
> > couple of extra bits.
> > Preemption is implemented as a GuC action, and actions can be pretty
> > slow. Because of that, we're using a mutex to serialize them. Since
> > we're requesting preemption from the tasklet, the task of creating a
> > workitem and wrapping it in a GuC action is delegated to a worker.
> >
> > To detect that preemption has finished, we're using an additional
> > piece of the HWSP, and since we're not getting context switch
> > interrupts, we're also adding a user interrupt.
> >
> > The fact that our special preempt context has completed unfortunately
> > doesn't mean that we're ready to submit new work. We also need to wait
> > for GuC to finish its own processing.
> >
> > Cc: Chris Wilson <chris at chris-wilson.co.uk>
> > Cc: Jeff Mcgee <jeff.mcgee at intel.com>
> > Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> > Cc: Oscar Mateo <oscar.mateo at intel.com>
> > Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> > Signed-off-by: Michał Winiarski <michal.winiarski at intel.com>
> > ---
>
> <snip>
>
> > diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
> > index 0f36bba9fc9e..21412ea3dc0e 100644
> > --- a/drivers/gpu/drm/i915/i915_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/i915_guc_submission.c
> > @@ -498,6 +498,88 @@ static void guc_add_request(struct intel_guc *guc,
> >         spin_unlock(&client->wq_lock);
> >  }
> > +#define GUC_PREEMPT_FINISHED 0x1
> > +#define GUC_PREEMPT_BREADCRUMB_DWORDS 0x8
> > +static void inject_preempt_context(struct work_struct *work)
> > +{
> > +        struct intel_engine_cs *engine =
> > +                container_of(work, typeof(*engine), guc_preempt_work);
> > +        struct drm_i915_private *dev_priv = engine->i915;
> > +        struct intel_guc *guc = &engine->i915->guc;
>
> guc = &dev_priv->guc;
>
> > +        struct i915_guc_client *client = guc->preempt_client;
> > +        struct intel_context *ce = &client->owner->engine[engine->id];
> > +        u32 ctx_desc = lower_32_bits(intel_lr_context_descriptor(client->owner,
> > +                                                                 engine));
> > +        u32 *cs = ce->ring->vaddr + ce->ring->tail;
> > +        u32 data[7];
> > +
> > +        if (engine->id == RCS) {
> > +                cs = gen8_emit_ggtt_write_render(
> > +                                intel_hws_preempt_done_address(engine),
> > +                                GUC_PREEMPT_FINISHED, cs);
> > +                *cs++ = MI_USER_INTERRUPT;
> > +                *cs++ = MI_NOOP;
> > +        } else {
> > +                cs = gen8_emit_ggtt_write(
> > +                                intel_hws_preempt_done_address(engine),
> > +                                GUC_PREEMPT_FINISHED, cs);
> > +                *cs++ = MI_USER_INTERRUPT;
> > +                *cs++ = MI_NOOP;
> > +                *cs++ = MI_NOOP;
> > +                *cs++ = MI_NOOP;
> > +        }
>
> Maybe like this:
>
>         cs = gen8_emit_ggtt_write_render(
>                         intel_hws_preempt_done_address(engine),
>                         GUC_PREEMPT_FINISHED, cs);
>         *cs++ = MI_USER_INTERRUPT;
>         *cs++ = MI_NOOP;
>         if (engine->id != RCS) {
>                 *cs++ = MI_NOOP;
>                 *cs++ = MI_NOOP;
>         }
Did you mean:

        if (engine->id == RCS) {
                cs = gen8_emit_ggtt_write_render(...);
        } else {
                cs = gen8_emit_ggtt_write(...);
                *cs++ = MI_NOOP;
                *cs++ = MI_NOOP;
        }
        *cs++ = MI_USER_INTERRUPT;
        *cs++ = MI_NOOP;

?
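
FWIW, the MI_NOOPs are only there to pad both paths out to
GUC_PREEMPT_BREADCRUMB_DWORDS. Assuming the usual gen8 helpers (a 6-dword
PIPE_CONTROL write on the render engine, a 4-dword MI_STORE_DWORD_IMM write
elsewhere - that's my reading of the helpers, not something spelled out in
the patch), the dword budget for the sketch above would be:

        if (engine->id == RCS) {
                /* 6-dword write + interrupt + 1 pad = 8 dwords */
                cs = gen8_emit_ggtt_write_render(
                                intel_hws_preempt_done_address(engine),
                                GUC_PREEMPT_FINISHED, cs);
        } else {
                /* 4-dword write + interrupt + 3 pads = 8 dwords */
                cs = gen8_emit_ggtt_write(
                                intel_hws_preempt_done_address(engine),
                                GUC_PREEMPT_FINISHED, cs);
                *cs++ = MI_NOOP;
                *cs++ = MI_NOOP;
        }
        *cs++ = MI_USER_INTERRUPT;
        *cs++ = MI_NOOP;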
[SNIP]
> > diff --git a/drivers/gpu/drm/i915/intel_uc.h b/drivers/gpu/drm/i915/intel_uc.h
> > index c6c6f8513bbf..8e2818e5741e 100644
> > --- a/drivers/gpu/drm/i915/intel_uc.h
> > +++ b/drivers/gpu/drm/i915/intel_uc.h
> > @@ -124,6 +124,9 @@ struct intel_guc {
> >         /* Kernel context state ggtt offset, first page is shared with GuC */
> >         u32 shared_data_offset;
> > +        void *shared_data_addr;
> > +
> > +        struct workqueue_struct *preempt_wq;
>
> Please pick better place (see my comment in 1/10)
I'll go with a helper for shared_data_* instead.
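Roughly along these lines (just a sketch with a made-up name, assuming we can
derive it from the kernel context on demand instead of caching yet another
field in struct intel_guc; the vaddr could get a similar treatment):

        /* Sketch only: the kernel context state is what's (partially) shared
         * with GuC, so its GGTT offset can be computed when needed. */
        static u32 guc_shared_data_offset(struct intel_guc *guc)
        {
                struct drm_i915_private *dev_priv =
                        container_of(guc, struct drm_i915_private, guc);
                struct i915_gem_context *ctx = dev_priv->kernel_context;

                return guc_ggtt_offset(ctx->engine[RCS].state);
        }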
-Michał
>
> >         /* GuC's FW specific send function */
> >         int (*send)(struct intel_guc *guc, const u32 *data, u32 len);