[Intel-gfx] [PATCH v4 4/6] drm/i915: consolidate LRC mode HWSP setup & teardown

Chris Wilson chris at chris-wilson.co.uk
Sat Jan 30 03:11:31 PST 2016


On Fri, Jan 29, 2016 at 07:19:29PM +0000, Dave Gordon wrote:
> In legacy ringbuffer mode, the HWSP is a separate GEM object with its
> own pinning and reference counts. In LRC mode, however, it's not;
> instead it's part of the default context object. The LRC-mode setup &
> teardown code therefore needs to handle this specially; the presence
> of the two bugs fixed in this patchset suggests that this code is not
> well-understood or maintained at present.
> 
> So, this patch:
>     moves the (newly-fixed!) LRC-mode HWSP teardown code to its own
>         (trivial) function lrc_teardown_hardware_status_page(), and
>     changes the call signature of lrc_setup_hardware_status_page()
>         to match
> so that all knowledge of this special arrangement is local to these
> two functions.

On the other hand, you now hide that information, which makes the
relationship between engine init/fini and the default context even
more opaque.

(There is still zero reason why we use the first page of an unused context
object as the HWS page.)
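
(For anyone following along, the special arrangement in question is
roughly the following; a sketch of the current setup path, not
verbatim, where dctx_obj is the default context's state object for
this engine and LRC_PPHWSP_PN is the page index of the per-process
HWSP within the context image:)

	struct page *page;

	/* In LRC mode the HWSP is carved out of the default context
	 * object rather than being a separate GEM object. */
	ring->status_page.gfx_addr = i915_gem_obj_ggtt_offset(dctx_obj) +
				     LRC_PPHWSP_PN * PAGE_SIZE;
	page = i915_gem_object_get_page(dctx_obj, LRC_PPHWSP_PN);
	ring->status_page.page_addr = kmap(page);
	ring->status_page.obj = dctx_obj;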
 
> It will also help with efforts in progress to eliminate special
> handling of the default (kernel) context elsewhere in LRC code :)

Non sequitur.
 
> v3: Rebased
> 
> Signed-off-by: Dave Gordon <david.s.gordon at intel.com>

The patch is an improvement, so
Reviewed-by: Chris Wilson <chris at chris-wilson.co.uk>

>  drivers/gpu/drm/i915/intel_lrc.c | 41 +++++++++++++++++++++++-----------------
>  1 file changed, 24 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
> index 0fa2497..ff38e57 100644
> --- a/drivers/gpu/drm/i915/intel_lrc.c
> +++ b/drivers/gpu/drm/i915/intel_lrc.c
> @@ -227,9 +227,8 @@ enum {
>  
>  static int intel_lr_context_pin(struct intel_context *ctx,
>  				struct intel_engine_cs *engine);
> -static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
> -		struct drm_i915_gem_object *default_ctx_obj);
> -
> +static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring);
> +static void lrc_teardown_hardware_status_page(struct intel_engine_cs *ring);
>  
>  /**
>   * intel_sanitize_enable_execlists() - sanitize i915.enable_execlists
> @@ -1555,8 +1554,7 @@ static int gen8_init_common_ring(struct intel_engine_cs *ring)
>  	struct drm_i915_private *dev_priv = dev->dev_private;
>  	u8 next_context_status_buffer_hw;
>  
> -	lrc_setup_hardware_status_page(ring,
> -				dev_priv->kernel_context->engine[ring->id].state);
> +	lrc_setup_hardware_status_page(ring);
>  
>  	I915_WRITE_IMR(ring, ~(ring->irq_enable_mask | ring->irq_keep_mask));
>  	I915_WRITE(RING_HWSTAM(ring->mmio_base), 0xffffffff);
> @@ -2005,10 +2003,7 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *ring)
>  	i915_cmd_parser_fini_ring(ring);
>  	i915_gem_batch_pool_fini(&ring->batch_pool);
>  
> -	if (ring->status_page.obj) {
> -		kunmap(kmap_to_page(ring->status_page.page_addr));
> -		ring->status_page.obj = NULL;
> -	}
> +	lrc_teardown_hardware_status_page(ring);
>  
>  	ring->disable_lite_restore_wa = false;
>  	ring->ctx_desc_template = 0;
> @@ -2500,24 +2495,36 @@ uint32_t intel_lr_context_size(struct intel_engine_cs *ring)
>  	return ret;
>  }
>  
> -static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
> -		struct drm_i915_gem_object *default_ctx_obj)
> +static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring)
>  {
> -	struct drm_i915_private *dev_priv = ring->dev->dev_private;
> +	struct drm_i915_private *dev_priv = to_i915(ring->dev);
> +	struct intel_context *dctx = dev_priv->kernel_context;
> +	struct drm_i915_gem_object *dctx_obj = dctx->engine[ring->id].state;
> +	u64 dctx_addr = i915_gem_obj_ggtt_offset(dctx_obj);

This is known via ctx->engine[id].lrc_vma. That is actually quite
important, as it stresses that the caller must have acquired the
context pin for us.
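
i.e. something like this (an untested sketch; it assumes the default
context is already pinned for this engine, so that lrc_vma is valid):

	struct intel_context *dctx = dev_priv->kernel_context;
	/* valid only while the context is pinned for us by the caller */
	u64 dctx_addr = dctx->engine[ring->id].lrc_vma->node.start;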
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
