[Intel-gfx] [PATCH] drm/i915: Fix dbuf slice mask when turning off all the pipes

Lisovskiy, Stanislav stanislav.lisovskiy at intel.com
Sun May 17 12:12:49 UTC 2020


On Sat, May 16, 2020 at 07:15:42PM +0300, Ville Syrjala wrote:
> From: Ville Syrjälä <ville.syrjala at linux.intel.com>
> 
> The current dbuf slice computation only happens when there are
> active pipes. If we are turning off all the pipes we just leave
> the dbuf slice mask at its previous value, which may be something
> other than BIT(S1). If runtime PM kicks in it will however
> turn off everything but S1. Then on the next atomic commit (if
> the new dbuf slice mask matches the stale value we left behind)
> the code will not turn on the other slices we now need. This will
> lead to underruns as the planes are trying to use a dbuf slice
> that's not powered up.
> 
> To work around this, let's just explicitly set the dbuf slice mask
> to BIT(S1) when we are turning off all the pipes. Really the code
> should just calculate this stuff the same way regardless of whether
> the pipes are on or off, but we're not quite there yet (need a
> bit more work on the dbuf state for that).
> 
> Cc: Chris Wilson <chris at chris-wilson.co.uk>
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy at intel.com>
> Fixes: 3cf43cdc63fb ("drm/i915: Introduce proper dbuf state")
> Signed-off-by: Ville Syrjälä <ville.syrjala at linux.intel.com>
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index a21e36ed1a77..4a523d8b881f 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -4071,6 +4071,22 @@ skl_ddb_get_pipe_allocation_limits(struct drm_i915_private *dev_priv,
>  	*num_active = hweight8(active_pipes);
>  
>  	if (!crtc_state->hw.active) {
> +		/*
> +		 * FIXME hack to make sure we compute this sensibly when
> +		 * turning off all the pipes. Otherwise we leave it at
> +		 * whatever we had previously, and then runtime PM will
> +		 * mess it up by turning off all but S1. Remove this
> +		 * once the dbuf state computation flow becomes sane.
> +		 */
> +		if (active_pipes == 0) {
> +			new_dbuf_state->enabled_slices = BIT(DBUF_S1);
> +
> +			if (old_dbuf_state->enabled_slices != new_dbuf_state->enabled_slices) {
> +				ret = intel_atomic_serialize_global_state(&new_dbuf_state->base);
> +				if (ret)
> +					return ret;
> +			}
> +		}

Rather weird, why didn't we have that issue before?
Just trying to figure out what the reason is - aren't we recovering the last
state of enabled slices from the hw in gen9_dbuf_enable?

As I understand it, you modify enabled_slices in the dbuf global object,
recovering the actual hw state there.
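
For reference, the readout I mean looks roughly like this (quoting from
memory, so the exact function and field names may not match the tree):

static void gen9_dbuf_enable(struct drm_i915_private *dev_priv)
{
	/* read back from the hw which slices are actually powered up */
	dev_priv->enabled_dbuf_slices_mask =
		intel_enabled_dbuf_slices_mask(dev_priv);

	/*
	 * Power up at least DBUF_S1 on top of whatever the hw already
	 * had enabled; which slices we really need is only computed
	 * later, during the atomic commit.
	 */
	gen9_dbuf_slices_update(dev_priv,
				BIT(DBUF_S1) |
				dev_priv->enabled_dbuf_slices_mask);
}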

Also, from your patches I don't see an actual difference in logic from what
was happening before the dbuf state in that sense.
I.e. we were also bailing out in skl_ddb_get_pipe_allocation_limits without
modifying the dbuf state before, and yet there was no issue.

So the reason for the regression should be somewhere else? Or am I missing something?

Also, I guess it would be really nice if we used a single way to get the
slice configuration, i.e. those tables from BSpec and the functionality
around them: we have the skl_compute_dbuf_slices(crtc_state, active_pipes)
call, which is supposed to return the dbuf slice config corresponding to
active_pipes.
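
Something like this self-contained sketch is what I have in mind (the
names and table contents here are made up just to illustrate the
BSpec-table lookup; they are not the actual i915 code or real BSpec
data):

#include <stdint.h>

#define BIT(n) (1u << (n))	/* stand-in for the kernel's BIT() */

enum pipe { PIPE_A, PIPE_B, PIPE_C, I915_MAX_PIPES };
enum dbuf_slice { DBUF_S1, DBUF_S2 };

struct dbuf_slice_conf_entry {
	uint8_t active_pipes;			/* bitmask of active pipes */
	uint8_t dbuf_mask[I915_MAX_PIPES];	/* per-pipe slice mask */
};

/* one such table per platform, transcribed from the BSpec tables */
static const struct dbuf_slice_conf_entry example_allowed_dbufs[] = {
	{ .active_pipes = BIT(PIPE_A),
	  .dbuf_mask = { [PIPE_A] = BIT(DBUF_S1) } },
	{ .active_pipes = BIT(PIPE_A) | BIT(PIPE_B),
	  .dbuf_mask = { [PIPE_A] = BIT(DBUF_S1),
			 [PIPE_B] = BIT(DBUF_S2) } },
	{} /* sentinel */
};

static uint8_t compute_dbuf_slices(enum pipe pipe, uint8_t active_pipes)
{
	const struct dbuf_slice_conf_entry *e;

	/* all pipes off: only the always-on slice should be enabled */
	if (active_pipes == 0)
		return BIT(DBUF_S1);

	for (e = example_allowed_dbufs; e->active_pipes; e++)
		if (e->active_pipes == active_pipes)
			return e->dbuf_mask[pipe];

	/* no matching table entry: fall back to the always-on slice */
	return BIT(DBUF_S1);
}

That way the active_pipes == 0 case your patch special-cases would be
handled in the same place as every other configuration, instead of by a
one-off assignment in skl_ddb_get_pipe_allocation_limits.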

I guess by scattering those kinds of assignments here and there we are just
increasing the probability of more issues happening.

Stan


>  		alloc->start = 0;
>  		alloc->end = 0;
>  		return 0;
> -- 
> 2.26.2
> 

