[Intel-gfx] [PATCH v3 2/9] drm/i915/skl+: use linetime latency instead of ddb size

Paulo Zanoni paulo.r.zanoni at intel.com
Mon Sep 19 18:19:12 UTC 2016


On Fri, 2016-09-09 at 13:30 +0530, Kumar, Mahesh wrote:
> From: Mahesh Kumar <mahesh1.kumar at intel.com>
> 
> This patch makes changes to use linetime latency instead of the
> allocated DDB size during plane watermark calculation in the switch
> case. This is required to implement the new DDB allocation algorithm.
> 
> In the new algorithm, DDB is allocated based on the WM values, so the
> number of DDB blocks is not yet available at WM calculation time.
> The SV/HW team therefore suggested using this "linetime latency" in
> the switch case that selects the WM blocks.
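
As a concrete example of what this linetime latency means (assuming
skl_pipe_pixel_rate() returns the pipe pixel rate in kHz, which the
factor of 1000 in the hunk below implies): a 1920-pixel-wide plane on
a 148500 kHz (1080p60) pipe gives

	linetime_us = DIV_ROUND_UP(1920 * 1000, 148500) = 13

so one line takes roughly 13 us to scan out, and min(method1, method2)
is picked whenever the latency level being evaluated covers at least
one full line of scanout.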

Why is this not part of BSpec? If there's some problem with the current
algorithm and we need a new one, why is it not part of our spec?

> 
> Changes since v1:
>  - Rebase on top of Paulo's patch series
> 
> Signed-off-by: Mahesh Kumar <mahesh1.kumar at intel.com>
> ---
>  drivers/gpu/drm/i915/intel_pm.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
> index 3fdec4d..cfd9b7d1 100644
> --- a/drivers/gpu/drm/i915/intel_pm.c
> +++ b/drivers/gpu/drm/i915/intel_pm.c
> @@ -3622,10 +3622,15 @@ static int skl_compute_plane_wm(const struct drm_i915_private *dev_priv,
>  	    fb->modifier[0] == I915_FORMAT_MOD_Yf_TILED) {
>  		selected_result = max(method2, y_tile_minimum);
>  	} else {
> +		uint32_t linetime_us = 0;
> +
> +		linetime_us = DIV_ROUND_UP(width * 1000,
> +				skl_pipe_pixel_rate(cstate));
> +
> 		if ((cpp * cstate->base.adjusted_mode.crtc_htotal / 512 < 1) &&
>  		    (plane_bytes_per_line / 512 < 1))
>  			selected_result = method2;
> -		else if ((ddb_allocation / plane_blocks_per_line) >= 1)
> +		else if (latency >= linetime_us)
>  			selected_result = min(method1, method2);
>  		else
>  			selected_result = method1;
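
For anyone reading along, here is a minimal standalone sketch of the
selection logic as it ends up after this hunk. The input values are
made up for illustration; in the driver they come from cstate, fb and
the method1/method2 watermarks computed earlier in
skl_compute_plane_wm():

#include <stdio.h>
#include <stdint.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

static uint32_t min_u32(uint32_t a, uint32_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* hypothetical inputs, not real hardware state */
	uint32_t width = 1920;			/* plane width in pixels */
	uint32_t pixel_rate_khz = 148500;	/* pipe pixel rate (1080p60) */
	uint32_t latency = 15;			/* WM latency level, in us */
	uint32_t method1 = 40, method2 = 25;	/* precomputed WM methods */
	uint32_t cpp = 4, htotal = 2200;	/* bytes per pixel, htotal */
	uint32_t plane_bytes_per_line = width * cpp;
	uint32_t selected_result;

	/* time to scan out one line of the plane, in us */
	uint32_t linetime_us = DIV_ROUND_UP(width * 1000, pixel_rate_khz);

	if ((cpp * htotal / 512 < 1) && (plane_bytes_per_line / 512 < 1))
		selected_result = method2;
	else if (latency >= linetime_us)
		selected_result = min_u32(method1, method2);
	else
		selected_result = method1;

	/* prints: linetime_us=13 selected_result=25 */
	printf("linetime_us=%u selected_result=%u\n",
	       linetime_us, selected_result);
	return 0;
}

Presumably the intent matches the old
(ddb_allocation / plane_blocks_per_line) >= 1 check: both ask whether
at least one line's worth of data can be buffered, only now expressed
in time rather than in DDB blocks, since the block count isn't known
yet under the new allocation scheme.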

