[Intel-gfx] [PATCH v11 2/7] drm/i915/guc: Add CT size delay helper

Nirmoy Das nirmoy.das at intel.com
Wed Oct 11 09:00:51 UTC 2023


On 10/11/2023 2:02 AM, Jonathan Cavitt wrote:
> Add a helper function to the GuC CT buffer that reports the expected
> time to process all outstanding requests.  As of now, there is no
> functionality to check the number of requests in the buffer, so the helper
> function just reports 2 seconds, or 1ms per request up to the maximum
> number of requests the CT buffer can store.
>
> Suggested-by: John Harrison <john.c.harrison at intel.com>
> Signed-off-by: Jonathan Cavitt <jonathan.cavitt at intel.com>
> Reviewed-by: Andi Shyti <andi.shyti at linux.intel.com>

Reviewed-by: Nirmoy Das <nirmoy.das at intel.com>
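
For reference, the arithmetic behind the returned value: with the 4KB
H2G buffer mentioned in the comment below, (CTB_H2G_BUFFER_SIZE * HZ) /
SZ_2K works out to 2 * HZ, i.e. two seconds' worth of jiffies, which
matches the "bump that to 2s" note above the return statement.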

> ---
>   drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c | 27 +++++++++++++++++++++++
>   drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h |  2 ++
>   2 files changed, 29 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> index c33210ead1ef7..03b616ba4ebb7 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.c
> @@ -103,6 +103,33 @@ enum { CTB_SEND = 0, CTB_RECV = 1 };
>   
>   enum { CTB_OWNER_HOST = 0 };
>   
> +/*
> + * Some H2G commands involve a synchronous response that the driver needs
> + * to wait for. In such cases, a timeout is required to prevent the driver
> + * from waiting forever in the case of an error (either no error response
> + * is defined in the protocol or something has died and requires a reset).
> + * The specific command may be defined as having a time-bound response but
> + * the CT is a queue and that time guarantee only starts from the point
> + * when the command reaches the head of the queue and is processed by GuC.
> + *
> + * Ideally there would be a helper to report the progress of a given
> + * command through the CT. However, that would require a significant
> + * amount of work in the CT layer. In the meantime, provide a reasonable
> + * estimate of the worst case latency it should take for the entire
> + * queue to drain, and therefore how long a caller should wait before
> + * giving up on their request. The current estimate is based on empirical
> + * measurement of a test that fills the buffer with context creation and
> + * destruction requests, as these seem to be the slowest operations.
> + */
> +long intel_guc_ct_max_queue_time_jiffies(void)
> +{
> +	/*
> +	 * A 4KB buffer full of context destroy commands takes a little
> +	 * over a second to process so bump that to 2s to be super safe.
> +	 */
> +	return (CTB_H2G_BUFFER_SIZE * HZ) / SZ_2K;
> +}
> +
>   static void ct_receive_tasklet_func(struct tasklet_struct *t);
>   static void ct_incoming_request_worker_func(struct work_struct *w);
>   
> diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
> index 58e42901ff498..2c4bb9a941be6 100644
> --- a/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
> +++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_ct.h
> @@ -104,6 +104,8 @@ struct intel_guc_ct {
>   #endif
>   };
>   
> +long intel_guc_ct_max_queue_time_jiffies(void);
> +
>   void intel_guc_ct_init_early(struct intel_guc_ct *ct);
>   int intel_guc_ct_init(struct intel_guc_ct *ct);
>   void intel_guc_ct_fini(struct intel_guc_ct *ct);

