[RFC PATCH V3] drm/xe/guc: Use exec queue hints for GT frequency
Souza, Jose
jose.souza at intel.com
Mon Jan 20 13:55:56 UTC 2025
On Mon, 2025-01-20 at 07:00 +0000, Upadhyay, Tejas wrote:
>
> > -----Original Message-----
> > From: Souza, Jose <jose.souza at intel.com>
> > Sent: Tuesday, January 14, 2025 12:22 AM
> > To: intel-xe at lists.freedesktop.org; Upadhyay, Tejas
> > <tejas.upadhyay at intel.com>
> > Cc: Nilawar, Badal <badal.nilawar at intel.com>; De Marchi, Lucas
> > <lucas.demarchi at intel.com>; Belgaumkar, Vinay
> > <vinay.belgaumkar at intel.com>; Mrozek, Michal <michal.mrozek at intel.com>;
> > Morek, Szymon <szymon.morek at intel.com>
> > Subject: Re: [RFC PATCH V3] drm/xe/guc: Use exec queue hints for GT
> > frequency
> >
> > On Fri, 2025-01-10 at 20:11 +0530, Tejas Upadhyay wrote:
> > > Allow user to provide a low latency hint per exec queue. When set, KMD
> > > sends a hint to GuC which results in special handling for this exec
> > > queue. SLPC will ramp the GT frequency aggressively every time it
> > > switches to this exec queue.
> > >
> > > We need to enable the use of SLPC Compute strategy during init, but it
> > > will apply only to exec queues that set this bit during exec queue
> > > creation.
> > >
> > > Improvement with this approach as below:
> > >
> > > Before,
> > >
> > > :~$ NEOReadDebugKeys=1 EnableDirectSubmission=0 clpeak
> > > --kernel-latency
> > > Platform: Intel(R) OpenCL Graphics
> > > Device: Intel(R) Graphics [0xe20b]
> > > Driver version : 24.52.0 (Linux x64)
> > > Compute units : 160
> > > Clock frequency : 2850 MHz
> > > Kernel launch latency : 283.16 us
> > >
> > > After,
> > >
> > > :~$ NEOReadDebugKeys=1 EnableDirectSubmission=0 clpeak
> > > --kernel-latency
> > > Platform: Intel(R) OpenCL Graphics
> > > Device: Intel(R) Graphics [0xe20b]
> > > Driver version : 24.52.0 (Linux x64)
> > > Compute units : 160
> > > Clock frequency : 2850 MHz
> > >
> > > Kernel launch latency : 63.38 us
> > >
> > > UMD will indicate low latency hint with flag as mentioned below,
> > >
> > > * struct drm_xe_exec_queue_create exec_queue_create = {
> > > * .flags = DRM_XE_EXEC_QUEUE_LOW_LATENCY_HINT or 0
> > > * .extensions = 0,
> > > * .vm_id = vm,
> > > * .num_bb_per_exec = 1,
> > > * .num_eng_per_bb = 1,
> > > * .instances = to_user_pointer(&instance),
> > > * };
> > > * ioctl(fd, DRM_IOCTL_XE_EXEC_QUEUE_CREATE, &exec_queue_create);
> > >
> > > Link to UMD PR : https://github.com/intel/compute-runtime/pull/794
> > >
> > > Note: There is outstanding issue on guc side to be not able to switch
> > > to max frequency as per strategy indicated by KMD, so for
> > > experminet/test result hardcoding apporch was taken and passed to guc
> > > as policy. Effort on debugging from guc side is going on in parallel.
> > >
> > > V3:
> > > - Conver user flag to kernel internal flag and use (Oak)
> > > - Support query config for use to check kernel support (Jose)
> > > - Dont need to take runtime pm (Vinay)
> > > V2:
> > > - DRM_XE_EXEC_QUEUE_LOW_LATENCY_HINT 1 is already planned for
> > other hint(Szymon)
> > > - Add motivation to description (Lucas)
> > >
> > > Signed-off-by: Tejas Upadhyay <tejas.upadhyay at intel.com>
> > > ---
> > > drivers/gpu/drm/xe/abi/guc_actions_slpc_abi.h | 3 +++
> > > drivers/gpu/drm/xe/xe_exec_queue.c | 11 ++++++++---
> > > drivers/gpu/drm/xe/xe_exec_queue_types.h | 3 ++-
> > > drivers/gpu/drm/xe/xe_guc_pc.c | 16 ++++++++++++++++
> > > drivers/gpu/drm/xe/xe_guc_submit.c | 7 +++++++
> > > drivers/gpu/drm/xe/xe_query.c | 3 ++-
> > > include/uapi/drm/xe_drm.h | 6 +++++-
> > > 7 files changed, 43 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/abi/guc_actions_slpc_abi.h
> > > b/drivers/gpu/drm/xe/abi/guc_actions_slpc_abi.h
> > > index 85abe4f09ae2..c50075b8270f 100644
> > > --- a/drivers/gpu/drm/xe/abi/guc_actions_slpc_abi.h
> > > +++ b/drivers/gpu/drm/xe/abi/guc_actions_slpc_abi.h
> > > @@ -174,6 +174,9 @@ struct slpc_task_state_data {
> > > };
> > > } __packed;
> > >
> > > +#define SLPC_EXEC_QUEUE_FREQ_REQ_IS_COMPUTE REG_BIT(28)
> > > +#define SLPC_OPTIMIZED_STRATEGY_COMPUTE REG_BIT(0)
> > > +
> > > struct slpc_shared_data_header {
> > > /* Total size in bytes of this shared buffer. */
> > > u32 size;
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c
> > > b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > index 8948f50ee58f..bd33fb318677 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue.c
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue.c
> > > @@ -544,6 +544,7 @@ int xe_exec_queue_create_ioctl(struct drm_device
> > *dev, void *data,
> > > struct drm_xe_engine_class_instance __user *user_eci =
> > > u64_to_user_ptr(args->instances);
> > > struct xe_hw_engine *hwe;
> > > + unsigned long flags;
> > > struct xe_vm *vm;
> > > struct xe_gt *gt;
> > > struct xe_tile *tile;
> > > @@ -553,7 +554,8 @@ int xe_exec_queue_create_ioctl(struct drm_device
> > *dev, void *data,
> > > u32 len;
> > > int err;
> > >
> > > - if (XE_IOCTL_DBG(xe, args->flags) ||
> > > + if (XE_IOCTL_DBG(xe, args->flags &&
> > > + !(args->flags &
> > DRM_XE_EXEC_QUEUE_LOW_LATENCY_HINT)) ||
> > > XE_IOCTL_DBG(xe, args->reserved[0] || args->reserved[1]))
> > > return -EINVAL;
> > >
> > > @@ -570,6 +572,9 @@ int xe_exec_queue_create_ioctl(struct drm_device
> > *dev, void *data,
> > > if (XE_IOCTL_DBG(xe, eci[0].gt_id >= xe->info.gt_count))
> > > return -EINVAL;
> > >
> > > + if (args->flags & DRM_XE_EXEC_QUEUE_LOW_LATENCY_HINT)
> > > + flags |= EXEC_QUEUE_FLAG_LOW_LATENCY;
> > > +
> > > if (eci[0].engine_class == DRM_XE_ENGINE_CLASS_VM_BIND) {
> > > if (XE_IOCTL_DBG(xe, args->width != 1) ||
> > > XE_IOCTL_DBG(xe, args->num_placements != 1) || @@ -
> > 578,8 +583,8
> > > @@ int xe_exec_queue_create_ioctl(struct drm_device *dev, void *data,
> > >
> > > for_each_tile(tile, xe, id) {
> > > struct xe_exec_queue *new;
> > > - u32 flags = EXEC_QUEUE_FLAG_VM;
> > >
> > > + flags |= EXEC_QUEUE_FLAG_VM;
> > > if (id)
> > > flags |=
> > EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD;
> > >
> > > @@ -626,7 +631,7 @@ int xe_exec_queue_create_ioctl(struct drm_device
> > *dev, void *data,
> > > }
> > >
> > > q = xe_exec_queue_create(xe, vm, logical_mask,
> > > - args->width, hwe, 0,
> > > + args->width, hwe, flags,
> > > args->extensions);
> > > up_read(&vm->lock);
> > > xe_vm_put(vm);
> > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > index 5af5419cec7a..30dc129a6b09 100644
> > > --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > @@ -85,7 +85,8 @@ struct xe_exec_queue {
> > > #define EXEC_QUEUE_FLAG_BIND_ENGINE_CHILD BIT(3)
> > > /* kernel exec_queue only, set priority to highest level */
> > > #define EXEC_QUEUE_FLAG_HIGH_PRIORITY BIT(4)
> > > -
> > > +/* flag to indicate low latency hint to guc */
> > > +#define EXEC_QUEUE_FLAG_LOW_LATENCY BIT(5)
> > > /**
> > > * @flags: flags for this exec queue, should statically setup aside from
> > ban
> > > * bit
> > > diff --git a/drivers/gpu/drm/xe/xe_guc_pc.c
> > > b/drivers/gpu/drm/xe/xe_guc_pc.c index df7f130fb663..4efa3485fffc
> > > 100644
> > > --- a/drivers/gpu/drm/xe/xe_guc_pc.c
> > > +++ b/drivers/gpu/drm/xe/xe_guc_pc.c
> > > @@ -992,6 +992,17 @@ static int pc_init_freqs(struct xe_guc_pc *pc)
> > > return ret;
> > > }
> > >
> > > +static int xe_guc_pc_set_strategy(struct xe_guc_pc *pc, u32 val) {
> > > + int ret = 0;
> > > +
> > > + ret = pc_action_set_param(pc,
> > > + SLPC_PARAM_STRATEGIES,
> > > + val);
> > > +
> > > + return ret;
> > > +}
> > > +
> > > /**
> > > * xe_guc_pc_start - Start GuC's Power Conservation component
> > > * @pc: Xe_GuC_PC instance
> > > @@ -1051,6 +1062,11 @@ int xe_guc_pc_start(struct xe_guc_pc *pc)
> > > }
> > >
> > > ret = pc_action_setup_gucrc(pc, GUCRC_FIRMWARE_CONTROL);
> > > + if (ret)
> > > + goto out;
> > > +
> > > + /* Enable SLPC Optimized Strategy for compute */
> > > + ret = xe_guc_pc_set_strategy(pc,
> > SLPC_OPTIMIZED_STRATEGY_COMPUTE);
> > >
> > > out:
> > > xe_force_wake_put(gt_to_fw(gt), fw_ref); diff --git
> > > a/drivers/gpu/drm/xe/xe_guc_submit.c
> > > b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > index 913c74d6e2ae..4c8f69427421 100644
> > > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > @@ -15,6 +15,7 @@
> > > #include <drm/drm_managed.h>
> > >
> > > #include "abi/guc_actions_abi.h"
> > > +#include "abi/guc_actions_slpc_abi.h"
> > > #include "abi/guc_klvs_abi.h"
> > > #include "regs/xe_lrc_layout.h"
> > > #include "xe_assert.h"
> > > @@ -400,6 +401,7 @@ static void
> > > __guc_exec_queue_policy_add_##func(struct exec_queue_policy *policy,
> > > MAKE_EXEC_QUEUE_POLICY_ADD(execution_quantum,
> > EXECUTION_QUANTUM)
> > > MAKE_EXEC_QUEUE_POLICY_ADD(preemption_timeout,
> > PREEMPTION_TIMEOUT)
> > > MAKE_EXEC_QUEUE_POLICY_ADD(priority, SCHEDULING_PRIORITY)
> > > +MAKE_EXEC_QUEUE_POLICY_ADD(slpc_ctx_freq_req,
> > SLPM_GT_FREQUENCY)
> > > #undef MAKE_EXEC_QUEUE_POLICY_ADD
> > >
> > > static const int xe_exec_queue_prio_to_guc[] = { @@ -414,14 +416,19
> > > @@ static void init_policies(struct xe_guc *guc, struct xe_exec_queue *q)
> > > struct exec_queue_policy policy;
> > > enum xe_exec_queue_priority prio = q->sched_props.priority;
> > > u32 timeslice_us = q->sched_props.timeslice_us;
> > > + u32 slpc_ctx_freq_req = 0;
> > > u32 preempt_timeout_us = q->sched_props.preempt_timeout_us;
> > >
> > > xe_gt_assert(guc_to_gt(guc), exec_queue_registered(q));
> > >
> > > + if (q->flags & EXEC_QUEUE_FLAG_LOW_LATENCY)
> > > + slpc_ctx_freq_req |=
> > SLPC_EXEC_QUEUE_FREQ_REQ_IS_COMPUTE;
> > > +
> > > __guc_exec_queue_policy_start_klv(&policy, q->guc->id);
> > > __guc_exec_queue_policy_add_priority(&policy,
> > xe_exec_queue_prio_to_guc[prio]);
> > > __guc_exec_queue_policy_add_execution_quantum(&policy,
> > timeslice_us);
> > > __guc_exec_queue_policy_add_preemption_timeout(&policy,
> > > preempt_timeout_us);
> > > + __guc_exec_queue_policy_add_slpc_ctx_freq_req(&policy,
> > > +slpc_ctx_freq_req);
> > >
> > > xe_guc_ct_send(&guc->ct, (u32 *)&policy.h2g,
> > > __guc_exec_queue_policy_action_size(&policy), 0, 0); diff
> > > --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
> > > index c059639613f7..aeea017198f2 100644
> > > --- a/drivers/gpu/drm/xe/xe_query.c
> > > +++ b/drivers/gpu/drm/xe/xe_query.c
> > > @@ -334,7 +334,8 @@ static int query_config(struct xe_device *xe, struct
> > drm_xe_device_query *query)
> > > xe->info.devid | (xe->info.revid << 16);
> > > if (xe_device_get_root_tile(xe)->mem.vram.usable_size)
> > > config->info[DRM_XE_QUERY_CONFIG_FLAGS] =
> > > - DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM;
> > > + DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM |
> > > +
> > DRM_XE_QUERY_CONFIG_FLAG_EQUEUE_HAS_LOW_LATENCY;
> > > config->info[DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT] =
> > > xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K ? SZ_64K :
> > SZ_4K;
> > > config->info[DRM_XE_QUERY_CONFIG_VA_BITS] = xe->info.va_bits;
> > diff
> > > --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h index
> > > f62689ca861a..fd8755f7de78 100644
> > > --- a/include/uapi/drm/xe_drm.h
> > > +++ b/include/uapi/drm/xe_drm.h
> > > @@ -393,6 +393,8 @@ struct drm_xe_query_mem_regions {
> > > *
> > > * - %DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM - Flag is set if the device
> > > * has usable VRAM
> > > + * - %DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM - Flag is set if the
> > device
> > > + * has usable VRAM
> >
> > probably the intention was not to duplicate
> > DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM comment
> >
> > > * - %DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT - Minimal memory
> > alignment
> > > * required by this device, typically SZ_4K or SZ_64K
> > > * - %DRM_XE_QUERY_CONFIG_VA_BITS - Maximum bits of a virtual
> > > address @@ -409,6 +411,7 @@ struct drm_xe_query_config {
> > > #define DRM_XE_QUERY_CONFIG_REV_AND_DEVICE_ID 0
> > > #define DRM_XE_QUERY_CONFIG_FLAGS 1
> > > #define DRM_XE_QUERY_CONFIG_FLAG_HAS_VRAM (1 << 0)
> > > + #define
> > DRM_XE_QUERY_CONFIG_FLAG_EQUEUE_HAS_LOW_LATENCY (1 << 1)
> >
> > *E*QUEUE
>
> Can you please make it more clear what this means?
In no place in uapi are shortening exec_queue by equeue, so please don't do it here.
>
> Tejas
> >
> > > #define DRM_XE_QUERY_CONFIG_MIN_ALIGNMENT 2
> > > #define DRM_XE_QUERY_CONFIG_VA_BITS 3
> > > #define DRM_XE_QUERY_CONFIG_MAX_EXEC_QUEUE_PRIORITY 4
> > > @@ -1097,6 +1100,7 @@ struct drm_xe_vm_bind {
> > > * .engine_class = DRM_XE_ENGINE_CLASS_RENDER,
> > > * };
> > > * struct drm_xe_exec_queue_create exec_queue_create = {
> > > + * .flags = DRM_XE_EXEC_QUEUE_LOW_LATENCY_HINT or 0
> > > * .extensions = 0,
> > > * .vm_id = vm,
> > > * .num_bb_per_exec = 1,
> > > @@ -1110,7 +1114,6 @@ struct drm_xe_exec_queue_create {
> > > #define DRM_XE_EXEC_QUEUE_EXTENSION_SET_PROPERTY 0
> > > #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_PRIORITY 0
> > > #define DRM_XE_EXEC_QUEUE_SET_PROPERTY_TIMESLICE 1
> > > -
> > > /** @extensions: Pointer to the first extension struct, if any */
> > > __u64 extensions;
> > >
> > > @@ -1123,6 +1126,7 @@ struct drm_xe_exec_queue_create {
> > > /** @vm_id: VM to use for this exec queue */
> > > __u32 vm_id;
> > >
> > > +#define DRM_XE_EXEC_QUEUE_LOW_LATENCY_HINT (0x1 << 1)
> >
> > follow the other bits and drop the '0x' also it needs to start on 0, so it should
> > be (1 << 0)
> >
> > > /** @flags: MBZ */
> >
> > remove the mbz
> >
> > > __u32 flags;
> > >
>
More information about the Intel-xe
mailing list