[PATCH v3 6/6] drm/xe/xe_query: Use separate iterator while filling GT list
From: Matt Roper <matthew.d.roper at intel.com>
Date: Mon Jun 30 22:14:02 UTC 2025
On Mon, Jun 30, 2025 at 03:08:45PM -0700, Cavitt, Jonathan wrote:
> -----Original Message-----
> From: Intel-xe <intel-xe-bounces at lists.freedesktop.org> On Behalf Of Matt Roper
> Sent: Monday, June 30, 2025 10:35 AM
> To: intel-xe at lists.freedesktop.org
> Cc: Roper, Matthew D <matthew.d.roper at intel.com>
> Subject: [PATCH v3 6/6] drm/xe/xe_query: Use separate iterator while filling GT list
> >
> > The 'id' value updated by for_each_gt() is the uapi GT ID of the GTs
> > being iterated over, and may skip over values if a GT is not present on
> > the device. Use a separate iterator for GT list array assignments to
> > ensure that the array will be filled properly on future platforms where
> > the index in the GT query list may not match the uapi ID.
> >
> > Signed-off-by: Matt Roper <matthew.d.roper at intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_query.c | 25 +++++++++++++------------
> > 1 file changed, 13 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
> > index e615b0916217..c3e0a22f09f0 100644
> > --- a/drivers/gpu/drm/xe/xe_query.c
> > +++ b/drivers/gpu/drm/xe/xe_query.c
> > @@ -368,6 +368,7 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
> > struct drm_xe_query_gt_list __user *query_ptr =
> > u64_to_user_ptr(query->data);
> > struct drm_xe_query_gt_list *gt_list;
> > + int iter = 0;
>
> It doesn't look like iter is being updated below. Is it expected to always be zero?
> -Jonathan Cavitt
No, looks like I forgot to 'git add' the final change of the patch; I'll
send an updated version shortly.
Matt
>
> > u8 id;
> >
> > if (query->size == 0) {
> > @@ -385,12 +386,12 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
> >
> > for_each_gt(gt, xe, id) {
> > if (xe_gt_is_media_type(gt))
> > - gt_list->gt_list[id].type = DRM_XE_QUERY_GT_TYPE_MEDIA;
> > + gt_list->gt_list[iter].type = DRM_XE_QUERY_GT_TYPE_MEDIA;
> > else
> > - gt_list->gt_list[id].type = DRM_XE_QUERY_GT_TYPE_MAIN;
> > - gt_list->gt_list[id].tile_id = gt_to_tile(gt)->id;
> > - gt_list->gt_list[id].gt_id = gt->info.id;
> > - gt_list->gt_list[id].reference_clock = gt->info.reference_clock;
> > + gt_list->gt_list[iter].type = DRM_XE_QUERY_GT_TYPE_MAIN;
> > + gt_list->gt_list[iter].tile_id = gt_to_tile(gt)->id;
> > + gt_list->gt_list[iter].gt_id = gt->info.id;
> > + gt_list->gt_list[iter].reference_clock = gt->info.reference_clock;
> > /*
> > * The mem_regions indexes in the mask below need to
> > * directly identify the struct
> > @@ -406,18 +407,18 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
> > * assumption.
> > */
> > if (!IS_DGFX(xe))
> > - gt_list->gt_list[id].near_mem_regions = 0x1;
> > + gt_list->gt_list[iter].near_mem_regions = 0x1;
> > else
> > - gt_list->gt_list[id].near_mem_regions =
> > + gt_list->gt_list[iter].near_mem_regions =
> > BIT(gt_to_tile(gt)->id) << 1;
> > - gt_list->gt_list[id].far_mem_regions = xe->info.mem_region_mask ^
> > - gt_list->gt_list[id].near_mem_regions;
> > + gt_list->gt_list[iter].far_mem_regions = xe->info.mem_region_mask ^
> > + gt_list->gt_list[iter].near_mem_regions;
> >
> > - gt_list->gt_list[id].ip_ver_major =
> > + gt_list->gt_list[iter].ip_ver_major =
> > REG_FIELD_GET(GMD_ID_ARCH_MASK, gt->info.gmdid);
> > - gt_list->gt_list[id].ip_ver_minor =
> > + gt_list->gt_list[iter].ip_ver_minor =
> > REG_FIELD_GET(GMD_ID_RELEASE_MASK, gt->info.gmdid);
> > - gt_list->gt_list[id].ip_ver_rev =
> > + gt_list->gt_list[iter].ip_ver_rev =
> > REG_FIELD_GET(GMD_ID_REVID, gt->info.gmdid);
> > }
> >
> > --
> > 2.49.0
> >
> >
--
Matt Roper
Graphics Software Engineer
Linux GPU Platform Enablement
Intel Corporation