[PATCH v3.1 6/6] drm/xe/xe_query: Use separate iterator while filling GT list
Cavitt, Jonathan
jonathan.cavitt at intel.com
Tue Jul 1 14:05:21 UTC 2025
-----Original Message-----
From: Roper, Matthew D <matthew.d.roper at intel.com>
Sent: Monday, June 30, 2025 3:15 PM
To: intel-xe at lists.freedesktop.org
Cc: Roper, Matthew D <matthew.d.roper at intel.com>; Cavitt, Jonathan <jonathan.cavitt at intel.com>
Subject: [PATCH v3.1 6/6] drm/xe/xe_query: Use separate iterator while filling GT list
>
> The 'id' value updated by for_each_gt() is the uapi GT ID of the GTs
> being iterated over, and may skip over values if a GT is not present on
> the device. Use a separate iterator for GT list array assignments to
> ensure that the array will be filled properly on future platforms where
> the index in the GT query list may not match the uapi ID.
>
> v2:
> - Include the missing increment of the iterator. (Jonathan)
>
> Cc: Jonathan Cavitt <jonathan.cavitt at intel.com>
> Signed-off-by: Matt Roper <matthew.d.roper at intel.com>
This is just to prevent the query ioctl from changing its behavior
after the update, yes?
If that's the case:
Reviewed-by: Jonathan Cavitt <jonathan.cavitt at intel.com>
-Jonathan Cavitt
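
For context, a minimal standalone sketch of the indexing concern. The
sparse ID values are made-up examples and this is not the driver code;
it only shows why indexing the output array by a possibly sparse uapi
GT ID leaves holes, while a separate dense iterator does not:

#include <stdio.h>

#define MAX_GT 8

int main(void)
{
        /* Hypothetical sparse set of uapi GT IDs present on a device;
         * per the commit message, for_each_gt() may skip values like
         * this when a GT is not present. */
        const int present_ids[] = { 0, 2, 5 };
        const int num_gt = 3;

        int by_id[MAX_GT] = { 0 };   /* indexed by uapi ID: leaves holes */
        int packed[MAX_GT] = { 0 };  /* indexed by separate iter: dense  */
        int iter = 0;

        for (int i = 0; i < num_gt; i++) {
                int id = present_ids[i];

                /* Old scheme: slots 1, 3 and 4 of by_id[] are never
                 * written even though only num_gt = 3 entries are
                 * reported, so a reader of entries 0..2 sees a zeroed
                 * slot 1 and never sees the GT with ID 5. */
                by_id[id] = 100 + id;

                /* New scheme: slots 0..num_gt-1 are all filled. */
                packed[iter++] = 100 + id;
        }

        for (int i = 0; i < num_gt; i++)
                printf("slot %d: by_id=%d packed=%d\n",
                       i, by_id[i], packed[i]);

        return 0;
}
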
> ---
> drivers/gpu/drm/xe/xe_query.c | 27 +++++++++++++++------------
> 1 file changed, 15 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
> index e615b0916217..d517ec9ddcbf 100644
> --- a/drivers/gpu/drm/xe/xe_query.c
> +++ b/drivers/gpu/drm/xe/xe_query.c
> @@ -368,6 +368,7 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
> struct drm_xe_query_gt_list __user *query_ptr =
> u64_to_user_ptr(query->data);
> struct drm_xe_query_gt_list *gt_list;
> + int iter = 0;
> u8 id;
>
> if (query->size == 0) {
> @@ -385,12 +386,12 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
>
> for_each_gt(gt, xe, id) {
> if (xe_gt_is_media_type(gt))
> - gt_list->gt_list[id].type = DRM_XE_QUERY_GT_TYPE_MEDIA;
> + gt_list->gt_list[iter].type = DRM_XE_QUERY_GT_TYPE_MEDIA;
> else
> - gt_list->gt_list[id].type = DRM_XE_QUERY_GT_TYPE_MAIN;
> - gt_list->gt_list[id].tile_id = gt_to_tile(gt)->id;
> - gt_list->gt_list[id].gt_id = gt->info.id;
> - gt_list->gt_list[id].reference_clock = gt->info.reference_clock;
> + gt_list->gt_list[iter].type = DRM_XE_QUERY_GT_TYPE_MAIN;
> + gt_list->gt_list[iter].tile_id = gt_to_tile(gt)->id;
> + gt_list->gt_list[iter].gt_id = gt->info.id;
> + gt_list->gt_list[iter].reference_clock = gt->info.reference_clock;
> /*
> * The mem_regions indexes in the mask below need to
> * directly identify the struct
> @@ -406,19 +407,21 @@ static int query_gt_list(struct xe_device *xe, struct drm_xe_device_query *query
> * assumption.
> */
> if (!IS_DGFX(xe))
> - gt_list->gt_list[id].near_mem_regions = 0x1;
> + gt_list->gt_list[iter].near_mem_regions = 0x1;
> else
> - gt_list->gt_list[id].near_mem_regions =
> + gt_list->gt_list[iter].near_mem_regions =
> BIT(gt_to_tile(gt)->id) << 1;
> - gt_list->gt_list[id].far_mem_regions = xe->info.mem_region_mask ^
> - gt_list->gt_list[id].near_mem_regions;
> + gt_list->gt_list[iter].far_mem_regions = xe->info.mem_region_mask ^
> + gt_list->gt_list[iter].near_mem_regions;
>
> - gt_list->gt_list[id].ip_ver_major =
> + gt_list->gt_list[iter].ip_ver_major =
> REG_FIELD_GET(GMD_ID_ARCH_MASK, gt->info.gmdid);
> - gt_list->gt_list[id].ip_ver_minor =
> + gt_list->gt_list[iter].ip_ver_minor =
> REG_FIELD_GET(GMD_ID_RELEASE_MASK, gt->info.gmdid);
> - gt_list->gt_list[id].ip_ver_rev =
> + gt_list->gt_list[iter].ip_ver_rev =
> REG_FIELD_GET(GMD_ID_REVID, gt->info.gmdid);
> +
> + iter++;
> }
>
> if (copy_to_user(query_ptr, gt_list, size)) {
> --
> 2.49.0
>
>
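
For completeness, a rough userspace-side sketch of consuming the list
after this change: iterate by array position and take the uapi GT ID
from the reported gt_id field rather than assuming index == ID. This is
not taken from any real tool; the render node path is only an example,
the include path may differ per setup, and the uapi struct/ioctl names
(struct drm_xe_device_query, DRM_IOCTL_XE_DEVICE_QUERY,
DRM_XE_DEVICE_QUERY_GT_LIST, num_gt) are as I recall them from
xe_drm.h, so treat the details as an assumption:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

#include <drm/xe_drm.h>

static struct drm_xe_query_gt_list *get_gt_list(int fd)
{
        struct drm_xe_device_query query = {
                .query = DRM_XE_DEVICE_QUERY_GT_LIST,
        };
        struct drm_xe_query_gt_list *gt_list;

        /* First call with size == 0: the kernel reports the size needed. */
        if (ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query))
                return NULL;

        gt_list = calloc(1, query.size);
        if (!gt_list)
                return NULL;

        /* Second call copies the GT list out. */
        query.data = (uintptr_t)gt_list;
        if (ioctl(fd, DRM_IOCTL_XE_DEVICE_QUERY, &query)) {
                free(gt_list);
                return NULL;
        }

        return gt_list;
}

int main(void)
{
        /* Example render node path; adjust for the actual device. */
        int fd = open("/dev/dri/renderD128", O_RDWR);
        struct drm_xe_query_gt_list *gt_list;

        if (fd < 0)
                return 1;

        gt_list = get_gt_list(fd);
        if (!gt_list) {
                close(fd);
                return 1;
        }

        /*
         * Walk the list by array position 0..num_gt-1 and read the uapi
         * GT ID from the gt_id field; the array index itself is not an ID.
         */
        for (unsigned int i = 0; i < gt_list->num_gt; i++)
                printf("entry %u: gt_id=%d tile=%d type=%s\n",
                       i, gt_list->gt_list[i].gt_id,
                       gt_list->gt_list[i].tile_id,
                       gt_list->gt_list[i].type == DRM_XE_QUERY_GT_TYPE_MEDIA ?
                                "media" : "main");

        free(gt_list);
        close(fd);
        return 0;
}
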