[PATCH 3/3] drm/xe/vf: Fix guc_info debugfs for VFs
Daniele Ceraolo Spurio
daniele.ceraolospurio at intel.com
Wed Apr 23 17:40:40 UTC 2025
On 4/23/2025 3:37 AM, Laguna, Lukasz wrote:
>
> On 4/8/2025 23:31, Daniele Ceraolo Spurio wrote:
>> The guc_info debugfs attempta to read a bunch of registers that the VFs
>
> typo: attempta/attempt
>
>> doesn't have access to, so fix it by skipping the reads.
>>
>> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio at intel.com>
>> Cc: Michal Wajdeczko <michal.wajdeczko at intel.com>
>> Cc: Lukasz Laguna <lukasz.laguna at intel.com>
>
> Reviewed-by: Lukasz Laguna <lukasz.laguna at intel.com>
Thanks. I've re-sent this patch individually for CI, since it can be
merged without the other two.
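For reference, the net effect of the patch below is just to wrap the
privileged register reads in a VF check; a condensed sketch of the
resulting xe_guc_print_info() flow (paraphrased from the diff, not
verbatim):

    xe_uc_fw_print(&guc->fw, p);

    /* GUC_STATUS and the SOFT_SCRATCH range are not accessible from a VF */
    if (!IS_SRIOV_VF(gt_to_xe(gt))) {
            fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
            if (!fw_ref)
                    return;

            status = xe_mmio_read32(&gt->mmio, GUC_STATUS);
            /* ... GuC status and scratch register dump, as before ... */

            xe_force_wake_put(gt_to_fw(gt), fw_ref);
    }

    drm_puts(p, "\n");
    xe_guc_ct_print(&guc->ct, p, false);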
Daniele
>
>> ---
>> drivers/gpu/drm/xe/xe_guc.c | 44 +++++++++++++++++++------------------
>> 1 file changed, 23 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c
>> index 38866135c019..17932006619a 100644
>> --- a/drivers/gpu/drm/xe/xe_guc.c
>> +++ b/drivers/gpu/drm/xe/xe_guc.c
>> @@ -1509,30 +1509,32 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p)
>> xe_uc_fw_print(&guc->fw, p);
>> - fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>> - if (!fw_ref)
>> - return;
>> + if (!IS_SRIOV_VF(gt_to_xe(gt))) {
>> + fw_ref = xe_force_wake_get(gt_to_fw(gt), XE_FW_GT);
>> + if (!fw_ref)
>> + return;
>> +
>> + status = xe_mmio_read32(&gt->mmio, GUC_STATUS);
>> +
>> + drm_printf(p, "\nGuC status 0x%08x:\n", status);
>> + drm_printf(p, "\tBootrom status = 0x%x\n",
>> + REG_FIELD_GET(GS_BOOTROM_MASK, status));
>> + drm_printf(p, "\tuKernel status = 0x%x\n",
>> + REG_FIELD_GET(GS_UKERNEL_MASK, status));
>> + drm_printf(p, "\tMIA Core status = 0x%x\n",
>> + REG_FIELD_GET(GS_MIA_MASK, status));
>> + drm_printf(p, "\tLog level = %d\n",
>> + xe_guc_log_get_level(&guc->log));
>> +
>> + drm_puts(p, "\nScratch registers:\n");
>> + for (i = 0; i < SOFT_SCRATCH_COUNT; i++) {
>> + drm_printf(p, "\t%2d: \t0x%x\n",
>> + i, xe_mmio_read32(&gt->mmio, SOFT_SCRATCH(i)));
>> + }
>> - status = xe_mmio_read32(&gt->mmio, GUC_STATUS);
>> -
>> - drm_printf(p, "\nGuC status 0x%08x:\n", status);
>> - drm_printf(p, "\tBootrom status = 0x%x\n",
>> - REG_FIELD_GET(GS_BOOTROM_MASK, status));
>> - drm_printf(p, "\tuKernel status = 0x%x\n",
>> - REG_FIELD_GET(GS_UKERNEL_MASK, status));
>> - drm_printf(p, "\tMIA Core status = 0x%x\n",
>> - REG_FIELD_GET(GS_MIA_MASK, status));
>> - drm_printf(p, "\tLog level = %d\n",
>> - xe_guc_log_get_level(&guc->log));
>> -
>> - drm_puts(p, "\nScratch registers:\n");
>> - for (i = 0; i < SOFT_SCRATCH_COUNT; i++) {
>> - drm_printf(p, "\t%2d: \t0x%x\n",
>> - i, xe_mmio_read32(&gt->mmio, SOFT_SCRATCH(i)));
>> + xe_force_wake_put(gt_to_fw(gt), fw_ref);
>> }
>> - xe_force_wake_put(gt_to_fw(gt), fw_ref);
>> -
>> drm_puts(p, "\n");
>> xe_guc_ct_print(&guc->ct, p, false);