[PATCH v2 2/6] drm/xe: Use iosys_map helpers for WA BB emission
Lucas De Marchi
lucas.demarchi at intel.com
Mon Jun 2 15:50:59 UTC 2025
On Mon, Jun 02, 2025 at 04:22:47PM +0100, Tvrtko Ursulin wrote:
>
>On 02/06/2025 15:56, Lucas De Marchi wrote:
>>On Mon, Jun 02, 2025 at 12:19:52PM +0100, Tvrtko Ursulin wrote:
>>>To properly support discrete GPUs on all platforms it is required to use
>>>the iosys_map helpers.
>>>
>>>To fix this, we emit the WA BB into an on-stack buffer and copy it over
>>>using xe_map_memcpy_to().
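(IIUC that boils down to building the dwords in a local array and then
pushing them into the BO through the iosys_map helper in one go, roughly
like the sketch below. This is not the actual hunk from the patch:
WA_BB_MAX_DWORDS is just a placeholder for a suitably sized array, and the
middle dwords are the ones already emitted by xe_lrc_setup_utilization().)

	static void xe_lrc_setup_utilization(struct xe_lrc *lrc)
	{
		u32 buf[WA_BB_MAX_DWORDS]; /* placeholder: must fit every dword below */
		u32 *cmd = buf;

		*cmd++ = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET;
		*cmd++ = ENGINE_ID(0).addr;
		/* ... remaining utilization WA BB dwords ... */
		*cmd++ = MI_BATCH_BUFFER_END;

		/* a single copy works for both system and io memory backed BOs */
		xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bb_per_ctx_bo->vmap, 0,
				 buf, (cmd - buf) * sizeof(*cmd));

		xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR,
				     xe_bo_ggtt_addr(lrc->bb_per_ctx_bo) | 1);
	}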
>>
>>For https://lore.kernel.org/intel-xe/20250523-wa-bb-cmds-v1-0-40b337f71bcd at intel.com
>>we will extend the WA BB to other things (and that is the first
>>additional user). I don't think we can keep it on stack. For my next
>>version I was adding the patch below to allocate the buffer. Let me know
>>what you think.
>
>Works for me I think.
>
>I mean I don't exactly see yet how you will make it work for multiple
>users adding stuff to the same wa bb, like will you consider a shared
This is done via configfs, which is read on probe only... it's mainly
intended for testing new workarounds and tunings before changing
anything in xe.ko.
>buffer passed down from the level above or what, but in any case I can
>probably adapt my series quite easily.
>
>In summary, you suggest my series waits for yours to land first?
Let me post this fix separately.
Lucas De Marchi
>
>Regards,
>
>Tvrtko
>
>>commit 28dbdf201133d92b0e0b0a0139eae7fe5eb9e33b
>>Author: Lucas De Marchi <lucas.demarchi at intel.com>
>>Date: Fri May 30 16:29:51 2025 -0700
>>
>> drm/xe/lrc: Use a temporary buffer for WA BB
>>
>> In case the BO is in iomem, we can't simply take the vaddr and write to
>> it. Instead, prepare a separate buffer that is later copied into io
>> memory. Right now it's just a few words that could be using
>> xe_map_write32(), but the intention is to grow the WA BB for other
>> uses.
>>
>> Fixes: 82b98cadb01f ("drm/xe: Add WA BB to capture active context utilization")
>> Cc: Umesh Nerlige Ramappa <umesh.nerlige.ramappa at intel.com>
>> Signed-off-by: Lucas De Marchi <lucas.demarchi at intel.com>
>>
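(Side note: the xe_map_write32() alternative mentioned in the commit message
would be roughly the snippet below, one helper call per dword while bumping
the map with iosys_map_incr(). That works for the current handful of dwords
but doesn't scale nicely once the WA BB grows, hence the temporary buffer.)

	struct iosys_map map = lrc->bb_per_ctx_bo->vmap;

	/* emit each dword through the map, advancing it as we go */
	xe_map_write32(gt_to_xe(lrc->gt), &map,
		       MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET);
	iosys_map_incr(&map, sizeof(u32));
	xe_map_write32(gt_to_xe(lrc->gt), &map, ENGINE_ID(0).addr);
	iosys_map_incr(&map, sizeof(u32));
	/* ... and so on, ending with MI_BATCH_BUFFER_END */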
>>diff --git a/drivers/gpu/drm/xe/xe_lrc.c b/drivers/gpu/drm/xe/xe_lrc.c
>>index 63d74e27f54cf..1b835d7efca2b 100644
>>--- a/drivers/gpu/drm/xe/xe_lrc.c
>>+++ b/drivers/gpu/drm/xe/xe_lrc.c
>>@@ -941,11 +941,18 @@ static void xe_lrc_finish(struct xe_lrc *lrc)
>> * store it in the PPHSWP.
>> */
>> #define CONTEXT_ACTIVE 1ULL
>>-static void xe_lrc_setup_utilization(struct xe_lrc *lrc)
>>+static int xe_lrc_setup_utilization(struct xe_lrc *lrc)
>> {
>>- u32 *cmd;
>>+ u32 *cmd, *buf = NULL;
>>
>>- cmd = lrc->bb_per_ctx_bo->vmap.vaddr;
>>+ if (lrc->bb_per_ctx_bo->vmap.is_iomem) {
>>+ buf = kmalloc(lrc->bb_per_ctx_bo->size, GFP_KERNEL);
>>+ if (!buf)
>>+ return -ENOMEM;
>>+ cmd = buf;
>>+ } else {
>>+ cmd = lrc->bb_per_ctx_bo->vmap.vaddr;
>>+ }
>>
>> *cmd++ = MI_STORE_REGISTER_MEM | MI_SRM_USE_GGTT | MI_SRM_ADD_CS_OFFSET;
>> *cmd++ = ENGINE_ID(0).addr;
>>@@ -966,9 +973,16 @@ static void xe_lrc_setup_utilization(struct xe_lrc *lrc)
>>
>> *cmd++ = MI_BATCH_BUFFER_END;
>>
>>+ if (buf) {
>>+ xe_map_memcpy_to(gt_to_xe(lrc->gt), &lrc->bb_per_ctx_bo->vmap, 0,
>>+ buf, (cmd - buf) * sizeof(*cmd));
>>+ kfree(buf);
>>+ }
>>+
>> xe_lrc_write_ctx_reg(lrc, CTX_BB_PER_CTX_PTR,
>> xe_bo_ggtt_addr(lrc->bb_per_ctx_bo) | 1);
>>
>>+ return 0;
>> }
>>
>> #define PVC_CTX_ASID (0x2e + 1)
>>@@ -1125,7 +1139,9 @@ static int xe_lrc_init(struct xe_lrc *lrc, struct xe_hw_engine *hwe,
>> map = __xe_lrc_start_seqno_map(lrc);
>> xe_map_write32(lrc_to_xe(lrc), &map, lrc->fence_ctx.next_seqno - 1);
>>
>>- xe_lrc_setup_utilization(lrc);
>>+ err = xe_lrc_setup_utilization(lrc);
>>+ if (err)
>>+ goto err_lrc_finish;
>>
>> return 0;
>>
>>Lucas De Marchi
>