[PATCH 6/7] drm/xe/gt: Drop third submission for default context
Tvrtko Ursulin
tvrtko.ursulin at igalia.com
Wed Jul 9 07:34:10 UTC 2025
On 08/07/2025 05:59, Matthew Brost wrote:
> On Mon, Jul 07, 2025 at 09:55:58PM -0500, Lucas De Marchi wrote:
>> On Fri, Jul 04, 2025 at 12:21:50PM +0100, Tvrtko Ursulin wrote:
>>>
>>> On 03/07/2025 23:41, Lucas De Marchi wrote:
>>>> There's no need to submit the nop job again on the first queue. Any
>>>> state needed is already saved when the first LRC is switched out. The
>>>> comment is a little misleading regarding indirect W/As: first of all,
>>>> no indirect W/As are enabled yet; secondly, even after they are,
>>>> there's no need to submit this job again to have their state
>>>> propagated: the indirect W/As will actually run on every LRC switch.
>>>>
>>>> Signed-off-by: Lucas De Marchi <lucas.demarchi at intel.com>
>>>> ---
>>>> drivers/gpu/drm/xe/xe_gt.c | 8 --------
>>>> 1 file changed, 8 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
>>>> index 67425e37c2187..439e7c703ed84 100644
>>>> --- a/drivers/gpu/drm/xe/xe_gt.c
>>>> +++ b/drivers/gpu/drm/xe/xe_gt.c
>>>> @@ -361,14 +361,6 @@ int xe_gt_record_default_lrcs(struct xe_gt *gt)
>>>>  			goto put_nop_q;
>>>>  		}
>>>>  
>>>> -		/* Reload golden LRC to record the effect of any indirect W/A */
>>>> -		err = emit_nop_job(gt, q);
>>>> -		if (err) {
>>>> -			xe_gt_err(gt, "hwe %s: emit_nop_job failed (%pe) guc_id=%u\n",
>>>> -				  hwe->name, ERR_PTR(err), q->guc->id);
>>>> -			goto put_nop_q;
>>>> -		}
>>>> -
>>>>  		xe_map_memcpy_from(xe, default_lrc,
>>>>  				   &q->lrc[0]->bo->vmap,
>>>>  				   xe_lrc_pphwsp_offset(q->lrc[0]),
>>>>
>>>
>>> Wasn't it also racy to memcpy from q's LRC without guaranteeing context
>>> save had completed? I don't think dma_fence_wait in emit_nop_job
>>> guarantees it. If that is so, this patch should actually have a
>>> Fixes: tag added and the commit message adjusted accordingly.
>>
>> I don't think it really fixes anything; it's just pointless to do it.
>> Even if there was a race, it would just save the same information from
>> the first time it executed.
>>
>
> Agree with Lucas, this is pointless yet harmless.
If you guys are certain the way the hardware saves the context gives the
memcpy no opportunity to see a "corrupt" state, then okay. Otherwise I
was thinking of split/incomplete qword writes when the context save
races with the memcpy; if, for example, the hw were to write a qword as
2x dword stores. Or whether the context save makes any guarantees about
the ordering of writes at all; it might all be sprinkled out by
different units randomly. Something along those lines.
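
To illustrate the failure mode I have in mind, here is a minimal
standalone userspace sketch. It is purely hypothetical: it assumes a
writer which stores a qword as two separate dword halves, which may or
may not resemble what the hw actually does during context save.

/*
 * Hypothetical illustration only, not xe code: a 64-bit value written
 * as two 32-bit halves can be observed torn by a racing reader.
 * Build with: gcc -O2 -pthread torn.c -o torn
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static volatile uint32_t ctx[2]; /* one "qword" of context state */

static void *save_context(void *arg) /* stands in for the hw save */
{
	ctx[0] = 0x11111111; /* low dword lands first... */
	ctx[1] = 0x22222222; /* ...high dword some time later */
	return NULL;
}

int main(void)
{
	pthread_t t;
	uint64_t snap;

	pthread_create(&t, NULL, save_context, NULL);

	/* stands in for the memcpy racing the context save */
	snap = (uint64_t)ctx[1] << 32 | ctx[0];

	pthread_join(t, NULL);

	/*
	 * Can print e.g. 0x0000000011111111, a value the writer never
	 * stored as a whole qword.
	 */
	printf("snapshot: 0x%016llx\n", (unsigned long long)snap);
	return 0;
}

Of course, if the hw guarantees qword-atomic writes into the LRC image,
none of this applies.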
Regards,
Tvrtko