[PATCH v2] drm/xe: flush gtt before signalling user fence on all engines

Nirmoy Das nirmoy.das at linux.intel.com
Tue May 28 12:41:18 UTC 2024


On 5/28/2024 1:35 PM, Andrzej Hajda wrote:
> On 22.05.2024 09:27, Andrzej Hajda wrote:
>> Tests show that user fence signalling requires a write barrier of some
>> kind; otherwise not all writes performed by the workload will be visible
>> to userspace. This is already done for the render and compute engines;
>> we need it for the rest as well: video, gsc, and copy.
>>
>> v2: added gsc and copy engines, added fixes and r-b tags
>>
>> Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1488
>> Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
>> Signed-off-by: Andrzej Hajda <andrzej.hajda at intel.com>
>> Reviewed-by: Matthew Brost <matthew.brost at intel.com>
>> ---
>
> Gentle ping.
> The patch is reviewed; it just needs merging :)

Merged to drm-xe-next.


Thanks,

Nirmoy

>
> Regards
> Andrzej
>
>> Changes in v2:
>> - Added fixes and r-b tags
>> - Link to v1: https://lore.kernel.org/r/20240521-xu_flush_vcs_before_ufence-v1-1-ded38b56c8c9@intel.com
>> ---
>> Matthew,
>>
>> I have extended patch to copy and gsc engines. I have kept your r-b,
>> since the change is similar, I hope it is OK.
>> ---
>>   drivers/gpu/drm/xe/xe_ring_ops.c | 8 ++++----
>>   1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
>> index a3ca718456f6..a46a1257a24f 100644
>> --- a/drivers/gpu/drm/xe/xe_ring_ops.c
>> +++ b/drivers/gpu/drm/xe/xe_ring_ops.c
>> @@ -234,13 +234,13 @@ static void __emit_job_gen12_simple(struct xe_sched_job *job, struct xe_lrc *lrc
>>  
>>      i = emit_bb_start(batch_addr, ppgtt_flag, dw, i);
>>  
>> +    i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i);
>> +
>>      if (job->user_fence.used)
>>          i = emit_store_imm_ppgtt_posted(job->user_fence.addr,
>>                          job->user_fence.value,
>>                          dw, i);
>>  
>> -    i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i);
>> -
>>      i = emit_user_interrupt(dw, i);
>>  
>>      xe_gt_assert(gt, i <= MAX_JOB_SIZE_DW);
>> @@ -293,13 +293,13 @@ static void __emit_job_gen12_video(struct xe_sched_job *job, struct xe_lrc *lrc,
>>  
>>      i = emit_bb_start(batch_addr, ppgtt_flag, dw, i);
>>  
>> +    i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i);
>> +
>>      if (job->user_fence.used)
>>          i = emit_store_imm_ppgtt_posted(job->user_fence.addr,
>>                          job->user_fence.value,
>>                          dw, i);
>>  
>> -    i = emit_flush_imm_ggtt(xe_lrc_seqno_ggtt_addr(lrc), seqno, false, dw, i);
>> -
>>      i = emit_user_interrupt(dw, i);
>>  
>>      xe_gt_assert(gt, i <= MAX_JOB_SIZE_DW);
>>
>> ---
>> base-commit: 188ced1e0ff892f0948f20480e2e0122380ae46d
>> change-id: 20240521-xu_flush_vcs_before_ufence-a7b45d94cf33
>>
>> Best regards,
>
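[Editor's note: the essence of the quoted patch is an ordering change: the GGTT flush is moved ahead of the user-fence store, so that it doubles as a write barrier for the workload's writes before userspace is told the fence has signalled. The following is a minimal, self-contained sketch of that ordering, not the real xe driver code; the helper names in the comments mirror the diff, but `emit()`, the opcodes, and `emit_job_fixed()` are stand-ins that just record opcodes into a dword buffer.]

```c
/* Toy model of the fixed emission order -- NOT the real xe driver code.
 * Each stand-in "emit" records an opcode into a dword buffer so the
 * ordering can be inspected. */
#include <assert.h>

enum op { OP_BB_START = 1, OP_FLUSH_GGTT, OP_STORE_UFENCE, OP_USER_IRQ };

static int emit(int *dw, int i, enum op o)
{
    dw[i] = o;
    return i + 1;
}

/* Emission order after the patch, as in __emit_job_gen12_simple() and
 * __emit_job_gen12_video(): the flush now precedes the user-fence store. */
static int emit_job_fixed(int *dw, int user_fence_used)
{
    int i = 0;

    i = emit(dw, i, OP_BB_START);     /* emit_bb_start() */
    i = emit(dw, i, OP_FLUSH_GGTT);   /* emit_flush_imm_ggtt(): write barrier */
    if (user_fence_used)
        i = emit(dw, i, OP_STORE_UFENCE); /* emit_store_imm_ppgtt_posted() */
    i = emit(dw, i, OP_USER_IRQ);     /* emit_user_interrupt() */
    return i;
}
```

With a user fence in use, the buffer reads BB_START, FLUSH_GGTT, STORE_UFENCE, USER_IRQ; before the patch, the flush came after the store, so userspace could observe the fence value while workload writes were still in flight.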

