[PATCH] drm/xe/guc: Configure TLB timeout based on CT buffer size

Nirmoy Das nirmoy.das at intel.com
Wed Jun 26 07:51:28 UTC 2024


Hi Matt,

On 6/26/2024 9:41 AM, Matthew Brost wrote:
> On Wed, Jun 26, 2024 at 09:33:46AM +0200, Nirmoy Das wrote:
>> Hi Matt,
>>
>> On 6/26/2024 12:12 AM, Matthew Brost wrote:
>>> On Tue, Jun 25, 2024 at 10:49:47AM +0200, Nirmoy Das wrote:
>>>> GuC TLB invalidation depends on the GuC processing the request from the CT
>>>> queue and then on the actual time needed to invalidate the TLB. Add a function
>>>> that returns an overestimate of the time a TLB inval H2G might take, which can
>>>> be used as the timeout value when waiting for TLB invalidation.
>>>>
>>>>
>>> Not reviewing this patch as some reviews seem to be in flight, just
>>> adding some thoughts. I will say that this patch looks correct as a
>>> short-term fix.
>>>
>>> Long term I think we need to explore coalescing TLB invalidations targeting
>>> the same VM when pressure exists (VM bind case, [1] should help here
>>> a bit) or
>> I assume you mean to queue TLB requests in the kernel for some time and then
>> coalesce them before sending.
>>
> Yes exactly. A rough idea would be:
>
> - Have a watermark between the TLB invalidation seqnos of send / recv
> - If the difference between send / recv is higher than the watermark, start
>    holding TLB invalidations in the kernel, coalescing them per VM
> - Once we drop back below another watermark of send / recv, issue all
>    coalesced TLB invalidations
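
To make sure I read that right, here is a minimal user-space sketch of the
watermark idea (the tlb_inval_state struct, the HIGH/LOW_WATERMARK values and
the function names are all hypothetical; this only models the send/recv seqno
bookkeeping and is not xe driver code):

	#include <stdbool.h>
	#include <stdio.h>

	#define HIGH_WATERMARK 32	/* start coalescing above this many in-flight invals */
	#define LOW_WATERMARK   8	/* resume sending once we drop back below this */

	struct tlb_inval_state {
		unsigned int seqno_send;	/* last seqno handed to the GuC */
		unsigned int seqno_recv;	/* last seqno the GuC has acked */
		unsigned int coalesced;		/* invalidations held back for this VM */
		bool coalescing;
	};

	/* Decide whether a new invalidation is sent now or coalesced for later. */
	static void tlb_inval_request(struct tlb_inval_state *s)
	{
		unsigned int inflight = s->seqno_send - s->seqno_recv;

		if (!s->coalescing && inflight > HIGH_WATERMARK)
			s->coalescing = true;

		if (s->coalescing) {
			s->coalesced++;		/* merge into one pending VM-wide inval */
			return;
		}

		s->seqno_send++;		/* issue immediately */
	}

	/* Called when the GuC acks an invalidation (G2H). */
	static void tlb_inval_done(struct tlb_inval_state *s)
	{
		s->seqno_recv++;

		if (s->coalescing && s->seqno_send - s->seqno_recv < LOW_WATERMARK) {
			if (s->coalesced) {
				s->seqno_send++;	/* one merged invalidation for the whole VM */
				s->coalesced = 0;
			}
			s->coalescing = false;
		}
	}

	int main(void)
	{
		struct tlb_inval_state s = { 0 };

		for (int i = 0; i < 100; i++)
			tlb_inval_request(&s);

		while (s.seqno_recv < s.seqno_send)
			tlb_inval_done(&s);

		printf("issued %u H2G invalidations for 100 requests\n", s.seqno_send);
		return 0;
	}

With 100 back-to-back requests this folds everything past the high watermark
into a single flushed invalidation for the VM, which I think matches the
behaviour described above.
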
>
>>>    optimize out invalidations
>> What do you mean? Queue GGTT invalidations and send only one?
>>>    (GGTT case, at one point I had
>>> logic in for this but pulled it out as it was buggy).
>>>
>>> I say this because when debugging [2] I found that lots of TLB
>>> invalidations can overwhelm the GuC to the point where it can barely
>>> make forward progress on submissions.
>> This sounds very serious!
>>> The former is likely a fairly large refactor, while the latter shouldn't
>>> be too difficult.
>>>
>>> Something for us to keep in mind as a group.
>> Created https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/2162 to track
>> it.
>>
> Great! Thanks! Maybe add some of the comments from this reply to that issue?

Yes, the above idea needs to be documented in the issue; I will do that.


Thanks,

Nirmoy

>
> Matt
>
>> Thanks,
>>
>> Nirmoy
>>
>>> Matt
>>>
>>> [1] https://patchwork.freedesktop.org/series/133034/
>>> [2] https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/799#note_2449497
>>>
>>>> Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/1622
>>>> Cc: Matthew Brost <matthew.brost at intel.com>
>>>> Suggested-by: Daniele Ceraolo Spurio <daniele.ceraolospurio at intel.com>
>>>> Signed-off-by: Nirmoy Das <nirmoy.das at intel.com>
>>>> ---
>>>>    drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c |  2 +-
>>>>    drivers/gpu/drm/xe/xe_guc_ct.c              | 12 ++++++++++++
>>>>    drivers/gpu/drm/xe/xe_guc_ct.h              |  2 ++
>>>>    3 files changed, 15 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>>>> index e1f1ccb01143..fa61070d6201 100644
>>>> --- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>>>> +++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
>>>> @@ -17,7 +17,7 @@
>>>>    #include "xe_trace.h"
>>>>    #include "regs/xe_guc_regs.h"
>>>> -#define TLB_TIMEOUT	(HZ / 4)
>>>> +#define TLB_TIMEOUT	xe_guc_tlb_timeout_jiffies()
>>>>    static void xe_gt_tlb_fence_timeout(struct work_struct *work)
>>>>    {
>>>> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
>>>> index b4137fe195a4..e30c0da86acc 100644
>>>> --- a/drivers/gpu/drm/xe/xe_guc_ct.c
>>>> +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
>>>> @@ -112,6 +112,18 @@ ct_to_xe(struct xe_guc_ct *ct)
>>>>    #define CTB_G2H_BUFFER_SIZE	(4 * CTB_H2G_BUFFER_SIZE)
>>>>    #define G2H_ROOM_BUFFER_SIZE	(CTB_G2H_BUFFER_SIZE / 4)
>>>> +/**
>>>> + * xe_guc_tlb_timeout_jiffies - Calculate the maximum time to process a TLB inval command
>>>> + *
>>>> + * This function computes the maximum time to process TLB inval H2G commands
>>>> + * in jiffies. A 4KB buffer full of commands takes a little over a second to process,
>>>> + * so this time is set to 2 seconds to be safe.
>>>> + */
>>>> +long xe_guc_tlb_timeout_jiffies(void)
>>>> +{
>>>> +	return (CTB_H2G_BUFFER_SIZE * HZ) / SZ_2K;
>>>> +}
>>>> +
>>>>    static size_t guc_ct_size(void)
>>>>    {
>>>>    	return 2 * CTB_DESC_SIZE + CTB_H2G_BUFFER_SIZE +
>>>> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.h b/drivers/gpu/drm/xe/xe_guc_ct.h
>>>> index 105bb8e99a8d..a9755574d6c9 100644
>>>> --- a/drivers/gpu/drm/xe/xe_guc_ct.h
>>>> +++ b/drivers/gpu/drm/xe/xe_guc_ct.h
>>>> @@ -64,4 +64,6 @@ xe_guc_ct_send_block_no_fail(struct xe_guc_ct *ct, const u32 *action, u32 len)
>>>>    	return xe_guc_ct_send_recv_no_fail(ct, action, len, NULL);
>>>>    }
>>>> +long xe_guc_tlb_timeout_jiffies(void);
>>>> +
>>>>    #endif
>>>> -- 
>>>> 2.42.0
>>>>
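
For reference, the formula in the patch works out to roughly two seconds,
matching the kernel-doc: with CTB_H2G_BUFFER_SIZE at 4K (as the comment
implies) and, for example, HZ = 250, (4096 * 250) / 2048 = 500 jiffies = 2 s.
A stand-alone sketch of that arithmetic (HZ and the buffer size here are
assumed values for illustration, not taken from a real kernel config):

	#include <stdio.h>

	#define HZ			250		/* assumed; depends on kernel config */
	#define SZ_1K			1024
	#define SZ_2K			2048
	#define CTB_H2G_BUFFER_SIZE	(4 * SZ_1K)	/* 4K assumed, per the kernel-doc above */

	/* Same formula as the patch's xe_guc_tlb_timeout_jiffies() */
	static long tlb_timeout_jiffies(void)
	{
		return (CTB_H2G_BUFFER_SIZE * HZ) / SZ_2K;
	}

	int main(void)
	{
		long t = tlb_timeout_jiffies();

		/* (4096 * 250) / 2048 = 500 jiffies = 2 seconds at HZ = 250 */
		printf("TLB timeout: %ld jiffies (%ld s)\n", t, t / HZ);
		return 0;
	}
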

