[PATCH v3 3/3] drm/xe/vf: Fixup CTB send buffer messages after migration

Lis, Tomasz tomasz.lis at intel.com
Tue Dec 3 18:21:58 UTC 2024


On 28.11.2024 00:21, Michal Wajdeczko wrote:
> On 23.11.2024 04:13, Tomasz Lis wrote:
>> During post-migration recovery of a VF, it in necessary to update
> typo: it is ?
right
>> GGTT references included in messages which are going to be sent
>> to GuC. GuC will start consuming messages after VF KMD will inform
>> it about fixups being done; before that, the VF KMD is expected
>> to update any H2G messages which are already in send buffer but
>> were not consumed by GuC.
>>
>> Only a small subset of messages allowed for VFs have GGTT references
>> in them. This patch adds the functionality to parse the CTB send
>> ring buffer and shift addresses contained within.
>>
>> While fixing the CTB content, ct->lock is not taken. This means
>> the only barrier taken remains GGTT address lock - which is ok,
>> because only requests with GGTT addresses matter, but it also means
>> tail changes can happen during the CTB fixups execution (which may
>> be ignored as any new messages will not have anything to fix).
> are you sure that there will be no inflight h2g actions during fixup?
>
> 	CPU1			CPU2
>
> 	ggtt.lock()
> 	h2g.a = ggtt.a
> 	ggtt.unlock()
> 				ggtt.lock()
> 				ggtt.a += shift
> 				ctb.fixups(shift)
> 				ggtt.unlock()
> 	ctb.send(h2g)
>
While we cannot completely rule out the GuC being active during restore,
such a situation is extremely unlikely and we accept it as being prone to
failures.

We do not guarantee that we can successfully handle such a situation. What
we have to guarantee is that it will not have bad consequences beyond a
driver wedge. So, we cannot cause a deadlock. But sending an invalid
request is not a problem -- again, in this extremely unlikely situation of
the GuC being active, which could only be caused by a very fast second
migration just after the first one was finalized, and at a very unfortunate
timing of millisecond precision. It's a corner case of a corner case.


Also please note that in this comment I'm not referring to the GGTT access
lock which we already have, but to the GGTT address lock, which has not
even been introduced yet.


>> The GGTT address locking will be introduced in a future series.
>>
>> v2: removed storing shift as that's now done in VMA nodes patch;
>>    macros to inlines; warns to asserts; log messages fixes (Michal)
>>
>> Signed-off-by: Tomasz Lis<tomasz.lis at intel.com>
>> ---
>>   drivers/gpu/drm/xe/xe_guc_ct.c   | 148 +++++++++++++++++++++++++++++++
>>   drivers/gpu/drm/xe/xe_guc_ct.h   |   2 +
>>   drivers/gpu/drm/xe/xe_sriov_vf.c |  16 ++++
>>   3 files changed, 166 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
>> index 7eb175a0b874..212a6795ec8b 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_ct.c
>> +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
>> @@ -84,6 +84,8 @@ struct g2h_fence {
>>   	bool done;
>>   };
>>   
>> +#define make_u64(hi, lo) ((u64)((u64)(u32)(hi) << 32 | (u32)(lo)))
>> +
>>   static void g2h_fence_init(struct g2h_fence *g2h_fence, u32 *response_buffer)
>>   {
>>   	g2h_fence->response_buffer = response_buffer;
>> @@ -1620,6 +1622,152 @@ static void g2h_worker_func(struct work_struct *w)
>>   	receive_g2h(ct);
>>   }
>>   
>> +static inline u32 ctb_read32(struct xe_device *xe, struct iosys_map *cmds,
> drop "inline" and let compiler do the job

That keyword does not prevent any compiler from doing its job. The
compiler still decides whether it inlines or not; this just alters the
weights used to make the decision.

Regardless, sure - can remove. It will get inlined either way.
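
For reference, the de-inlined version would just be (same body, only the
hint dropped):

static u32 ctb_read32(struct xe_device *xe, struct iosys_map *cmds,
		      u32 head, u32 pos)
{
	u32 msg[1];

	xe_map_memcpy_from(xe, msg, cmds, (head + pos) * sizeof(u32),
			   1 * sizeof(u32));
	return msg[0];
}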

>> +			     u32 head, u32 pos)
>> +{
>> +	u32 msg[1];
>> +
>> +	xe_map_memcpy_from(xe, msg, cmds, (head + pos) * sizeof(u32),
>> +			   1 * sizeof(u32));
>> +	return msg[0];
>> +}
>> +
>> +static inline void ctb_fixup64(struct xe_device *xe, struct iosys_map *cmds,
> same here
>
>> +			       u32 head, u32 pos, s64 shift)
>> +{
>> +	u32 msg[2];
>> +	u64 offset;
>> +
>> +	xe_map_memcpy_from(xe, msg, cmds, (head + pos) * sizeof(u32),
>> +			   2 * sizeof(u32));
>> +	offset = make_u64(msg[1], msg[0]);
>> +	offset += shift;
>> +	msg[0] = lower_32_bits(offset);
>> +	msg[1] = upper_32_bits(offset);
>> +	xe_map_memcpy_to(xe, cmds, (head + pos) * sizeof(u32), msg, 2 * sizeof(u32));
>> +}
>> +
>> +/*
>> + * ct_update_addresses_in_message - Shift any GGTT addresses within
>> + * a single message left within CTB from before post-migration recovery.
>> + * @ct: pointer to CT struct of the target GuC
>> + * @cmds: iomap buffer containing CT messages
>> + * @head: start of the target message within the buffer
>> + * @len: length of the target message
>> + * @size: size of the commands buffer
>> + * @shift: the address shift to be added to each GGTT reference
>> + */
>> +static void ct_update_addresses_in_message(struct xe_guc_ct *ct,
>> +					   struct iosys_map *cmds, u32 head,
>> +					   u32 len, u32 size, s64 shift)
>> +{
>> +	struct xe_device *xe = ct_to_xe(ct);
>> +	u32 action, i, n;
>> +	u32 msg[1];
>> +
>> +	xe_map_memcpy_from(xe, msg, cmds, head * sizeof(u32),
>> +			   1 * sizeof(u32));
>> +	action = FIELD_GET(GUC_HXG_REQUEST_MSG_0_ACTION, msg[0]);
>> +	switch (action) {
>> +	case XE_GUC_ACTION_REGISTER_CONTEXT:
>> +	case XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC:
>> +		/* field wq_desc */
>> +		ctb_fixup64(xe, cmds, head, 5, shift);
> can we add some defines in abi/ files for these magic offsets?

The code will not by cleaner this way. But sure, will introduce some enums.

(based on current look, the abi file seem to prefer enums over defines)
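
Something along these lines, then (placeholder names only, not the final
abi/ definitions -- just to show the direction, with offsets taken from
the code below):

/* sketch; names are placeholders pending the actual abi/ update */
enum xe_guc_register_context_offsets {
	XE_GUC_REGISTER_CONTEXT_MSG_WQ_DESC_OFFSET		= 5,
	XE_GUC_REGISTER_CONTEXT_MSG_WQ_BASE_OFFSET		= 7,
	XE_GUC_REGISTER_CONTEXT_MSG_HWLRCA_OFFSET		= 10,
	XE_GUC_REGISTER_CONTEXT_MULTI_LRC_MSG_NUM_CHILDREN	= 10,
	XE_GUC_REGISTER_CONTEXT_MULTI_LRC_MSG_HWLRCA_OFFSET	= 11,
};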

>> +		/* field wq_base */
>> +		ctb_fixup64(xe, cmds, head, 7, shift);
>> +		if (action == XE_GUC_ACTION_REGISTER_CONTEXT_MULTI_LRC) {
>> +			/* field number_children */
>> +			n = ctb_read32(xe, cmds, head, 10);
>> +			/* field hwlrca and child lrcas */
>> +			for (i = 0; i < n; i++)
>> +				ctb_fixup64(xe, cmds, head, 11 + 2 * i, shift);
>> +		} else {
>> +			/* field hwlrca */
>> +			ctb_fixup64(xe, cmds, head, 10, shift);
>> +		}
>> +		break;
>> +	default:
>> +		break;
>> +	}
>> +}
>> +
>> +static int ct_update_addresses_in_buffer(struct xe_guc_ct *ct,
>> +					 struct guc_ctb *h2g,
>> +					 s64 shift, u32 *mhead, s32 avail)
>> +{
>> +	struct xe_device *xe = ct_to_xe(ct);
>> +	u32 head = *mhead;
>> +	u32 size = h2g->info.size;
>> +	u32 msg[1];
>> +	u32 len;
>> +
>> +	/* Read header */
>> +	xe_map_memcpy_from(xe, msg, &h2g->cmds, sizeof(u32) * head,
>> +			   sizeof(u32));
>> +	len = FIELD_GET(GUC_CTB_MSG_0_NUM_DWORDS, msg[0]) + GUC_CTB_MSG_MIN_LEN;
>> +
>> +	if (unlikely(len > (u32)avail)) {
>> +		struct xe_gt *gt = ct_to_gt(ct);
>> +
>> +		xe_gt_err(gt, "H2G channel broken on read, avail=%d, len=%d, fixups skipped\n",
>> +			  avail, len);
>> +		return 0;
>> +	}
>> +
>> +	head = (head + 1) % size;
>> +	ct_update_addresses_in_message(ct, &h2g->cmds, head, len - 1, size, shift);
>> +	*mhead = (head + len - 1) % size;
>> +
>> +	return avail - len;
>> +}
>> +
>> +/**
>> + * xe_guc_ct_update_addresses - Fixup any pending H2G CTB messages by updating
>> + * GGTT offsets in their payloads.
>> + * @ct: pointer to CT struct of the target GuC
>> + * @ggtt_shift: shift to be added to all GGTT adresses within the CTB
> typo: addresses
>
> Return: ?
to the void with it.
>> + */
>> +int xe_guc_ct_update_addresses(struct xe_guc_ct *ct, s64 ggtt_shift)
>> +{
>> +	struct xe_guc *guc = ct_to_guc(ct);
>> +	struct xe_gt *gt = guc_to_gt(guc);
>> +	struct guc_ctb *h2g = &ct->ctbs.h2g;
>> +	u32 head = h2g->info.head;
>> +	u32 tail = READ_ONCE(h2g->info.tail);
>> +	u32 size = h2g->info.size;
>> +	s32 avail;
> should we care if CTB was disabled when the VF was migrated?

If CTBs were not registered during migration, then we shouldn't have 
received an IRQ.

If a situation happened when CTBs were not registered and we did got an 
IRQ (which means

the user must have been doing some unusual stuff within the VM), we 
should finish the

migration by reset. But that is not a part of this patch.

>> +
>> +	if (unlikely(h2g->info.broken))
>> +		return -EPIPE;
> if it was broken before migration then maybe fixup is not needed and we
> should return a success here?

The fixups for CTB are not needed, but the rest still has to be applied.

We are continuing the fixups regardless of error codes from previous 
ones, except

for the VMAs fixups for which error path was altered after previous 
round of review.

>> +
>> +	xe_gt_assert(gt, head > size);
> hmm, is this correct?
right, my mistake
>> +
>> +	if (unlikely(tail >= size)) {
>> +		xe_gt_err(gt, "H2G channel has invalid tail offset (%u >= %u)\n",
>> +			 tail, size);
> maybe just extend xe_gt_err() below with extra info instead of this
> split into two error messages that is from one error case anyway?
>
> btw, I hope that we will have a clear indication that such error was
> found during recovery (as now it looks like generic CTB error)
will merge.
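Roughly along these lines, with the recovery context indicated (exact
wording still to be settled):

	xe_gt_err(gt, "H2G CTB fixup: corrupted descriptor, head=%u tail=%u size=%u\n",
		  head, tail, size);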
>> +		goto corrupted;
>> +	}
>> +
>> +	avail = tail - head;
>> +
>> +	/* beware of buffer wrap case */
>> +	if (unlikely(avail < 0))
>> +		avail += size;
>> +	xe_gt_dbg(gt, "available %d (%u:%u:%u)\n", avail, head, tail, size);
>> +	XE_WARN_ON(avail < 0);
> use xe_gt_assert() instead
ok.
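i.e.:

	xe_gt_assert(gt, avail >= 0);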
>> +
>> +	while (avail > 0)
>> +		avail = ct_update_addresses_in_buffer(ct, h2g, ggtt_shift, &head, avail);
>> +
>> +	return 0;
>> +
>> +corrupted:
>> +	xe_gt_err(gt, "Corrupted descriptor head=%u tail=%u\n",
>> +		 head, tail);
> no need to wrap (yet)
>
>> +	h2g->info.broken = true;
>> +	return -EPIPE;
>> +}
>> +
>>   static struct xe_guc_ct_snapshot *guc_ct_snapshot_alloc(struct xe_guc_ct *ct, bool atomic,
>>   							bool want_ctb)
>>   {
>> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.h b/drivers/gpu/drm/xe/xe_guc_ct.h
>> index 82c4ae458dda..25e5ee71d853 100644
>> --- a/drivers/gpu/drm/xe/xe_guc_ct.h
>> +++ b/drivers/gpu/drm/xe/xe_guc_ct.h
>> @@ -22,6 +22,8 @@ void xe_guc_ct_snapshot_print(struct xe_guc_ct_snapshot *snapshot, struct drm_pr
>>   void xe_guc_ct_snapshot_free(struct xe_guc_ct_snapshot *snapshot);
>>   void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p, bool want_ctb);
>>   
>> +int xe_guc_ct_update_addresses(struct xe_guc_ct *ct, s64 ggtt_shift);
> nit: maybe ... update_messages() or fixup_messages_with_ggtt() ?
ok, will rename.
>> +
>>   static inline bool xe_guc_ct_enabled(struct xe_guc_ct *ct)
>>   {
>>   	return ct->state == XE_GUC_CT_STATE_ENABLED;
>> diff --git a/drivers/gpu/drm/xe/xe_sriov_vf.c b/drivers/gpu/drm/xe/xe_sriov_vf.c
>> index 4ee8fc70a744..1cb8878e6fad 100644
>> --- a/drivers/gpu/drm/xe/xe_sriov_vf.c
>> +++ b/drivers/gpu/drm/xe/xe_sriov_vf.c
>> @@ -10,6 +10,7 @@
>>   #include "xe_gt.h"
>>   #include "xe_gt_sriov_printk.h"
>>   #include "xe_gt_sriov_vf.h"
>> +#include "xe_guc_ct.h"
>>   #include "xe_pm.h"
>>   #include "xe_sriov.h"
>>   #include "xe_sriov_printk.h"
>> @@ -158,6 +159,18 @@ static int vf_post_migration_requery_guc(struct xe_device *xe)
>>   	return ret;
>>   }
>>   
>> +static void vf_post_migration_fixup_ctb(struct xe_device *xe)
>> +{
>> +	struct xe_gt *gt;
>> +	unsigned int id;
>> +
> since you are accessing sriov.vf fields directly, maybe worth to add
>
> 	xe_assert(xe, IS_SRIOV_VF(xe))
can do. Even though it's a static function.
>> +	for_each_gt(gt, xe, id) {
>> +		struct xe_gt_sriov_vf_selfconfig *config = &gt->sriov.vf.self_config;
>> +
>> +		xe_guc_ct_update_addresses(&gt->uc.guc.ct, config->ggtt_shift);
> this function returns int, shouldn't we check for errors?
> if not then maybe make it void
ok, will void it then.
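Together with the rename above, the declaration would then end up roughly
as (assuming the fixup_messages_with_ggtt name is the one that sticks):

	void xe_guc_ct_fixup_messages_with_ggtt(struct xe_guc_ct *ct, s64 ggtt_shift);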
>> +	}
>> +}
>> +
>>   /*
>>    * vf_post_migration_imminent - Check if post-restore recovery is coming.
>>    * @xe: the &xe_device struct instance
>> @@ -224,6 +237,9 @@ static void vf_post_migration_recovery(struct xe_device *xe)
>>   
>>   	err = vf_post_migration_fixup_ggtt_nodes(xe);
>>   	/* FIXME: add the recovery steps */
> will be more recovery steps _here_ or below ?

here. The comment is at the proper place.

-Tomasz

>> +	if (err != ENODATA)
>> +		vf_post_migration_fixup_ctb(xe);
>> +
>>   	vf_post_migration_notify_resfix_done(xe);
>>   	xe_pm_runtime_put(xe);
>>   	drm_notice(&xe->drm, "migration recovery ended\n");

