[PATCH 2/2] drm/xe/exec: reserve fence slot for CPU bind
Matthew Auld
matthew.auld at intel.com
Wed Dec 13 17:47:05 UTC 2023
It looks possible to switch from CPU binding to GPU binding mid exec, and
if that happens for the same dma-resv we might use two fence slots: one
for the dummy CPU-bind fence, and another for the actual GPU bind.
Reserve an extra fence slot to cover that case.
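
For reference, the accounting this patch changes is the dma-resv fence slot
reservation done while locking the exec objects. Below is a rough sketch of
the idea, not the literal xe_exec_fn() code: the surrounding drm_exec/drm_gpuvm
locking and validation are omitted, and the xe_vm/xe_device types are
driver-internal and shown only for illustration.

#include <linux/dma-resv.h>
#include <drm/drm_exec.h>
#include <drm/drm_gem.h>
#include <drm/drm_gpuvm.h>

/* Simplified sketch: reserve enough dma-resv fence slots on every locked
 * object before any bind or the final submit attaches fences.
 */
static int reserve_exec_fence_slots(struct drm_gpuvm_exec *vm_exec,
				    struct xe_vm *vm)
{
	struct drm_gem_object *obj;
	unsigned long index;
	unsigned int num_fences;
	int ret;

	/*
	 * 1 slot for the final submit fence, 1 for the dummy fence shared by
	 * all CPU binds, and 1 per tile for any GPU binds.
	 */
	num_fences = 1 + 1 + vm->xe->info.tile_count;

	drm_exec_for_each_locked_object(&vm_exec->exec, index, obj) {
		ret = dma_resv_reserve_fences(obj->resv, num_fences);
		if (ret)
			return ret;
	}

	return 0;
}

Only one extra slot is needed for the CPU-bind case because every CPU bind
signals the same dummy fence, whereas GPU binds may add one fence per tile.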
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/698
Signed-off-by: Matthew Auld <matthew.auld at intel.com>
Cc: Thomas Hellström <thomas.hellstrom at linux.intel.com>
Cc: Matthew Brost <matthew.brost at intel.com>
---
drivers/gpu/drm/xe/xe_exec.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_exec.c b/drivers/gpu/drm/xe/xe_exec.c
index 63e82e5285bc..0c78a377f453 100644
--- a/drivers/gpu/drm/xe/xe_exec.c
+++ b/drivers/gpu/drm/xe/xe_exec.c
@@ -107,12 +107,14 @@ static int xe_exec_fn(struct drm_gpuvm_exec *vm_exec)
return ret;
/*
- * 1 fence slot for the final submit, and one more for every per-tile
- * bind. Note that there are potentially many vma per object/dma-resv,
- * however the fence slot will just be re-used, since they are largely
- * the same timeline and the seqno should be in order.
+ * 1 fence slot for the final submit, 1 more for every per-tile GPU
+ * bind and 1 extra for the CPU bind. Note that there are potentially
+ * many vma per object/dma-resv, however the fence slot will just be
+ * re-used, since they are largely the same timeline and the seqno
+ * should be in order. In the case of CPU bind there is a dummy fence
+ * used for all CPU binds, so no need to have a per-tile slot for that.
*/
- num_fences = 1 + vm->xe->info.tile_count;
+ num_fences = 1 + 1 + vm->xe->info.tile_count;
/*
* We don't know upfront exactly how many fence slots we will need at
--
2.43.0