[Intel-xe] [PATCH v3] drm/xe: Reinstate pipelined fence enable_signaling
Jani Nikula
jani.nikula at linux.intel.com
Wed Sep 27 12:17:19 UTC 2023
On Fri, 15 Sep 2023, Thomas Hellström <thomas.hellstrom at linux.intel.com> wrote:
> With the GPUVA conversion, the xe_bo::vmas member was replaced by
> drm_gem_object::gpuva.list, but a couple of usages of the old member
> were left behind, most notably the pipelined fence enable_signaling.
>
> Remove the xe_bo::vmas member completely, fix the remaining usages and
> also enable the pipelined fence enable_signaling even for faulting
> VMs, since we actually wait for bind fences to complete.
>
> v2:
> - Rebase.
> v3:
> - Fix display code build error.
>
> Cc: Matthew Brost <matthew.brost at intel.com>
> Signed-off-by: Thomas Hellström <thomas.hellstrom at linux.intel.com>
> Reviewed-by: Matthew Brost <matthew.brost at intel.com>
> ---
> drivers/gpu/drm/i915/display/intel_fb.c | 2 +-
Commits touching i915 should be separated from the rest, even at the
cost of leaving a broken commit in the middle. Combining i915 and xe
changes leads to conflicts that need to be addressed when rebasing
drm-xe-next for upstream submission. Separate patches are easier to
deal with and to squash into other patches.
No core xe enabling patch can be sent upstream with i915 changes.
BR,
Jani.
> drivers/gpu/drm/xe/xe_bo.c | 5 ++---
> drivers/gpu/drm/xe/xe_bo_types.h | 2 --
> drivers/gpu/drm/xe/xe_pt.c | 2 +-
> 4 files changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/display/intel_fb.c b/drivers/gpu/drm/i915/display/intel_fb.c
> index f5a96b94cfba..d5b9b0255c6a 100644
> --- a/drivers/gpu/drm/i915/display/intel_fb.c
> +++ b/drivers/gpu/drm/i915/display/intel_fb.c
> @@ -2012,7 +2012,7 @@ int intel_framebuffer_init(struct intel_framebuffer *intel_fb,
> * mode when the object is VM_BINDed, so we can only set
> * coherency with display when unbound.
> */
> - if (XE_IOCTL_DBG(dev_priv, !list_empty(&obj->vmas))) {
> + if (XE_IOCTL_DBG(dev_priv, !list_empty(&obj->ttm.base.gpuva.list))) {
> ttm_bo_unreserve(&obj->ttm);
> goto err;
> }
> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> index 27726d4f3423..c5e4d04c4d58 100644
> --- a/drivers/gpu/drm/xe/xe_bo.c
> +++ b/drivers/gpu/drm/xe/xe_bo.c
> @@ -455,7 +455,7 @@ static int xe_bo_trigger_rebind(struct xe_device *xe, struct xe_bo *bo,
>
> dma_resv_assert_held(bo->ttm.base.resv);
>
> - if (!xe_device_in_fault_mode(xe) && !list_empty(&bo->vmas)) {
> + if (!list_empty(&bo->ttm.base.gpuva.list)) {
> dma_resv_iter_begin(&cursor, bo->ttm.base.resv,
> DMA_RESV_USAGE_BOOKKEEP);
> dma_resv_for_each_fence_unlocked(&cursor, fence)
> @@ -1046,7 +1046,7 @@ static void xe_ttm_bo_destroy(struct ttm_buffer_object *ttm_bo)
> drm_prime_gem_destroy(&bo->ttm.base, NULL);
> drm_gem_object_release(&bo->ttm.base);
>
> - xe_assert(xe, list_empty(&bo->vmas));
> + xe_assert(xe, list_empty(&ttm_bo->base.gpuva.list));
>
> if (bo->ggtt_node.size)
> xe_ggtt_remove_bo(bo->tile->mem.ggtt, bo);
> @@ -1229,7 +1229,6 @@ struct xe_bo *__xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo,
> bo->props.preferred_gt = XE_BO_PROPS_INVALID;
> bo->props.preferred_mem_type = XE_BO_PROPS_INVALID;
> bo->ttm.priority = DRM_XE_VMA_PRIORITY_NORMAL;
> - INIT_LIST_HEAD(&bo->vmas);
> INIT_LIST_HEAD(&bo->pinned_link);
>
> drm_gem_private_object_init(&xe->drm, &bo->ttm.base, size);
> diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h
> index 2ea9ad423170..946427fd3fe8 100644
> --- a/drivers/gpu/drm/xe/xe_bo_types.h
> +++ b/drivers/gpu/drm/xe/xe_bo_types.h
> @@ -31,8 +31,6 @@ struct xe_bo {
> struct xe_vm *vm;
> /** @tile: Tile this BO is attached to (kernel BO only) */
> struct xe_tile *tile;
> - /** @vmas: List of VMAs for this BO */
> - struct list_head vmas;
> /** @placements: valid placements for this BO */
> struct ttm_place placements[XE_BO_MAX_PLACEMENTS];
> /** @placement: current placement for this BO */
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index d1e06c913260..ce8d9e9d1b61 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -265,7 +265,7 @@ void xe_pt_destroy(struct xe_pt *pt, u32 flags, struct llist_head *deferred)
> if (!pt)
> return;
>
> - XE_WARN_ON(!list_empty(&pt->bo->vmas));
> + XE_WARN_ON(!list_empty(&pt->bo->ttm.base.gpuva.list));
> xe_bo_unpin(pt->bo);
> xe_bo_put_deferred(pt->bo, deferred);
--
Jani Nikula, Intel