[PATCH 03/13] drm/xe: Move migrate to prefetch to op_lock_and_prep function
Zeng, Oak
oak.zeng at intel.com
Tue Apr 23 03:32:26 UTC 2024
> -----Original Message-----
> From: Brost, Matthew <matthew.brost at intel.com>
> Sent: Friday, April 19, 2024 3:53 PM
> To: Zeng, Oak <oak.zeng at intel.com>
> Cc: intel-xe at lists.freedesktop.org
> Subject: Re: [PATCH 03/13] drm/xe: Move migrate to prefetch to
> op_lock_and_prep function
>
> On Thu, Apr 18, 2024 at 01:27:13PM -0600, Zeng, Oak wrote:
> >
> >
> > > -----Original Message-----
> > > From: Brost, Matthew <matthew.brost at intel.com>
> > > Sent: Wednesday, April 10, 2024 1:41 AM
> > > To: intel-xe at lists.freedesktop.org
> > > Cc: Brost, Matthew <matthew.brost at intel.com>; Zeng, Oak
> > > <oak.zeng at intel.com>
> > > Subject: [PATCH 03/13] drm/xe: Move migrate to prefetch to
> > > op_lock_and_prep function
> > >
> > > All non-binding operations in VM bind IOCTL should be in the lock and
> > > prepare step rather than the execution step. Move prefetch to conform to
> > > this pattern.
> > >
> > > v2:
> > > - Rebase
> > > - New function names (Oak)
> > > - Update stale comment (Oak)
> > >
> > > Cc: Oak Zeng <oak.zeng at intel.com>
> > > Signed-off-by: Matthew Brost <matthew.brost at intel.com>
> > > ---
> > > drivers/gpu/drm/xe/xe_vm.c | 30 +++++++++++++++---------------
> > > 1 file changed, 15 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> > > index 84c6b10b4b78..2c0521573154 100644
> > > --- a/drivers/gpu/drm/xe/xe_vm.c
> > > +++ b/drivers/gpu/drm/xe/xe_vm.c
> > > @@ -2039,20 +2039,10 @@ static const u32 region_to_mem_type[] = {
> > >
> > >  static struct dma_fence *
> > >  xe_vm_prefetch(struct xe_vm *vm, struct xe_vma *vma,
> > > -	       struct xe_exec_queue *q, u32 region,
> > > -	       struct xe_sync_entry *syncs, u32 num_syncs,
> > > -	       bool first_op, bool last_op)
> > > +	       struct xe_exec_queue *q, struct xe_sync_entry *syncs,
> > > +	       u32 num_syncs, bool first_op, bool last_op)
> >
> >
> > I am wondering, do you still need this function? The original prefetch
> > function is migration + vm_bind. Now that you have moved the migration to
> > the lock_and_prepare step, only the vm_bind is left...
> >
> > Even if you keep this function, we should change the name... it is not a
> > prefetch anymore...
> >
>
> I'd rather leave it as is for the following reasons:
>
> 1. The code is slightly different and skips the bind under certain conditions.
> 2. It still implements the prefetch op, so the name applies.
> 3. This is just a staging patch and this function gets deleted once a
>    version of [1] is merged; I'd rather not squabble / nitpick code that
>    is temporary. The goal is to not regress behavior while making progress
>    towards [1].
Yeah, I eventually found that this function is deleted in the series below. Patch is:
Reviewed-by: Oak Zeng <oak.zeng at intel.com>
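
For reference on points 1 and 2 above: after this patch, xe_vm_prefetch() reduces to roughly the sketch below. It is assembled from the first hunk quoted further down, not the verbatim file; the arguments to xe_vm_bind() beyond num_syncs and the else path (taken when no rebind is needed) are outside the quoted context and are elided here.

static struct dma_fence *
xe_vm_prefetch(struct xe_vm *vm, struct xe_vma *vma,
	       struct xe_exec_queue *q, struct xe_sync_entry *syncs,
	       u32 num_syncs, bool first_op, bool last_op)
{
	/* Still needed by the omitted no-rebind path. */
	struct xe_exec_queue *wait_exec_queue = to_wait_exec_queue(vm, q);

	/* The xe_bo_migrate() call has moved to op_lock_and_prep();
	 * only the conditional (re)bind is left here. */
	if (vma->tile_mask != (vma->tile_present & ~vma->tile_invalidated))
		return xe_vm_bind(vm, vma, q, xe_vma_bo(vma), syncs, num_syncs,
				  /* remaining arguments elided, see hunk below */);

	/* else: all tiles already bound and valid -- no bind is issued;
	 * this path is unchanged by the patch and not shown in the hunk. */
}
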
>
> Matt
>
> [1] https://patchwork.freedesktop.org/patch/582024/?series=125608&rev=5
>
> > Oak
> >
> > >  {
> > >  	struct xe_exec_queue *wait_exec_queue = to_wait_exec_queue(vm, q);
> > > -	int err;
> > > -
> > > -	xe_assert(vm->xe, region < ARRAY_SIZE(region_to_mem_type));
> > > -
> > > -	if (!xe_vma_has_no_bo(vma)) {
> > > -		err = xe_bo_migrate(xe_vma_bo(vma), region_to_mem_type[region]);
> > > -		if (err)
> > > -			return ERR_PTR(err);
> > > -	}
> > >
> > >  	if (vma->tile_mask != (vma->tile_present & ~vma->tile_invalidated)) {
> > >  		return xe_vm_bind(vm, vma, q, xe_vma_bo(vma), syncs, num_syncs,
> > > @@ -2592,8 +2582,7 @@ static struct dma_fence *op_execute(struct xe_vm *vm, struct xe_vma *vma,
> > >  					   op->flags & XE_VMA_OP_LAST);
> > >  		break;
> > >  	case DRM_GPUVA_OP_PREFETCH:
> > > -		fence = xe_vm_prefetch(vm, vma, op->q, op->prefetch.region,
> > > -				       op->syncs, op->num_syncs,
> > > +		fence = xe_vm_prefetch(vm, vma, op->q, op->syncs, op->num_syncs,
> > >  				       op->flags & XE_VMA_OP_FIRST,
> > >  				       op->flags & XE_VMA_OP_LAST);
> > >  		break;
> > > @@ -2823,9 +2812,20 @@ static int op_lock_and_prep(struct drm_exec *exec, struct xe_vm *vm,
> > >  					    false);
> > >  		break;
> > >  	case DRM_GPUVA_OP_PREFETCH:
> > > +	{
> > > +		struct xe_vma *vma = gpuva_to_vma(op->base.prefetch.va);
> > > +		u32 region = op->prefetch.region;
> > > +
> > > +		xe_assert(vm->xe, region <= ARRAY_SIZE(region_to_mem_type));
> > > +
> > >  		err = vma_lock_and_validate(exec,
> > > -					    gpuva_to_vma(op->base.prefetch.va), true);
> > > +					    gpuva_to_vma(op->base.prefetch.va),
> > > +					    false);
> > > +		if (!err && !xe_vma_has_no_bo(vma))
> > > +			err = xe_bo_migrate(xe_vma_bo(vma),
> > > +					    region_to_mem_type[region]);
> > >  		break;
> > > +	}
> > >  	default:
> > >  		drm_warn(&vm->xe->drm, "NOT POSSIBLE");
> > >  	}
> > > --
> > > 2.34.1
> >
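
To illustrate the general pattern the commit message describes (all failable, non-binding work in the lock-and-prep phase, only the bind in the execute phase), here is a minimal standalone sketch with toy names. It is not the xe driver's API or structure, just the shape of the split:

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for driver objects; all names here are illustrative only. */
struct toy_op {
	bool is_prefetch;
	bool has_bo;
	int region;
};

/* Phase 1: locking, validation and anything that can fail -- including the
 * migration for a prefetch -- so an error can still be returned cleanly. */
static int toy_op_lock_and_prep(struct toy_op *op)
{
	if (op->is_prefetch && op->has_bo)
		printf("migrate BO to region %d\n", op->region);
	return 0;
}

/* Phase 2: only the bind itself; no failable preparation remains here. */
static void toy_op_execute(struct toy_op *op)
{
	printf("bind VMA (prefetch=%d)\n", op->is_prefetch);
}

int main(void)
{
	struct toy_op op = { .is_prefetch = true, .has_bo = true, .region = 1 };

	if (toy_op_lock_and_prep(&op))
		return 1;	/* nothing executed yet, easy to unwind */
	toy_op_execute(&op);
	return 0;
}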