[Intel-xe] [PATCH] drm/xe: Extend drm_xe_vm_bind_op
Joonas Lahtinen
joonas.lahtinen at linux.intel.com
Fri Sep 8 08:17:34 UTC 2023
(+ Thomas, with whom this was discussed, and Faith for the original reasoning behind op[])
Top-posting to discuss the alternative of having a single VM_BIND per
operation instead of an array of operations.
Thomas is working on the feature of returning an error to userspace from
the middle of the op[] array, and unrolling the operations plus the -EINTR
handling has come under discussion.
With the addition of debug metadata per op (aka per vma), the unrolling and
-EINTR handling will become even more complex.
Would there be a measurable performance penalty if we simplified and just
did VM_BIND(op)[] instead of VM_BIND(op[])?
Unrolling would be much easier and there would be no need for a complex
error returning scheme. The GPUVA operation may still split into a maximum
of 3 separate ops, but the rest of the unrolling, like the metadata, would
need to be dealt with only once.
Regards, Joonas
Quoting Rodrigo Vivi (2023-09-08 00:45:53)
> On Thu, Sep 07, 2023 at 04:51:21PM +0300, Mika Kuoppala wrote:
> > Rodrigo Vivi <rodrigo.vivi at intel.com> writes:
> >
> > > On Mon, Sep 04, 2023 at 05:46:44PM +0300, Mika Kuoppala wrote:
> > >> The bind api is extensible but for a single bind op, there
> > >> is not a mechanism to extend. Add extensions field to
> > >> struct drm_xe_vm_bind_op.
> > >
> > > But why would you want to extend the operation?
> > > Except for the destroy ones, every ioctl itself is extensible.
> > >
> > > So, DRM_IOCTL_XE_VM_BIND is extensible. Why would we need to get
> > > prepared to extend the operations themselves? And if we extend
> > > the operation, what to do with the extension at the ioctl level?
> > > which one has precedence? how to organize that?
> > >
> >
> > The intent is to pass debugger metadata as part of particular
> > vm bind operation. For example on MAP, we could associate
> > ELF/ISA (relevant parts) as metadata for this bind range.
> >
> > So in a vector of binds, we want to tag one specific map (possibly in
> > the middle) with debugger metadata.
> >
> > With extending the XE_VM_BIND itself this could be possible too,
> > but it would then need to deliver an index into the vector instead
> > of carrying the metadata as part of each operation.
> > At least in this example, the extension is heavily tied
> > to a particular OP (map).
> >
> > I take it that you mean the precedence of VM_BIND vs bind op extensions?
> > Excellent question, and I don't know all the use cases that
> > vm_bind has to cater to. So I can only refer to the example above:
> > VM_BIND extensions would be scoped to all operations,
> > and vm_bind_op extensions would be tightly coupled to each
> > operation only.
>
> So, maybe we should do the same union that we do with the
> ops themselves and make the extensions also an array of num_binds entries?
>
> >
> > Thanks for feedback!
> > -Mika
> >
> > >>
> > >> Cc: Rodrigo Vivi <rodrigo.vivi at intel.com>
> > >> Cc: Matthew Brost <matthew.brost at intel.com>
> > >> Cc: Lucas De Marchi <lucas.demarchi at intel.com>
> > >> Cc: Francois Dugast <francois.dugast at intel.com>
> > >> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> > >> Cc: Dominik Grzegorzek <dominik.grzegorzek at intel.com>
> > >> Signed-off-by: Mika Kuoppala <mika.kuoppala at linux.intel.com>
> > >> ---
> > >> include/uapi/drm/xe_drm.h | 3 +++
> > >> 1 file changed, 3 insertions(+)
> > >>
> > >> diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
> > >> index 86f16d50e9cc..5c6c86f5e5fc 100644
> > >> --- a/include/uapi/drm/xe_drm.h
> > >> +++ b/include/uapi/drm/xe_drm.h
> > >> @@ -552,6 +552,9 @@ struct drm_xe_vm_destroy {
> > >> };
> > >>
> > >> struct drm_xe_vm_bind_op {
> > >> + /** @extensions: Pointer to the first extension struct, if any */
> > >> + __u64 extensions;
> > >> +
> > >> /**
> > >> * @obj: GEM object to operate on, MBZ for MAP_USERPTR, MBZ for UNMAP
> > >> */
> > >> --
> > >> 2.34.1
> > >>