[PATCH drm-next v5 03/14] drm: manager to keep track of GPUs VA mappings

Dave Airlie airlied at gmail.com
Mon Jun 26 22:38:18 UTC 2023


> > As pointed out by Christian, this would optimize the "get all mappings
> > backed by a specific BO from a given VM" use case.
> >
> > The question for me is, do other drivers than amdgpu commonly need this?
>
> I have no idea.
>
> >
> > And what does amdgpu need this for? Maybe amdgpu does something we're
> > not doing (yet)?
>
> Basically when we do a CS we need to make sure that the VM used by this
> CS is up to date. For this we walk over the relocation list of BOs and
> check the status of each BO+VM structure.
>
> This is done because we don't want to update all VMs at the same time,
> but rather just those that need the update.
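
(If I'm reading that right, the walk amounts to something like the
sketch below; the structures and the update_page_tables() helper are
invented for illustration and are not the actual amdgpu code.)

#include <linux/list.h>

struct my_bo;

/* one entry per BO+VM pair, kept on the VM's binding list */
struct bo_vm_binding {
        struct list_head vm_node;
        struct my_bo *bo;
        bool needs_update;      /* set when the BO moved or was evicted */
};

struct my_vm {
        struct list_head bindings;
};

/* driver-specific page-table update, declared only so the sketch is
 * self-contained */
static int update_page_tables(struct my_vm *vm, struct my_bo *bo);

static int validate_vm_for_cs(struct my_vm *vm)
{
        struct bo_vm_binding *b;
        int ret;

        /* only the VM used by this CS is touched, and only its
         * stale bindings get their page tables updated */
        list_for_each_entry(b, &vm->bindings, vm_node) {
                if (!b->needs_update)
                        continue;
                ret = update_page_tables(vm, b->bo);
                if (ret)
                        return ret;
                b->needs_update = false;
        }
        return 0;
}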

This seems like a legacy from GL and possibly older Vulkan. Going
forward, Vulkan can't rely on passing lists of objects into the
kernel due to things like buffer_device_address, so I'm not sure we
should optimise for this situation. Moving the tracking list into the
drivers is going to mean a bunch of drivers all carrying the same
boilerplate to do the same thing, just so amdgpu can avoid doing it.

Now, there might be some benchmark somewhere that shows a benefit
from this, and if there is, then we should consider going this way
for all drivers rather than just allowing drivers to choose their own
path here.
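
For reference, the "all mappings backed by a BO in a given VM" lookup
being discussed, done against the common manager, would look roughly
like the sketch below. The iterator and field names are my reading of
this series and may differ in detail; mark_range_stale() is an
invented driver hook.

#include <drm/drm_gem.h>
#include <drm/drm_gpuva_mgr.h>  /* header from this series */

static void mark_range_stale(struct drm_gpuva_manager *mgr,
                             u64 addr, u64 range);

/* walk all mappings backed by one BO, act on those in a given VM */
static void invalidate_bo_mappings_in_vm(struct drm_gem_object *obj,
                                         struct drm_gpuva_manager *mgr)
{
        struct drm_gpuva *va;

        drm_gem_for_each_gpuva(va, obj) {
                if (va->mgr != mgr)
                        continue;       /* mapping lives in another VM */
                mark_range_stale(mgr, va->va.addr, va->va.range);
        }
}

If the tracking list moves into the drivers instead, each of them
ends up open-coding an equivalent of this walk plus the list
maintenance behind it.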

> >
> > Christian - I know you didn't ask for "do it the way amdgpu does",
> > instead you voted for keeping it entirely driver specific. But I think
> > everyone is pretty close and I'm still optimistic that we could just
> > generalize this.
>
> Well, you should *not* necessarily do it like amdgpu does! Basically the
> implementation in amdgpu was driven by requirements, e.g. we need that,
> let's do it like this.
>
> It's perfectly possible that other requirements (e.g. focus on Vulkan)
> lead to a completely different implementation.
>
> It's just that ideally I would like to have an implementation where I
> can apply at least the basics to amdgpu as well.
>

I think we can still do that: either have an option to disable using
the list internally in the gpuva manager, or have the driver keep its
own tracking alongside it. There may still be use cases where amdgpu
can use the gpuva tracking instead of its own.
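
Concretely, the opt-out could be as small as an init-time flag; the
sketch below is hypothetical in both the flag and the exact init
signature, which I'm only approximating from this series.

#include <linux/bits.h>
#include <drm/drm_gpuva_mgr.h>

/* hypothetical: tell the manager not to maintain the per-GEM list
 * of mappings, so the driver can track BO->VA itself */
#define DRM_GPUVA_MANAGER_NO_GEM_LIST   BIT(0)

struct example_vm {
        struct drm_gpuva_manager mgr;
};

static void example_vm_init(struct example_vm *vm, u64 va_size,
                            u64 resv_start, u64 resv_range)
{
        /* argument order approximates the series' init call; the
         * trailing flags argument is the hypothetical addition */
        drm_gpuva_manager_init(&vm->mgr, "example", 0, va_size,
                               resv_start, resv_range,
                               DRM_GPUVA_MANAGER_NO_GEM_LIST);
}

A driver that sets the flag keeps its own tracking alongside; every
other driver gets the common list for free.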

I don't think we should forklift what is pretty likely to be common
code out into every driver that uses this going forward, just to
accommodate an amdgpu design decision made for older stacks.

Dave.

