[PATCH v4] Documentation/gpu: VM_BIND locking document

Boris Brezillon boris.brezillon at collabora.com
Thu Nov 16 13:27:07 UTC 2023


On Thu, 16 Nov 2023 12:48:45 +0100
Thomas Hellström <thomas.hellstrom at linux.intel.com> wrote:

> Hi, Boris,
> 
> Thanks for reviewing. Some comments below:
> 
> On Thu, 2023-11-16 at 10:48 +0100, Boris Brezillon wrote:
> > Hi Thomas,
> > 
> > On Wed, 15 Nov 2023 13:49:37 +0100
> > Thomas Hellström <thomas.hellstrom at linux.intel.com> wrote:
> >   
> > > Add the first version of the VM_BIND locking document which is
> > > intended to be part of the xe driver upstreaming agreement.
> > > 
> > > The document describes and discusses the locking used during exec
> > > functions, eviction and for userptr gpu-vmas. The intention is to use
> > > the same nomenclature as drm-vm-bind-async.rst.
> > > 
> > > v2:
> > > - s/gvm/gpu_vm/g (Rodrigo Vivi)
> > > - Clarify the userptr seqlock with a pointer to mm/mmu_notifier.c
> > >   (Rodrigo Vivi)
> > > - Adjust commit message accordingly.
> > > - Add SPDX license header.
> > > 
> > > v3:
> > > - Large update to align with the drm_gpuvm manager locking
> > > - Add "Efficient userptr gpu_vma exec function iteration" section
> > > - Add "Locking at bind- and unbind time" section.
> > > 
> > > v4:
> > > - Fix tabs vs space errors by untabifying (Rodrigo Vivi)
> > > - Minor style fixes and typos (Rodrigo Vivi)
> > > - Clarify situations where stale GPU mappings occur and how
> > >   access through these mappings is blocked. (Rodrigo Vivi)
> > > - Insert into the toctree in implementation_guidelines.rst
> > > 
> > > Cc: Rodrigo Vivi <rodrigo.vivi at intel.com>
> > > Signed-off-by: Thomas Hellström <thomas.hellstrom at linux.intel.com>
> > > ---
> > >  Documentation/gpu/drm-vm-bind-locking.rst     | 503 ++++++++++++++++++++++
> > >  .../gpu/implementation_guidelines.rst         |   1 +
> > >  2 files changed, 504 insertions(+)
> > >  create mode 100644 Documentation/gpu/drm-vm-bind-locking.rst
> > > 
> > > diff --git a/Documentation/gpu/drm-vm-bind-locking.rst
> > > b/Documentation/gpu/drm-vm-bind-locking.rst
> > > new file mode 100644
> > > index 000000000000..bc701157cb34
> > > --- /dev/null
> > > +++ b/Documentation/gpu/drm-vm-bind-locking.rst
> > > @@ -0,0 +1,503 @@
> > > +.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
> > > +
> > > +===============
> > > +VM_BIND locking
> > > +===============
> > > +
> > > +This document attempts to describe what's needed to get VM_BIND
> > > +locking right, including the userptr mmu_notifier locking. It also
> > > +discusses some optimizations to get rid of the iteration over all
> > > +userptr mappings and external / shared object mappings that is
> > > +needed in the simplest implementation, as well as some implications
> > > +for faulting gpu_vms.
> > > +
> > > +Nomenclature
> > > +============
> > > +
> > > +* ``Context``: GPU execution context.
> > > +* ``gpu_vm``: Abstraction of a virtual GPU address space with
> > > +  meta-data. Typically one per client (DRM file-private), or one
> > > per
> > > +  context.  
> > 
> > Should we mention that it's a driver object, likely inheriting from
> > drm_gpuvm?  
> 
> Yes, I can try to be a bit more drm_gpuvm-centric throughout the
> document, although I want to avoid being too specific due to the
> current rapid rate of change of drm_gpuvm, which might also affect this
> document. I guess I will have to commit for now to update the document
> on each gpuvm series we land...

Well, I'd suggest that whoever changes drm_gpuvm gets to update this
document along the way, or even better, make this documentation part of
the drm_gpuvm doc, so there's no excuse not to update it when drm_gpuvm
is extended.

> >   
> > > +* ``exec function``: An exec function is a function that
> > > revalidates all
> > > +  affected gpu_vmas, submits a GPU command batch and registers the
> > > +  dma_fence representing the GPU command's activity with all
> > > affected
> > > +  dma_resvs. For completeness, although not covered by this
> > > document,
> > > +  it's worth mentioning that an exec function may also be the
> > > +  revalidation worker that is used by some drivers in compute /
> > > +  long-running mode.
> > > +* ``local object``: A GEM object which is local to a gpu_vm.
> > > Shared gem
> > > +  objects also share the gpu_vm's dma_resv.
> > > +* ``shared object``: a.k.a external object: A GEM object which may
> > > be shared
> > > +  by multiple gpu_vms and whose backing storage may be shared with
> > > +  other drivers.  
> > 
> > Should we name that one external object and mentions it's sometimes
> > also called external object. This way it matches the name used in
> > gpuvm
> > implementation (extobj).  
> 
> You mean "... also called shared object"?

Yeah, sorry.

> Sure, can do that.
> >   
> > > +
> > > +
> > > +Locks used and locking orders
> > > +=============================
> > > +
> > > +One of the benefits of VM_BIND is that local GEM objects share the
> > > gpu_vm's
> > > +dma_resv object and hence the dma_resv lock. So even with a huge
> > > +number of local GEM objects, only one lock is needed to make the
> > > exec
> > > +sequence atomic.
> > > +
> > > +The following locks and locking orders are used:
> > > +
> > > +* The ``gpu_vm->lock`` (optionally an rwsem). Protects how the
> > > gpu_vm is
> > > +  partitioned into gpu_vmas. It can also protect the gpu_vm's list
> > > of
> > > +  userptr gpu_vmas. With a CPU mm analogy this would correspond to
> > > the
> > > +  mmap_lock.  
> > 
> > I don't see any drm_gpuvm::lock field in Danilo's latest patchset,
> > so,
> > unless I missed one version, and this lock is actually provided by
> > drm_gpuvm, I would mention this is a driver-specific lock. This
> > comment
> > applies to all the locks you describe here actually (mention which
> > ones
> > are provided by drm_gpuvm, and which ones are driver-specific).  
> 
> These will also be needed by gpuvm when implementing userptr vmas, so I
> can mention that drm_gpuvm currently lacks a userptr implementation and
> that "the locks described below are to be considered driver-specific
> for now".

Sounds good.
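
FWIW, here's roughly how I picture the driver side until drm_gpuvm
grows a userptr implementation. Just a sketch, and all names below are
made up rather than actual drm_gpuvm API:

#include <drm/drm_gpuvm.h>
#include <linux/list.h>
#include <linux/rwsem.h>
#include <linux/spinlock.h>

/* Hypothetical driver-side VM object embedding drm_gpuvm. */
struct driver_gpu_vm {
        struct drm_gpuvm base;          /* drm_gpuvm-managed mapping tree */

        /*
         * The "gpu_vm->lock" from the document: protects how the VM is
         * partitioned into gpu_vmas and the userptr list below
         * (the mmap_lock analogy).
         */
        struct rw_semaphore lock;

        /* userptr gpu_vmas needing (re)validation; driver-specific for now. */
        struct list_head userptr_list;
        spinlock_t userptr_list_lock;
};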

> > > +
> > > +The reason for having a separate gpu_vm rebind list is that there
> > > +might be userptr gpu_vmas that are not mapping a buffer object
> > > that
> > > +also need rebinding.
> > > +
> > > +Eviction
> > > +________
> > > +
> > > +Eviction of one of these local objects will then look similar to
> > > the
> > > +following:
> > > +
> > > +.. code-block:: C
> > > +
> > > +   obj = get_object_from_lru();
> > > +
> > > +   dma_resv_lock(obj->resv);
> > > +   for_each_gpu_vm_bo_of_obj(obj, &gpu_vm_bo)
> > > +           add_gpu_vm_bo_to_evict_list(&gpu_vm_bo, &gpu_vm->evict_list);
> > > +
> > > +   add_dependencies(&eviction_job, &obj->resv);
> > > +   job_dma_fence = gpu_submit(&eviction_job);  
> > 
> > Could you expand a bit on what the eviction job does? On embedded
> > GPUs,
> > where there's no VRAM, we typically don't have a GPU job to tear down
> > GPU mappings. We do have an asynchronous VM_BIND queue though, so I
> > suspect the job here is an async VM_BIND job.
> > 
> > Not asking you to add that to the doc, I'm just trying to figure out
> > how
> > this would map to the mem-reclaim logic in panthor...  
> 
> This would typically be the blit of data from VRAM to system,

Okay, not needed in my case then.

> but async
> jobs here may also include zapping page-tables and TLB flush in case of
> recoverable page-faults.

And that part is synchronous in panthor. Maybe the locking will force
me to make it an asynchronous VM_BIND job. I guess I'll only know when
I get to hook up the shrinker...

> 
> >   
> > > +   add_dma_fence(&obj->resv, job_dma_fence);
> > > +
> > > +   dma_resv_unlock(&obj->resv);
> > > +   put_object(obj);
> > > +
> > > +Note that since the object is local to the gpu_vm, it will share
> > > the gpu_vm's
> > > +dma_resv lock so that ``obj->resv == gpu_vm->resv``.
> > > +The gpu_vm_bos marked for eviction are put on the gpu_vm's evict
> > > list,
> > > +which is protected by ``gpu_vm->resv``, that is always locked
> > > while
> > > +evicting, due to the above equality.
> > > +
> > > +For VM_BIND gpu_vms, gpu_vmas don't need to be unbound before
> > > eviction,
> > > +since the driver must ensure that the eviction blit or copy will
> > > wait
> > > +for GPU idle or depend on all previous GPU activity. Furthermore,
> > > any
> > > +subsequent attempt by the GPU to access freed memory through the
> > > +gpu_vma will be preceded by a new exec function, with a
> > > revalidation
> > > +section which will make sure all gpu_vmas are rebound. The
> > > eviction
> > > +code holding the object's dma_resv while revalidating will ensure
> > > a
> > > +new exec function may not race with the eviction. Note that this
> > > will
> > > +not hold true, however, if only a subsets of vmas are, due to the
> > > +driver implementation, selected for rebinding the next exec
> > > +function.  
> > 
> > This last sentence is hard to follow.
> > 
> > "
> > Note that this will not hold true if only a subset of vmas
> > are selected for rebinding during the next exec call (for instance,
> > due
> > to some driver decision to only partially restore VMAs).
> > "
> >   
> > > Then all vmas *not* selected for rebinding needs to be
> > > +properly unbound before re-enabling GPU access to the VM.  
> > 
> > I think I get the problem, but can we have a use case where partial
> > VMA restoration is useful? I mean, if some VMAs are not needed, we
> > might be able to save MMU page table allocation/setup-time, but given
> > the mess it then is to track those non-live VMAs, I'm wondering if
> > it's
> > leaving the door open for that, unless there's a good reason to do
> > it.  
> 
> This is the use-case Christian has been flagging for OpenGL and
> Media where he warns that the single-vm-memory-overcommit case would
> otherwise make the app crawl.

IIUC, the partial VMA restore is about not restoring all VMAs attached
to a vm_bo, but as soon as you restore one, this makes the BO resident,
so all you're saving here is the extra page table for non-restored VMAs.
I don't think that would significantly help the overcommit use case,
unless you have so many VMAs attached to a single vm_bo that the
amount of extra page tables becomes non-negligible compared to the BO
size itself.

What would really help the overcommit use case is not restoring all
evicted BOs, if some of them are known to be in a range that's not
accessed by a GPU job. In that situation, you can decide to leave
vm_bos in the evicted list if none of their VMAs overlap any of the VM
ranges used by a job.
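
Roughly something like this; purely illustrative, none of these helpers
or fields exist, and how the job's VA ranges are tracked is completely
made up:

static int revalidate_for_job(struct gpu_vm *gpu_vm, struct gpu_job *job)
{
        struct gpu_vm_bo *vm_bo, *next;
        int ret;

        /* gpu_vm->resv is held here, so the evicted list is stable. */
        list_for_each_entry_safe(vm_bo, next, &gpu_vm->evict_list, evict_link) {
                /*
                 * vm_bos whose gpu_vmas fall entirely outside the VA
                 * ranges this job accesses stay on the evicted list.
                 */
                if (!vm_bo_overlaps_job_ranges(vm_bo, &job->va_ranges))
                        continue;

                /* Makes the BO resident and rebinds its gpu_vmas. */
                ret = validate_and_rebind_vm_bo(vm_bo);
                if (ret)
                        return ret;

                list_del_init(&vm_bo->evict_link);
        }

        return 0;
}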

> 
> Generalized, one might want to think of these as groups of (or perhaps
> virtual ranges of) vmas that need to be resident for a single job
> submission. Currently we assume the residency-group <-> vm mapping is
> 1:1, allowing the unbind-before-eviction to be ignored, but I figure
> that, moving forward, when addressing performance problems of real-world
> applications, that may not always be true.

Maybe I don't understand what unbind-before-eviction means. To me it's
the operation of tearing down all VM mappings (unbind) before returning
BO pages to the system (evict). I don't see a situation where the
eviction of a BO doesn't imply killing all VM mappings pointing to this
BO.

> 
> >   
> > > +
> > > +
> > > +Locking with external (or shared) buffer objects
> > > +================================================
> > > +
> > > +Since shared buffer objects may be shared by multiple gpu_vms,
> > > they
> > > +can't share their reservation object with a single gpu_vm, but
> > > will rather
> > > +have a reservation object of their own. The shared objects bound
> > > to a
> > > +gpu_vm using one or many gpu_vmas are therefore typically put on a
> > > +per-gpu_vm list which is protected by the gpu_vm's dma_resv lock.
> > > Once
> > > +the gpu_vm's reservation object  is locked, it is safe to traverse
> > > the  
> > 
> >                                   ^ extra space
> >   
> > > +external object list and lock the dma_resvs of all external
> > > objects.
> > > +
> > > +At eviction time we now need to put the gpu_vm_bos of *all*
> > > gpu_vms a
> > > +shared object is bound to on the gpu_vm's evict list, but we can
> > > no longer
> > > +be certain that we hold the gpu_vm's dma_resv of all the gpu_vms
> > > the
> > > +object is bound to, since at eviction time we only hold the
> > > object's
> > > +private dma_resv. If we have a ww_acquire context at hand at
> > > eviction
> > > +time we could grab those dma_resvs but that could cause
> > > +expensive ww_mutex rollbacks. A simple option is to just mark the
> > > +gpu_vm_bos of the evicted gem object with an ``evicted`` bool that
> > > +is inspected the next time the corresponding gpu_vm evicted list
> > > needs
> > > +to be traversed.  
> > 
> > IIUC, the vm_bo is not added to the evicted list immediately in that
> > case, so I suspect you meant the next time the corresponding gpu_vm
> > *extobj* list needs to be traversed.  
> 
> I like to think of the concept as "lock and validate the list before
> traversing it" and you're right in this case, the validation occurs
> slightly before we actually start looking at the evicted list. But I
> could perhaps rephrase to "...bool that is inspected *before* the next
> time..."

My point was, you detect such evicted vm_bos when iterating the
extobj list, not the evicted list, because the vm_bo won't be in the
evicted list until you've walked extobj and inserted evicted vm_bos into
the evicted list. Even with your rephrasing, it sounds like those are
detected by iterating the evicted list. How about

"
A simple option is to just mark the gpu_vm_bos of the evicted gem
object with an ``evicted`` bool. Those gpu_vm_bos will be moved to
the evicted list next time the extobj list is inspected for vm_bo
revalidation.
"

