[Intel-gfx] [RFC v2 02/12] drm/i915/svm: Runtime (RT) allocator support

Niranjana Vishwanathapura niranjana.vishwanathapura at intel.com
Wed Dec 18 23:25:57 UTC 2019


On Tue, Dec 17, 2019 at 12:01:26PM -0600, Jason Ekstrand wrote:
>   On Sun, Dec 15, 2019 at 10:24 PM Niranjan Vishwanathapura
>   <niranjana.vishwanathapura at intel.com> wrote:
>
>     On Sat, Dec 14, 2019 at 10:31:37AM +0000, Chris Wilson wrote:
>     >Quoting Jason Ekstrand (2019-12-14 00:36:19)
>     >> On Fri, Dec 13, 2019 at 5:24 PM Niranjan Vishwanathapura <
>     >> niranjana.vishwanathapura at intel.com> wrote:
>     >>
>     >>     On Fri, Dec 13, 2019 at 04:58:42PM -0600, Jason Ekstrand wrote:
>     >>     >
>     >>     >     +/**
>     >>     >     + * struct drm_i915_gem_vm_bind
>     >>     >     + *
>     >>     >     + * Bind an object in a vm's page table.
>     >>     >
>     >>     >   First off, this is something I've wanted for a while for
>     >>     >   Vulkan, it's just never made its way high enough up the
>     >>     >   priority list.  However, it's going to have to come one way
>     >>     >   or another soon.  I'm glad to see kernel API for this being
>     >>     >   proposed.
>     >>     >   I do, however, have a few high-level comments/questions about
>     >>     >   the API:
>     >>     >    1. In order to be useful for sparse memory support, the API
>     >>     >   has to go the other way around so that it binds a VA range to
>     >>     >   a range within the BO.  It also needs to be able to handle
>     >>     >   overlapping where two different VA ranges may map to the same
>     >>     >   underlying bytes in the BO.  This likely means that unbind
>     >>     >   needs to also take a VA range and only unbind that range.
>     >>     >    2. If this is going to be useful for managing GL's address
>     >>     >   space where we have lots of BOs, we probably want it to take
>     >>     >   a list of ranges so we aren't making one ioctl for each thing
>     >>     >   we want to bind.
>     >>
>     >>     Hi Jason,
>     >>
>     >>     Yah, some of these requirements came up.
>     >>
>     >>  
>     >> Yes, I have raised them every single time an API like this has come
>     >> across my e-mail inbox for years and they continue to get ignored.
>     >> Why are we landing an API that we know isn't the API we want,
>     >> especially when it's pretty obvious roughly what the API we want is?
>     >> It may be less time in the short term, but long-term it means two
>     >> ioctls and two implementations in i915, IGT tests for both code
>     >> paths, and code in all UMDs to call one or the other depending on
>     >> what kernel you're running on, and we have to maintain all that code
>     >> going forward forever.  Sure, that's a price we pay today for a
>     >> variety of things but that's because they all seemed like the right
>     >> thing at the time.  Landing the wrong API when we know it's the
>     >> wrong API seems foolish.
>     >
>     >Exactly. This is not even close to the uAPI we need. Reposting an RFC
>     >without taking in the concerns last time (or the time before that, or
>     >the time before that...) suggests that you aren't really requesting for
>     >comments at all.
>
>     Thanks Jason for the detailed explanation.
>     Chris, all comments and guidance are much appreciated :)
>
>     I haven't looked in detail, but my concern is that implementing
>     partial object binding (offset, length) from vma down to [un]binding
>     in ppgtt might be a lot of work to include in this SVM patch series.
>     I believe we need the partial object binding in non-SVM scenario
>     as well?
>
>   Yes, the Vulkan APIs require both partial binding and aliasing.
>   It may be worth pointing out that we're already doing some of this stuff
>   today, although in a rather backwards way.  Our non-softpin model for
>   Vulkan uses a memfd which we then map in userspace and turn into a BO via
>   userptr.  Due to the way we handle threading in the driver, we end up with
>   multiple BOs pointing to the same overlapping range in the memfd and hence
>   the same pages.  That doesn't mean that everything in the path is already
>   set up for what you need but the VA -> page mappings should be.  Also,
>   avoiding these kinds of shenanigans is exactly why we want a "real" kernel
>   API for this. :-)
>    

Ok, thanks Jason for the explanation.
Will look into supporting this here.
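
To make sure I understand the aliasing requirement, something like the
below is what I have in mind, using the drm_i915_gem_vm_bind_va struct
proposed further down. This is only a sketch; DRM_IOCTL_I915_GEM_VM_BIND
is a placeholder name and not real uAPI yet.

/* Sketch: two GPU VA ranges aliasing overlapping pages of one BO,
 * as Vulkan sparse binding needs.  Placeholder ioctl name. */
struct drm_i915_gem_vm_bind_va vas[2] = {
        {
                .start  = 0x100000,   /* first GPU VA */
                .offset = 0x00000,    /* BO bytes 0..128K */
                .length = 0x20000,
                .type   = I915_GEM_VM_BIND_SVM_OBJ,
                .handle = bo_handle,
        },
        {
                .start  = 0x200000,   /* second GPU VA */
                .offset = 0x10000,    /* BO bytes 64K..192K, overlapping */
                .length = 0x20000,
                .type   = I915_GEM_VM_BIND_SVM_OBJ,
                .handle = bo_handle,  /* same BO, overlapping pages */
        },
};

struct drm_i915_gem_vm_bind bind = {
        .vm_id    = vm_id,
        .num_vas  = 2,
        .bind_vas = vas,
};

drmIoctl(fd, DRM_IOCTL_I915_GEM_VM_BIND, &bind);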

>
>     Ok, let me change the interface as below.
>
>     struct drm_i915_gem_vm_bind_va {
>             /** VA start to bind */
>             __u64 start;
>
>             /** Offset in Object to bind for I915_GEM_VM_BIND_SVM_OBJ type */
>             __u64 offset;
>
>             /** VA length to [un]bind */
>             __u64 length;
>
>             /** Type of memory to [un]bind */
>             __u32 type;
>     #define I915_GEM_VM_BIND_SVM_OBJ      0
>     #define I915_GEM_VM_BIND_SVM_BUFFER   1
>
>             /** Object handle to [un]bind for I915_GEM_VM_BIND_SVM_OBJ type */
>             __u32 handle;
>
>             /** Flags */
>             __u32 flags;
>     #define I915_GEM_VM_BIND_UNBIND      (1 << 0)
>     #define I915_GEM_VM_BIND_READONLY    (1 << 1)
>     };
>
>     struct drm_i915_gem_vm_bind {
>             /** vm to [un]bind */
>             __u32 vm_id;
>
>             /** number of VAs to bind */
>             __u32 num_vas;
>
>             /** Array of VAs to bind */
>             struct drm_i915_gem_vm_bind_va *bind_vas;
>
>             /** User extensions */
>             __u64 extensions;
>     };
>
>     When synchronization control is added as an extension, it will apply
>     to all VAs in the array.
>     Does this look good?
>
>   Yes, I think that header looks good.  It's probably fine if
>   synchronization comes later.
>   I have two more questions (more for my own education than anything):
>    1. What is the difference between a SVM object and a buffer?

An SVM object is a GEM BO. By buffer I mean system allocated memory (say, from malloc()).
I have some documentation in patch 01:
https://lists.freedesktop.org/archives/intel-gfx/2019-December/223481.html
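
For illustration, the two flavors would be filled in like below. The SVM
buffer semantics here (binding malloc'd memory at its CPU address so that
GPU VA == CPU VA, with no GEM handle involved) follow the documentation
in patch 01; treat this snippet as a sketch, not final uAPI.

/* GEM BO flavor: handle + offset identify the backing pages. */
struct drm_i915_gem_vm_bind_va obj_va = {
        .start  = gpu_va,
        .offset = 0,
        .length = size,
        .type   = I915_GEM_VM_BIND_SVM_OBJ,
        .handle = bo_handle,
};

/* System memory flavor: malloc'd memory bound at its CPU address,
 * so GPU VA == CPU VA (shared virtual memory); handle is unused. */
void *ptr = aligned_alloc(page_size, size);
struct drm_i915_gem_vm_bind_va buf_va = {
        .start  = (__u64)(uintptr_t)ptr,
        .offset = 0,
        .length = size,
        .type   = I915_GEM_VM_BIND_SVM_BUFFER,
};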

>    2. I see a vm_id but there is no context.  What (if any) are the
>   synchronization guarantees between the VM_BIND ioctl and EXECBUF?  If I
>   VM_BIND followed by EXECBUF is it guaranteed to happen in that order? 
>   What if I EXECBUF and then VM_BIND to unbind something?  If I VM_BIND
>   while an execbuf is happening but I have some way of delaying the GPU work
>   from the CPU and I unblock it once the VM_BIND comes back, is that ok?
>   If those questions are answered by other patches, feel free to just point
>   me at them instead of answering in detail here.

Binding/unbinding is w.r.t. a device page table (hence the vm_id).
With the current implementation, yes, an EXECBUF issued after a VM_BIND will
see those binds and will ensure that those VA ranges are bound in the device
page table.
VM_BIND to bind/unbind after issuing an EXECBUF that is already running
(e.g., an endless batch buffer) is not currently supported by this patch.
But yes, I expect your scenario above should be allowed and supported
eventually.
I agree we need to set the right expectations here for some of the use cases.
Let me look into the synchronization between the two paths a bit more and
sync with you.
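
In other words, what the current patch guarantees is just the simple
ordering below; the unbind-under-a-running-batch case is what still needs
to be worked out. DRM_IOCTL_I915_GEM_VM_BIND is again a placeholder name.

/* Supported today: binds submitted before an execbuf are visible to it. */
drmIoctl(fd, DRM_IOCTL_I915_GEM_VM_BIND, &bind);
drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf);  /* sees the binds */

/* Not yet supported: VM_BIND/unbind against a VA range that an
 * already-running (e.g., endless) batch buffer is using. */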

Thanks,
Niranjana

>   --Jason

