[Mesa-dev] [PATCH v3 8/8] anv: Do relocations in userspace before execbuf ioctl
Jason Ekstrand
jason at jlekstrand.net
Thu Nov 3 21:19:01 UTC 2016
On Thu, Nov 3, 2016 at 1:53 AM, Chris Wilson <chris at chris-wilson.co.uk>
wrote:
> On Wed, Nov 02, 2016 at 01:58:43PM -0700, Jason Ekstrand wrote:
> > +/**
> > + * This function applies the relocation for a command buffer and writes the
> > + * actual addresses into the buffers as per what we were told by the kernel
> > + * on the previous execbuf2 call. This should be safe to do because, for
> > + * each relocated address, we have two cases:
> > + *
> > + * 1) The target BO is inactive (as seen by the kernel). In this case, it is
> > + *    not in use by the GPU so updating the address is 100% ok. It won't be
> > + *    in-use by the GPU (from our context) again until the next execbuf2
> > + *    happens. If the kernel decides to move it in the next execbuf2, it
> > + *    will have to do the relocations itself, but that's ok because it
> > + *    should have all of the information needed to do so.
> > + *
> > + * 2) The target BO is active (as seen by the kernel). In this case, it
> > + *    hasn't moved since the last execbuffer2 call because GTT shuffling
> > + *    *only* happens inside the execbuffer2 ioctl. Since the target BO
> > + *    hasn't moved, our anv_bo::offset exactly matches the BO's GTT address
> > + *    and the relocated value we are writing into the BO will be the same
> > + *    as the value that is already there.
>
> This is slightly misleading.
>
> "In this case, it hasn't moved since the last execbuffer2 call because
> GTT shuffling *only* happens when the BO is idle. (From our perspective,
> it only happens inside the execbuffer2 ioctl, but the shuffling may be
> triggered by another ioctl, with full-ppgtt this is limited to only
> execbuffer2 ioctls on the same context, or memory pressure.)"
>
Right. Without full ppgtt, things may get evicted because of another
process, but that means that a) the buffer isn't busy at that time and b)
the kernel will know it moved something around and will do the relocations
anyway.
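To make the two cases concrete, what userspace relocation processing amounts to is something like the following sketch (toy structs for illustration, not our actual anv_reloc_list_apply signature):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-ins for the real anv structures; names are illustrative only. */
struct toy_reloc {
   size_t   offset;          /* byte offset of the address within the BO map */
   uint64_t target_offset;   /* last GTT address the kernel gave the target */
   uint64_t delta;           /* constant to add to the target address */
};

/* Write the presumed addresses directly into the mapped buffer, exactly
 * what the kernel would otherwise do during execbuffer2 relocation
 * processing. */
static void
toy_apply_relocs(void *bo_map, const struct toy_reloc *relocs, size_t count)
{
   for (size_t i = 0; i < count; i++) {
      uint64_t address = relocs[i].target_offset + relocs[i].delta;
      memcpy((char *)bo_map + relocs[i].offset, &address, sizeof(address));
   }
}
```

In case 1 the write is safe because the BO is idle; in case 2 it writes back the value that's already there.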
> So now, the slightly more open-ended question...
>
> Addresses are local to a context. From my understanding the
> VkQueueFamily corresponds to the render engine, or media engine, or
> blitter (or maybe compute on top of render?)
Yes, that's correct.
> and the application may
> create multiple VkQueues in each family. (Aiui, at the moment this is
> limited to 1 queue.)
>
Yes, we do limit it to 1 queue and will continue to do so until preemption
gets sorted out. If we support multiple queues, they have to be able to
truly run in parallel with VkSemaphore dependencies between them, possibly
going both ways. We can't support a blit queue because the blitter isn't
capable of handling things like W-tiling, which would need to be supported
on a DMA queue.
> The reloc tables must be local to each queue (since the addresses
> contained within only match the local context). What I am more worried
> about is whether state may be shared between contexts, and whether
> active state have the same target address in different contexts (i.e.
> the requirement above that you only need to relocate idle state). Whilst
> you only have one context, it is a moot point - I'd just like to see
> some acknowledgement that surfaces may have different addresses in
> different contexts and how you want to handle that.
>
Once we get into the world of multiple queues, I think we'll be using
softpin or SVM and relocations will be a thing of the past. Thanks for the
heads up though!
> > + *
> > + * There is also a possibility that the target BO is active but the exact
> > + * RENDER_SURFACE_STATE object we are writing the relocation into isn't
> > + * in use. In this case, the address currently in the RENDER_SURFACE_STATE
> > + * may be stale but it's still safe to write the relocation because that
> > + * particular RENDER_SURFACE_STATE object isn't in-use by the GPU and
> > + * won't be until the next execbuf2 call.
> > + *
> > + * By doing relocations on the CPU, we can tell the kernel that it doesn't
> > + * need to bother. We want to do this because the surface state buffer is
> > + * used by every command buffer so, if the kernel does the relocations, it
> > + * will always be busy and the kernel will always stall. This is also
> > + * probably the fastest mechanism for doing relocations since the kernel
> > + * would have to make a full copy of all the relocation lists.
> > + */
> > +static bool
> > +relocate_cmd_buffer(struct anv_cmd_buffer *cmd_buffer)
> > +{
> > +   static int userspace_relocs = -1;
> > +   if (userspace_relocs < 0)
> > +      userspace_relocs = env_var_as_boolean("ANV_USERSPACE_RELOCS", true);
> > +   if (!userspace_relocs)
> > +      return false;
> > +
> > +   /* First, we have to check to see whether or not we can even do the
> > +    * relocation. New buffers which have never been submitted to the
> > +    * kernel don't have a valid offset so we need to let the kernel do
> > +    * relocations so that we can get offsets for them. On future execbuf2
> > +    * calls, those buffers will have offsets and we will be able to skip
> > +    * relocating. Invalid offsets are indicated by
> > +    * anv_bo::offset == (uint64_t)-1.
> > +    */
>
> Something to consider is just randomly assigning an address and using
> it. The kernel will relocate if it wants to, but in future the kernel
> will use your address as a hint and try to bind the object there (if
> that space is available). Like softpinning, but without failing with
> -EINVAL if the space is not available. Or you may of course do full
> client side address allocation under full-ppgtt (the hybrid scheme is
> still useful elsewhere). The advantage is avoiding the stall on first
> state use. The disadvantage is that until that patch lands, you walk the
> relocations for no gain (otoh, since the kernel will take its own sweet
> time doing the same, the inefficiency here will hopefully be negligible.)
>
Does that really work? My reading of the kernel sources indicates that
NO_RELOC means the kernel will just assume that they match what it gave you
in the last execbuf2. It never checks that the offsets match unless it had
to move something in the GTT. If, on the other hand, you mark the object
as PINNED and the offsets don't match, it will get flagged in
eb_vma_misplaced, remapped, and the relocations *will* trigger.
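For what it's worth, the fallback decision in the loop below boils down to a check like this (toy types for illustration; the flag value mirrors I915_EXEC_NO_RELOC from i915_drm.h):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

#define TOY_EXEC_NO_RELOC (1u << 11)  /* mirrors I915_EXEC_NO_RELOC */

/* Decide whether the kernel can be told to skip relocations: every BO must
 * already have a valid offset from a previous execbuf2 call.  (uint64_t)-1
 * marks a BO that has never been bound by the kernel. */
static bool
toy_can_skip_kernel_relocs(const uint64_t *bo_offsets, size_t count,
                           uint32_t *execbuf_flags)
{
   for (size_t i = 0; i < count; i++) {
      if (bo_offsets[i] == (uint64_t)-1)
         return false;   /* fresh BO: fall back to kernel relocations */
   }
   *execbuf_flags |= TOY_EXEC_NO_RELOC;
   return true;
}
```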
> > +   bool offsets_changed = false;
> > +   for (uint32_t i = 0; i < cmd_buffer->execbuf2.bo_count; i++) {
> > +      struct anv_bo *bo = cmd_buffer->execbuf2.bos[i];
> > +
> > +      if (bo->offset == (uint64_t)-1)
> > +         return false;
> > +
> > +      if (cmd_buffer->execbuf2.objects[i].offset != bo->offset)
> > +         offsets_changed = true;
> > +   }
> > +
> > +   /* Since surface states are shared between command buffers and we don't
> > +    * know what order they will be submitted to the kernel, we don't know
> > +    * what address is actually written in the surface state object at any
> > +    * given time. The only option is to always relocate them.
> > +    */
> > +   anv_reloc_list_apply(&cmd_buffer->surface_relocs, cmd_buffer->device,
> > +                        &cmd_buffer->device->surface_state_block_pool.bo,
> > +                        true /* always relocate surface states */);
>
> Worth keeping an atomic_t surface_state_reloc_serial ? (later!)
>
Possibly... But my gut tells me that the case where we can actually re-use
the addresses in the surface state buffer is fairly rare.
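If we ever do pursue it, a hypothetical sketch of the serial idea (none of these names exist in anv today) could look like:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical: bump a serial every time the surface state BO's address
 * changes, and skip re-applying the surface relocations when a command
 * buffer has already applied them at the current serial. */
static atomic_uint_fast32_t surface_state_reloc_serial = 1;

static bool
toy_surface_relocs_up_to_date(uint_fast32_t *cmd_buffer_serial)
{
   uint_fast32_t current = atomic_load(&surface_state_reloc_serial);
   if (*cmd_buffer_serial == current)
      return true;               /* addresses unchanged: skip the walk */
   *cmd_buffer_serial = current; /* caller re-applies, then is up to date */
   return false;
}
```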
> > +   if (offsets_changed) {
> > +      /* Since we own all of the batch buffers, we know what values are
> > +       * stored in the relocated addresses and only have to update them
> > +       * if the offsets have changed.
> > +       */
> > +      struct anv_batch_bo **bbo;
> > +      u_vector_foreach(bbo, &cmd_buffer->seen_bbos) {
> > +         anv_reloc_list_apply(&(*bbo)->relocs,
> > +                              cmd_buffer->device, &(*bbo)->bo, false);
> > +      }
> > +   }
> > +
> > +   for (uint32_t i = 0; i < cmd_buffer->execbuf2.bo_count; i++) {
> > +      struct anv_bo *bo = cmd_buffer->execbuf2.bos[i];
> > +
> > +      cmd_buffer->execbuf2.objects[i].offset = bo->offset;
> > +   }
> > +
> > +   return true;
> > +}
> > +
> > +
> >  VkResult
> >  anv_cmd_buffer_execbuf(struct anv_device *device,
> >                         struct anv_cmd_buffer *cmd_buffer)
> >  {
> > -   /* Since surface states are shared between command buffers and we don't
> > -    * know what order they will be submitted to the kernel, we don't know
> > -    * what address is actually written in the surface state object at any
> > -    * given time. The only option is to set a bogus presumed offset and
> > -    * let relocations do their job.
> > -    */
> > -   for (size_t i = 0; i < cmd_buffer->surface_relocs.num_relocs; i++)
> > -      cmd_buffer->surface_relocs.relocs[i].presumed_offset = -1;
> > +   /* Make a copy of the execbuffer2 so we can alter the flags field */
> > +   struct drm_i915_gem_execbuffer2 execbuf = cmd_buffer->execbuf2.execbuf;
> > +
> > +   if (relocate_cmd_buffer(cmd_buffer)) {
> > +      /* If we were able to successfully relocate everything, tell the
> > +       * kernel that it can skip doing relocations.
> > +       */
>
> /* The requirement for using NO_RELOC is:
> *
> * The addresses written in the objects must match the corresponding
> * reloc.presumed_offset which in turn must match the corresponding
> * execobject.offset.
> *
> * To avoid stalling, execobject.offset should match the current
> * address of that object within the active context.
>
Actually, from my reading of the i915 sources, the kernel doesn't check
this. It's your responsibility to ensure that they match the addresses
returned by the previous execbuf2 ioctl.
> *
> * If we were able to successfully relocate everything, tell the kernel
> * that it can skip doing relocations.
> */
>
I'll copy+paste your comment (with my correction).
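For clarity, the contract we have to uphold ourselves can be written as a check (toy structures for illustration; as noted, the kernel does not actually run this for us):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

/* Toy model of the NO_RELOC contract: for every relocation, the presumed
 * offset recorded in the reloc entry must match the offset we report for
 * the target object in the execbuffer2 object list, which in turn must be
 * the address the kernel returned on the previous execbuf2. */
struct toy_exec_object { uint64_t offset; };
struct toy_reloc_entry { uint32_t target_index; uint64_t presumed_offset; };

static bool
toy_no_reloc_contract_holds(const struct toy_exec_object *objects,
                            const struct toy_reloc_entry *relocs,
                            size_t num_relocs)
{
   for (size_t i = 0; i < num_relocs; i++) {
      if (relocs[i].presumed_offset != objects[relocs[i].target_index].offset)
         return false;
   }
   return true;
}
```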
> > +      execbuf.flags |= I915_EXEC_NO_RELOC;
> > +   } else {
> > +      /* In the case where we fall back to doing kernel relocations, we
> > +       * need to ensure that the relocation list is valid. All relocations
> > +       * on the batch buffers are already valid and kept up-to-date. Since
> > +       * surface states are shared between command buffers and we don't
> > +       * know what order they will be submitted to the kernel, we don't
> > +       * know what address is actually written in the surface state object
> > +       * at any given time. The only option is to set a bogus presumed
> > +       * offset and let the kernel relocate them.
> > +       */
> > +      for (size_t i = 0; i < cmd_buffer->surface_relocs.num_relocs; i++)
> > +         cmd_buffer->surface_relocs.relocs[i].presumed_offset = -1;
> > +   }
> >
> > -   return anv_device_execbuf(device, &cmd_buffer->execbuf2.execbuf,
> > -                             cmd_buffer->execbuf2.bos);
> > +   return anv_device_execbuf(device, &execbuf, cmd_buffer->execbuf2.bos);
> >  }
> > diff --git a/src/intel/vulkan/anv_device.c b/src/intel/vulkan/anv_device.c
> > index c40598c..a650212 100644
> > --- a/src/intel/vulkan/anv_device.c
> > +++ b/src/intel/vulkan/anv_device.c
> > @@ -1097,6 +1097,8 @@ VkResult anv_QueueSubmit(
> >     struct anv_device *device = queue->device;
> >     VkResult result = VK_SUCCESS;
>
> /* Take a lock on all shared surface_state in order to perform
>  * relocations. */
> > +   pthread_mutex_lock(&device->mutex);
>
> I think locking inside Vulkan is so rare, that it is worth explaining :)
>
Fair enough.
> -Chris
>
> --
> Chris Wilson, Intel Open Source Technology Centre
>