[Intel-gfx] [PATCH 14/22] drm/i915/gem: Async GPU relocations only
Chris Wilson
chris at chris-wilson.co.uk
Thu Jun 4 13:44:23 UTC 2020
Quoting Matthew Auld (2020-06-04 14:37:40)
> On Thu, 4 Jun 2020 at 11:38, Chris Wilson <chris at chris-wilson.co.uk> wrote:
> >
> > Reduce the 3 relocation paths down to the single path that accommodates
> > all. The primary motivation for this is to guard the relocations with a
> > natural fence (derived from the i915_request used to write the
> > relocation from the GPU).
> >
> > The tradeoff in using async gpu relocations is that it increases latency
> > over using direct CPU relocations, for the cases where the target is
> > idle and accessible by the CPU. The benefit is greatly reduced lock
> > contention and improved concurrency by pipelining.
> >
> > Note that forcing the async gpu relocations does reveal a few issues
> > they have. Firstly, they are visible as writes to gem_busy, causing
> > some buffers to be reported as being written by the GPU even though
> > userspace only reads. Secondly, in combination with the cmdparser, they
> > can cause priority inversions. This appears to be because the work is
> > put into a common workqueue, losing our priority information, and so is
> > executed in FIFO order by the worker, denying us the opportunity to
> > reorder the requests afterwards.
> >
> > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> Reviewed-by: Matthew Auld <matthew.auld at intel.com>
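For reference, the fence guarding mentioned above boils down to roughly
the shape below. This is an illustrative sketch only: the required
object/vma locking and the construction of the MI_STORE_DWORD_IMM batch
are omitted, and sketch_async_reloc is a made-up name, not the real
eb_relocate code.

#include "i915_request.h"
#include "i915_vma.h"

static int sketch_async_reloc(struct intel_context *ce,
			      struct i915_vma *target,
			      struct i915_vma *batch)
{
	struct i915_request *rq;
	int err;

	rq = i915_request_create(ce);
	if (IS_ERR(rq))
		return PTR_ERR(rq);

	/* Order the relocation write after any prior users of the target. */
	err = i915_request_await_object(rq, target->obj, true);
	if (err)
		goto out;

	/*
	 * Mark the target as written by this request: the request's fence
	 * ends up in the object's reservation, so anything touching the
	 * pages afterwards has to wait for the relocation write to land.
	 */
	err = i915_vma_move_to_active(target, rq, EXEC_OBJECT_WRITE);
	if (err)
		goto out;

	/* 'batch' is assumed to hold the MI_STORE_DWORD_IMM relocation writes. */
	err = rq->engine->emit_bb_start(rq, batch->node.start, PAGE_SIZE, 0);
out:
	i915_request_add(rq);
	return err;
}

The point is simply that the request doing the write supplies the fence,
so we no longer have to idle the target before the relocation is applied.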
Fwiw, if anyone else is as concerned about the priority inversions via
the global system workqueues as I am, we need to teach the CPU scheduler
about our priorities. I am considering per-CPU kthreads and plugging
them into our scheduling backend. That should then be applicable to
all our async tasks (clflushing, binding, pages, random other tasks).
The devil is in the details of course.
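Very roughly, the kthread side might look like the sketch below. Nothing
here exists today: the generic kthread_worker API is real, but the i915
naming is invented and the actual hook into our scheduling backend,
where the priority handling would live, is left out entirely.

#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/percpu.h>

/* One dedicated worker kthread per CPU, so we control its priority. */
static DEFINE_PER_CPU(struct kthread_worker *, i915_task_worker);

static int i915_task_workers_init(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		struct kthread_worker *w;

		w = kthread_create_worker_on_cpu(cpu, 0, "i915/task:%d", cpu);
		if (IS_ERR(w))
			return PTR_ERR(w);

		per_cpu(i915_task_worker, cpu) = w;
	}

	return 0;
}

/* Queue an async task (clflush, bind, get-pages, ...) on the local worker. */
static void i915_task_queue(struct kthread_work *work)
{
	kthread_queue_work(this_cpu_read(i915_task_worker), work);
}

The interesting part, of course, is replacing the worker's plain FIFO
list with something that consults our request priorities -- which is
exactly where the details bite.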
-Chris