[Intel-gfx] [PATCH 00/40] GPU scheduler for i915 driver

John.C.Harrison at Intel.com
Fri Dec 11 05:23:43 PST 2015

From: John Harrison <John.C.Harrison at Intel.com>

Implemented a batch buffer submission scheduler for the i915 DRM driver.

The general theory of operation is that when batch buffers are
submitted to the driver, the execbuffer() code assigns a unique seqno
value and then packages up all the information required to execute the
batch buffer at a later time. This package is given over to the
scheduler which adds it to an internal node list. The scheduler also
scans the list of objects associated with the batch buffer and
compares them against the objects already in use by other buffers in
the node list. If matches are found then the new batch buffer node is
marked as being dependent upon the matching node. The same is done for
the context object. The scheduler also bumps up the priority of such
matching nodes on the grounds that the more dependencies a given batch
buffer has the more important it is likely to be.

The scheduler aims to have a given (tuneable) number of batch buffers
in flight on the hardware at any given time. If fewer than this are
currently executing when a new node is queued, then the node is passed
straight through to the submit function. Otherwise it is simply added
to the queue and the driver returns to userland.

As each batch buffer completes, it raises an interrupt which wakes up
the scheduler. Note that it is possible for multiple buffers to
complete before the IRQ handler gets to run. Further, the seqno values
of the individual buffers are not necessarily sequential, as the
scheduler may have re-ordered their submission. However, the scheduler
keeps the list of executing buffers in order of hardware submission.
Thus it can scan through the list until a matching seqno is found and
then mark all in flight nodes up to and including that point as
completed.

A deferred work queue is also poked by the interrupt handler. When
this wakes up it can do more involved processing such as actually
removing completed nodes from the queue and freeing up the resources
associated with them (internal memory allocations, DRM object
references, context reference, etc.). The work handler also checks the
in flight count and calls the submission code if a new slot has become
available.

When the scheduler's submit code is called, it scans the queued node
list for the highest priority node that has no unmet dependencies.
Note that the dependency calculation is complex as it must take
inter-ring dependencies and potential preemptions into account. Note
also that in the future this will be extended to include external
dependencies such as the Android Native Sync file descriptors and/or
the Linux dma-buf synchronisation scheme.

If a suitable node is found then it is sent to execbuff_final() for
submission to the hardware. The in flight count is then re-checked and
a new node popped from the list if appropriate.

The scheduler also allows high priority batch buffers (e.g. from a
desktop compositor) to jump ahead of whatever is already running if
the underlying hardware supports pre-emption. In this situation, any
work that was pre-empted is returned to the queued list ready to be
resubmitted when no more high priority work is outstanding.

Various IGT tests are in progress to test the scheduler's operation
and will follow.

v2: Updated for changes in struct fence patch series and other changes
to underlying tree (e.g. removal of cliprects). Also changed priority
levels to be signed +/-1023 range and reduced mutex lock usage.

v3: More reuse of cached pointers rather than repeated dereferencing
(David Gordon).

Moved the dependency generation code out to a separate function for
easier readability. Also added in support for the read-read
optimisation.

Major simplification of the DRM file close handler.

Fixed up an overzealous WARN.

Removed unnecessary flushing of the scheduler queue when waiting for a
request to complete.

[Patches against drm-intel-nightly tree fetched 17/11/2015 with struct
fence conversion patches applied]

Dave Gordon (3):
  drm/i915: Updating assorted register and status page definitions
  drm/i915: Cache request pointer in *_submission_final()
  drm/i915: Add scheduling priority to per-context parameters

John Harrison (37):
  drm/i915: Add total count to context status debugfs output
  drm/i915: Explicit power enable during deferred context initialisation
  drm/i915: Prelude to splitting i915_gem_do_execbuffer in two
  drm/i915: Split i915_dem_do_execbuffer() in half
  drm/i915: Re-instate request->uniq because it is extremely useful
  drm/i915: Start of GPU scheduler
  drm/i915: Prepare retire_requests to handle out-of-order seqnos
  drm/i915: Disable hardware semaphores when GPU scheduler is enabled
  drm/i915: Force MMIO flips when scheduler enabled
  drm/i915: Added scheduler hook when closing DRM file handles
  drm/i915: Added scheduler hook into i915_gem_request_notify()
  drm/i915: Added deferred work handler for scheduler
  drm/i915: Redirect execbuffer_final() via scheduler
  drm/i915: Keep the reserved space mechanism happy
  drm/i915: Added tracking/locking of batch buffer objects
  drm/i915: Hook scheduler node clean up into retire requests
  drm/i915: Added scheduler support to __wait_request() calls
  drm/i915: Added scheduler support to page fault handler
  drm/i915: Added scheduler flush calls to ring throttle and idle functions
  drm/i915: Added a module parameter for allowing scheduler overrides
  drm/i915: Support for 'unflushed' ring idle
  drm/i915: Defer seqno allocation until actual hardware submission time
  drm/i915: Added immediate submission override to scheduler
  drm/i915: Add sync wait support to scheduler
  drm/i915: Connecting execbuff fences to scheduler
  drm/i915: Added trace points to scheduler
  drm/i915: Added scheduler queue throttling by DRM file handle
  drm/i915: Added debugfs interface to scheduler tuning parameters
  drm/i915: Added debug state dump facilities to scheduler
  drm/i915: Add early exit to execbuff_final() if insufficient ring space
  drm/i915: Added scheduler statistic reporting to debugfs
  drm/i915: Added seqno values to scheduler status dump
  drm/i915: Add scheduler support functions for TDR
  drm/i915: GPU priority bumping to prevent starvation
  drm/i915: Scheduler state dump via debugfs
  drm/i915: Enable GPU scheduler by default
  drm/i915: Allow scheduler to manage inter-ring object synchronisation

 drivers/gpu/drm/i915/Makefile              |    1 +
 drivers/gpu/drm/i915/i915_debugfs.c        |  283 +++++
 drivers/gpu/drm/i915/i915_dma.c            |    6 +
 drivers/gpu/drm/i915/i915_drv.c            |    9 +
 drivers/gpu/drm/i915/i915_drv.h            |   56 +-
 drivers/gpu/drm/i915/i915_gem.c            |  170 ++-
 drivers/gpu/drm/i915/i915_gem_context.c    |   24 +
 drivers/gpu/drm/i915/i915_gem_execbuffer.c |  350 ++++--
 drivers/gpu/drm/i915/i915_params.c         |    4 +
 drivers/gpu/drm/i915/i915_reg.h            |   30 +-
 drivers/gpu/drm/i915/i915_scheduler.c      | 1640 ++++++++++++++++++++++++++++
 drivers/gpu/drm/i915/i915_scheduler.h      |  175 +++
 drivers/gpu/drm/i915/i915_trace.h          |  215 +++-
 drivers/gpu/drm/i915/intel_display.c       |   10 +-
 drivers/gpu/drm/i915/intel_lrc.c           |  166 ++-
 drivers/gpu/drm/i915/intel_lrc.h           |    1 +
 drivers/gpu/drm/i915/intel_ringbuffer.c    |   47 +-
 drivers/gpu/drm/i915/intel_ringbuffer.h    |   35 +-
 include/uapi/drm/i915_drm.h                |    1 +
 19 files changed, 3057 insertions(+), 166 deletions(-)
 create mode 100644 drivers/gpu/drm/i915/i915_scheduler.c
 create mode 100644 drivers/gpu/drm/i915/i915_scheduler.h

