[Intel-gfx] [PATCH 49/49] drm/i915/bdw: Document execlists and logical ring contexts

oscar.mateo at intel.com
Thu Mar 27 19:00:18 CET 2014


From: Oscar Mateo <oscar.mateo at intel.com>

Document i915_lrc.c with some notes on execlists and logical ring contexts

Signed-off-by: Thomas Daniel <thomas.daniel at intel.com>

v2: Add notes on logical ring context creation.

Signed-off-by: Oscar Mateo <oscar.mateo at intel.com>
---
 drivers/gpu/drm/i915/i915_lrc.c | 78 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_lrc.c b/drivers/gpu/drm/i915/i915_lrc.c
index 025dae7..521abe9 100644
--- a/drivers/gpu/drm/i915/i915_lrc.c
+++ b/drivers/gpu/drm/i915/i915_lrc.c
@@ -33,8 +33,86 @@
  * These expanded contexts enable a number of new abilities, especially
  * "Execlists" (also implemented in this file).
  *
+ * One of the main differences with the legacy HW contexts is that logical
+ * ring contexts incorporate many more things into the context's state, such
+ * as the PDPs or the ringbuffer control registers.
+ *
+ * Regarding the creation of contexts, we had before:
+ *
+ * - One global default context.
+ * - One local default context for each opened fd.
+ * - One extra context for each context create ioctl call.
+ *
+ * Now that ringbuffers belong to the context (and not to the engine, as
+ * before) and that contexts are uniquely tied to a given engine (and cannot
+ * be reused across engines, as before), we need (as sketched below):
+ *
+ * - One global default context for each engine.
+ * - Up to "no. of engines" local default contexts for each opened fd.
+ * - Up to "no. of engines" extra local contexts for each context create ioctl.
+ *
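+ * A rough sketch of the resulting per-fd bookkeeping; the names below are
+ * purely illustrative and do not match the actual driver structures:
+ *
+ *    #define MY_NUM_ENGINES 4        // hypothetical engine count
+ *
+ *    struct my_lr_context {
+ *        void *ring_obj;             // per-context ringbuffer backing object
+ *        void *state_obj;            // logical ring context image
+ *    };
+ *
+ *    struct my_file_priv {
+ *        // Up to one local default context per engine, created lazily
+ *        struct my_lr_context *lr_default[MY_NUM_ENGINES];
+ *    };
+ *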
+ * Given that, at creation time, we don't yet know which engine is going to
+ * use a non-global context, we have implemented a deferred creation of LR
+ * contexts: the local default context starts its life as a hollow or blank
+ * holder, which only gets populated once we receive an execbuffer ioctl on
+ * that fd. If we later receive another execbuffer ioctl for a different
+ * engine, we create a second local default context, and so on. The same
+ * rules apply to contexts created with the context create ioctl.
+ *
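+ * A minimal sketch of that deferred creation, using hypothetical helpers
+ * rather than the real execbuffer path:
+ *
+ *    static int my_get_lr_context(struct my_file_priv *fpriv, int engine_id,
+ *                                 struct my_lr_context **out)
+ *    {
+ *        // Populate the hollow holder on first use on this engine
+ *        if (!fpriv->lr_default[engine_id]) {
+ *            fpriv->lr_default[engine_id] =
+ *                    my_create_lr_context(engine_id);
+ *            if (!fpriv->lr_default[engine_id])
+ *                return -ENOMEM;
+ *        }
+ *        *out = fpriv->lr_default[engine_id];
+ *        return 0;
+ *    }
+ *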
  * Execlists are the new method by which, on gen8+ hardware, workloads are
  * submitted for execution (as opposed to the legacy, ringbuffer-based, method).
+ * This method works as follows:
+ *
+ * When a request is committed, its commands (the BB start and any leading or
+ * trailing commands, like the seqno breadcrumbs) are placed in the ringbuffer
+ * for the appropriate context. The tail pointer in the hardware context is
+ * not updated at this time, but is instead kept by the driver in the
+ * ringbuffer structure. A structure representing this request is added to a
+ * request queue for the appropriate engine: this structure contains a copy
+ * of the context's tail after the request was written to the ringbuffer, and
+ * a pointer to the context itself.
+ *
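+ * The queued element can be pictured roughly like this (illustrative names
+ * again, not the actual driver structure):
+ *
+ *    struct my_execlist_request {
+ *        struct my_lr_context *ctx;    // the context to run
+ *        u32 ctx_id;                   // its submission ID (see below)
+ *        u32 tail;                     // ring tail after this request
+ *        struct list_head link;        // per-engine request queue
+ *    };
+ *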
+ * If the engine's request queue was empty before the request was added, the
+ * queue is processed immediately. Otherwise the queue will be processed during
+ * a context switch interrupt. In any case, elements on the queue will get sent
+ * (in pairs) to the GPU's ExecLists Submit Port (ELSP, for short) with a
+ * globally unique 20-bit context ID (constructed from the fd's ID, our own
+ * context ID and the engine's ID).
+ *
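+ * The exact field widths are an implementation detail; purely as an
+ * illustration, packing the three IDs into a 20-bit value could look like
+ * this (a hypothetical 11/6/3 bit split):
+ *
+ *    static u32 my_make_ctx_id(u32 fd_id, u32 ctx_id, u32 engine_id)
+ *    {
+ *        return ((fd_id & 0x7ff) << 9) |    // bits 19:9
+ *               ((ctx_id & 0x3f) << 3) |    // bits 8:3
+ *               (engine_id & 0x7);          // bits 2:0
+ *    }
+ *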
+ * When execution of a request completes, the GPU updates the context status
+ * buffer with a context complete event and generates a context switch interrupt.
+ * During context switch interrupt handling, the driver examines the context
+ * status events in the context status buffer: for each context complete event,
+ * if the announced ID matches that of the request at the head of the queue,
+ * then that request is retired and removed from the queue.
+ *
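+ * In pseudo-code, and again with hypothetical helpers, the interrupt-time
+ * processing amounts to something like:
+ *
+ *    while (my_csb_read_event(engine, &event)) {
+ *        if (!(event.status & MY_CTX_COMPLETE))
+ *            continue;
+ *        head = my_queue_first(engine);
+ *        if (head && head->ctx_id == event.ctx_id)
+ *            my_retire_request(engine, head);
+ *    }
+ *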
+ * After processing, if any requests were retired and the queue is not empty
+ * then a new execution list can be submitted. The two requests at the front of
+ * the queue are the next to be submitted, but since a context may not occur
+ * twice in an execution list, requests with the same ID as the first must be
+ * combined with it. This is done simply by discarding requests at the head
+ * of the queue until either only one request is left (in which case we use
+ * a NULL second context) or the first two requests have unique IDs.
+ *
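+ * A sketch of that pairing logic, with the same hypothetical helpers:
+ *
+ *    req0 = my_queue_first(engine);
+ *    req1 = my_queue_next(engine, req0);
+ *    // Discard older requests for the same context: the newer tail
+ *    // subsumes the work of the older ones.
+ *    while (req1 && req1->ctx_id == req0->ctx_id) {
+ *        my_queue_remove(engine, req0);
+ *        req0 = req1;
+ *        req1 = my_queue_next(engine, req0);
+ *    }
+ *    // req1 may be NULL here, i.e. a single-element execution list
+ *    my_elsp_submit(engine, req0, req1);
+ *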
+ * By always executing the first two requests in the queue the driver ensures
+ * that the GPU is kept as busy as possible. In the case where a single context
+ * completes but a second context is still executing, the request for the second
+ * context will be at the head of the queue when we remove the first one. This
+ * request will then be resubmitted along with a new request for a different
+ * context, which will cause the hardware to continue executing the second
+ * request and to queue the new one (the GPU detects the condition of a
+ * context being preempted by the same context and optimizes the context
+ * switch flow by not doing a preemption, but merely sampling the new tail
+ * pointer).
+ *
+ * Because the GPU continues to execute while the context switch interrupt is being
+ * handled, there is a race condition where a second context completes while
+ * the completion of the previous one is being handled. This results in the
+ * second context being resubmitted (potentially along with a third), and an
+ * extra context complete event
+ * for that context will occur. The request will be removed from the queue at the
+ * first context complete event, and the second context complete event will not
+ * result in removal of a request from the queue because the IDs of the request
+ * and the event will not match.
+ *
  */
 
 #include <drm/drmP.h>
-- 
1.9.0