[Intel-gfx] [RFC PATCH 1/2] drm/doc/rfc: i915 GuC submission / DRM scheduler
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Thu May 27 10:06:38 UTC 2021
On 27/05/2021 00:33, Matthew Brost wrote:
> Add entry for i915 GuC submission / DRM scheduler integration plan.
> Follow up patch with details of new parallel submission uAPI to come.
>
> v2:
> (Daniel Vetter)
> - Expand explanation of why bonding isn't supported for GuC
> submission
> - CC some of the DRM scheduler maintainers
> - Add priority inheritance / boosting use case
> - Add reasoning for removing in order assumptions
> (Daniel Stone)
> - Add links to priority spec
Where will the outstanding items be tracked? Off the top of my head there
are at least error capture and the open source logging tool. I thought it
would be here, but maybe not.
Regards,
Tvrtko
> Cc: Christian König <christian.koenig at amd.com>
> Cc: Luben Tuikov <luben.tuikov at amd.com>
> Cc: Alex Deucher <alexander.deucher at amd.com>
> Cc: Steven Price <steven.price at arm.com>
> Cc: Jon Bloomfield <jon.bloomfield at intel.com>
> Cc: Jason Ekstrand <jason at jlekstrand.net>
> Cc: Dave Airlie <airlied at gmail.com>
> Cc: Daniel Vetter <daniel.vetter at intel.com>
> Cc: dri-devel at lists.freedesktop.org
> Signed-off-by: Matthew Brost <matthew.brost at intel.com>
> ---
> Documentation/gpu/rfc/i915_scheduler.rst | 85 ++++++++++++++++++++++++
> Documentation/gpu/rfc/index.rst | 4 ++
> 2 files changed, 89 insertions(+)
> create mode 100644 Documentation/gpu/rfc/i915_scheduler.rst
>
> diff --git a/Documentation/gpu/rfc/i915_scheduler.rst b/Documentation/gpu/rfc/i915_scheduler.rst
> new file mode 100644
> index 000000000000..7faa46cde088
> --- /dev/null
> +++ b/Documentation/gpu/rfc/i915_scheduler.rst
> @@ -0,0 +1,85 @@
> +=========================================
> +I915 GuC Submission/DRM Scheduler Section
> +=========================================
> +
> +Upstream plan
> +=============
> +The overall upstream plan for landing GuC submission and integrating the
> +i915 with the DRM scheduler is:
> +
> +* Merge basic GuC submission
> + * Basic submission support for all gen11+ platforms
> + * Not enabled by default on any current platforms but can be enabled via
> +   the enable_guc modparam
> + * Lots of rework will be needed to integrate with the DRM scheduler, so
> +   there is no need to nitpick everything in the code; it just has to be
> +   functional, free of major coding style / layering errors, and must not
> +   regress execlists
> + * Update IGTs / selftests as needed to work with GuC submission
> + * Enable CI on supported platforms for a baseline
> + * Rework / get CI healthy for GuC submission in place as needed
> +* Merge new parallel submission uAPI
> + * The bonding uAPI is completely incompatible with GuC submission and has
> +   severe design issues in general, which is why we want to retire it no
> +   matter what
> + * New uAPI adds I915_CONTEXT_ENGINES_EXT_PARALLEL context setup step
> + which configures a slot with N contexts
> + * After I915_CONTEXT_ENGINES_EXT_PARALLEL a user can submit N batches to
> +   a slot in a single execbuf IOCTL and the batches run on the GPU in
> +   parallel (see the illustrative sketch after this list)
> + * Initially only for GuC submission but execlists can be supported if
> +   needed
> +* Convert the i915 to use the DRM scheduler
> + * GuC submission backend fully integrated with DRM scheduler (a hook
> +   sketch follows this list)
> + * All request queues removed from backend (e.g. all backpressure
> + handled in DRM scheduler)
> + * Resets / cancels hook in DRM scheduler
> + * Watchdog hooks into DRM scheduler
> + * Lots of complexity of the GuC backend can be pulled out once
> +   integrated with the DRM scheduler (e.g. the state machine gets
> +   simpler, locking gets simpler, etc...)
> + * Execlist backend will do the minimum required to hook in the DRM
> + scheduler so it can live next to the fully integrated GuC backend
> + * Legacy interface
> + * Features like timeslicing / preemption / virtual engines would
> + be difficult to integrate with the DRM scheduler and these
> + features are not required for GuC submission as the GuC does
> + these things for us
> + * ROI low on fully integrating into DRM scheduler
> + * Fully integrating would add lots of complexity to DRM
> + scheduler
> + * Port i915 priority inheritance / boosting feature in DRM scheduler
> + * Used for i915 page flip, may be useful to other DRM drivers as
> + well
> + * Will be an optional feature in the DRM scheduler
> + * Remove in-order completion assumptions from DRM scheduler
> + * Even when using the DRM scheduler the backends will handle
> + preemption, timeslicing, etc... so it is possible for jobs to
> + finish out of order
> + * Pull out i915 priority levels and use DRM priority levels
> + * Optimize DRM scheduler as needed
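> +
> +To illustrate the intended parallel submission flow, a hypothetical
> +userspace sketch follows. The extension layout is a guess (the real
> +definition arrives in the follow-up uAPI patch); only
> +i915_user_extension, i915_engine_class_instance and
> +I915_CONTEXT_PARAM_ENGINES below are existing uAPI::
> +
> +    #include <drm/i915_drm.h>
> +
> +    /* Hypothetical extension layout, for illustration only. */
> +    struct hypothetical_parallel_ext {
> +            /* base.name = I915_CONTEXT_ENGINES_EXT_PARALLEL */
> +            struct i915_user_extension base;
> +            __u16 engine_index;   /* slot in the engine map to configure */
> +            __u16 num_contexts;   /* N contexts to run in parallel */
> +            /* the N physical engines backing the slot */
> +            struct i915_engine_class_instance engines[];
> +    };
> +
> +    /*
> +     * 1) Chain the extension into the I915_CONTEXT_PARAM_ENGINES
> +     *    set-param so that slot 'engine_index' is configured with N
> +     *    contexts.
> +     * 2) Submit N batch buffers to that slot with a single
> +     *    DRM_IOCTL_I915_GEM_EXECBUFFER2 call; the batches then run on
> +     *    the GPU in parallel.
> +     */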
> +
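> +A minimal sketch of how the GuC backend could hook into the DRM
> +scheduler, assuming the current struct drm_sched_backend_ops interface;
> +the guc_sched_* names and bodies are illustrative, not actual i915
> +code::
> +
> +    #include <drm/gpu_scheduler.h>
> +
> +    static struct dma_fence *guc_sched_run_job(struct drm_sched_job *job)
> +    {
> +            /*
> +             * Hand the request straight to the GuC. With no request
> +             * queues left in the backend, all backpressure has already
> +             * been handled by the DRM scheduler.
> +             */
> +            return NULL; /* a real backend returns the request's HW fence */
> +    }
> +
> +    static enum drm_gpu_sched_stat
> +    guc_sched_timedout_job(struct drm_sched_job *job)
> +    {
> +            /* Watchdog hook: ask the GuC to reset the hung context. */
> +            return DRM_GPU_SCHED_STAT_NOMINAL;
> +    }
> +
> +    static void guc_sched_free_job(struct drm_sched_job *job)
> +    {
> +            /* Drop the backend's reference to the request. */
> +    }
> +
> +    static const struct drm_sched_backend_ops guc_sched_ops = {
> +            .run_job      = guc_sched_run_job,
> +            .timedout_job = guc_sched_timedout_job,
> +            .free_job     = guc_sched_free_job,
> +    };
> +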
> +New uAPI for basic GuC submission
> +=================================
> +No major changes are required to the uAPI for basic GuC submission. The only
> +change is a new scheduler attribute: I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP.
> +This attribute indicates that the 2k i915 user priority levels are statically
> +mapped into 3 levels as follows:
> +
> +* -1k to -1: low priority
> +* 0: medium priority (default)
> +* 1 to 1k: high priority
> +
> +This is needed because the GuC only has 4 priority bands. The highest priority
> +band is reserved for the kernel. This mapping also aligns with the DRM
> +scheduler priority levels.
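> +
> +A sketch of the static mapping described above (the enum and helper names
> +are illustrative, not the actual i915 internals)::
> +
> +    /* GuC has four priority bands; the highest is kernel-reserved. */
> +    enum example_guc_prio { PRIO_LOW, PRIO_NORMAL, PRIO_HIGH };
> +
> +    /* prio is the i915 user priority in [-1023, 1023], default 0 */
> +    static enum example_guc_prio map_user_prio(int prio)
> +    {
> +            if (prio < 0)
> +                    return PRIO_LOW;     /* -1k to -1 */
> +            if (prio == 0)
> +                    return PRIO_NORMAL;  /* medium */
> +            return PRIO_HIGH;            /* 1 to 1k */
> +    }
> +
> +Userspace can detect this mode by checking for the
> +I915_SCHEDULER_CAP_STATIC_PRIORITY_MAP bit in the bitmask returned by the
> +I915_PARAM_HAS_SCHEDULER getparam.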
> +
> +Spec references:
> +----------------
> +https://www.khronos.org/registry/EGL/extensions/IMG/EGL_IMG_context_priority.txt
> +https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/chap5.html#devsandqueues-priority
> +https://spec.oneapi.com/level-zero/latest/core/api.html#ze-command-queue-priority-t
> +
> +New parallel submission uAPI
> +============================
> +Details to come in a following patch.
> diff --git a/Documentation/gpu/rfc/index.rst b/Documentation/gpu/rfc/index.rst
> index 05670442ca1b..91e93a705230 100644
> --- a/Documentation/gpu/rfc/index.rst
> +++ b/Documentation/gpu/rfc/index.rst
> @@ -19,3 +19,7 @@ host such documentation:
> .. toctree::
>
> i915_gem_lmem.rst
> +
> +.. toctree::
> +
> + i915_scheduler.rst
>