[Intel-gfx] [PATCH 10/31] drm/i915: Fair low-latency scheduling
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Tue Feb 9 09:37:19 UTC 2021
On 08/02/2021 10:52, Chris Wilson wrote:
> diff --git a/drivers/gpu/drm/i915/Kconfig.profile b/drivers/gpu/drm/i915/Kconfig.profile
> index 35bbe2b80596..f1d009906f71 100644
> --- a/drivers/gpu/drm/i915/Kconfig.profile
> +++ b/drivers/gpu/drm/i915/Kconfig.profile
> @@ -1,3 +1,65 @@
> +choice
> + prompt "Preferred scheduler"
> + default DRM_I915_SCHED_VIRTUAL_DEADLINE
> + help
> + Select the preferred method to decide the order of execution.
> +
> + The scheduler is used for two purposes. First to defer unready
> + jobs to not block execution of independent ready clients, so
> + preventing GPU stalls while work waits for other tasks. The second
> + purpose is to decide which task to run next, as well as decide
> + if that task should preempt the currently running task, or if
> + the current task has exceeded its allotment of GPU time and should
> + be replaced.
> +
> + config DRM_I915_SCHED_FIFO
> + bool "FIFO"
> + help
> + No task reordering, tasks are executed in order of readiness.
> + First in, first out.
> +
> + Unready tasks do not block execution of other, independent clients.
> + A client will not be scheduled for execution until all of its
> + prerequisite work has completed.
> +
> + This disables the scheduler and puts it into a pass-through mode.
> +
> + config DRM_I915_SCHED_PRIORITY
> + bool "Priority"
> + help
> + Strict priority ordering, equal priority tasks are executed
> + in order of readiness. Clients are liable to starve other clients,
> + causing uneven execution and excess task latency. High priority
> + clients will preempt lower priority clients and will run
> + uninterrupted.
> +
> + Note that interactive desktops will implicitly perform priority
> + boosting to minimise frame jitter.
> +
> + config DRM_I915_SCHED_VIRTUAL_DEADLINE
> + bool "Virtual Deadline"
> + help
> + A fair scheduler based on MuQSS with priority-hinting.
> +
> + When a task is ready for execution, it is given a quota (from the
> + engine's timeslice) and a virtual deadline. The virtual deadline is
> + derived from the current time and the timeslice scaled by the
> + task's priority. Higher priority tasks are given an earlier
> + deadline and receive a large portion of the execution bandwidth.
> +
> + Requests are then executed in order of deadline completion.
> + Requests with earlier deadlines and higher priority than currently
> + executing on the engine will preempt the active task.
> +
> +endchoice
> +
> +config DRM_I915_SCHED
> + int
> + default 2 if DRM_I915_SCHED_VIRTUAL_DEADLINE
> + default 1 if DRM_I915_SCHED_PRIORITY
> + default 0 if DRM_I915_SCHED_FIFO
> + default -1
Default -1 would mean it asks the user and does not default to deadline?
Implementation-wise it is very neat how you did it, so there is basically
very little cost for the compiled-out options. And the code maintenance
cost of supporting multiple options is pretty trivial as well.
The only cost I can see is potential bug reports if someone picked the
"wrong" scheduler. Which users, or what use cases, do you envisage for not
going with deadline? (I think deadline should be the default.)
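For reference, the deadline derivation described in the help text could
look roughly like this (an illustrative, self-contained sketch, not the
i915 code; the timeslice value, priority range and scaling below are
invented): the deadline is "now" plus the engine timeslice scaled by
priority, so higher priority gives an earlier deadline and hence a larger
share of the execution bandwidth.

#include <stdint.h>
#include <stdio.h>

#define TIMESLICE_NS 1000000ull	/* example 1ms engine timeslice */

/* Higher priority -> smaller scale factor -> earlier virtual deadline.
 * The priority-to-scale mapping here is made up for illustration. */
static uint64_t virtual_deadline(uint64_t now_ns, int prio)
{
	unsigned int shift = 2 - prio;	/* prio in [-2, 2]: x1 .. x16 */

	return now_ns + (TIMESLICE_NS << shift);
}

int main(void)
{
	int prio;

	for (prio = -2; prio <= 2; prio++)
		printf("prio %+d -> deadline %llu ns from now\n",
		       prio, (unsigned long long)virtual_deadline(0, prio));

	return 0;
}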
Then there is the question of how these Kconfig options will interact, or
at least what their semantics would be, considering the GuC.
I think we can modify the Kconfig blurb to say these options only apply to
execlists platforms, once we get a GuC scheduling platform upstream, and
fudge some scheduler mode bits for sysfs reporting in that case.
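Something along these lines, perhaps (a rough sketch only, not an actual
i915 interface; the attribute name, the mode strings and the
using_guc_submission() helper are made up):

#include <linux/device.h>
#include <linux/sysfs.h>

static ssize_t sched_mode_show(struct device *dev,
			       struct device_attribute *attr, char *buf)
{
	const char *mode;

	/* Hypothetical helper: report the firmware scheduler when GuC
	 * submission is in use, otherwise the compiled-in choice. */
	if (using_guc_submission(dev))
		mode = "guc";
	else if (CONFIG_DRM_I915_SCHED >= 2)
		mode = "deadline";
	else if (CONFIG_DRM_I915_SCHED >= 1)
		mode = "priority";
	else
		mode = "fifo";

	return sysfs_emit(buf, "%s\n", mode);
}
static DEVICE_ATTR_RO(sched_mode);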
Regards,
Tvrtko