On Thu, Oct 24, 2019 at 6:40 AM Chris Wilson <chris@chris-wilson.co.uk> wrote:
> Our existing behaviour is to allow contexts and their GPU requests to
> persist past the point of closure until the requests are complete. This
> allows clients to operate in a 'fire-and-forget' manner, where they can
> set up a rendering pipeline, hand it over to the display server, and exit
> immediately. As the rendering pipeline is kept alive until completion,
> the display server (or other consumer) can use the results in the future
> and present them to the user.
> 
> However, not all clients want this persistent behaviour and would prefer
> that their contexts are cleaned up immediately upon closure. This ensures
> that when clients are run without hangchecking, any GPU hang is
> terminated with the process and does not continue to hog resources.
> 
> By defining a context property to allow clients to control persistence
> explicitly, we can remove the blanket advice to disable hangchecking
> that seems to be far too prevalent.

Just to be clear, when you say "disable hangchecking", do you mean
disabling it for all processes via a kernel parameter at boot time or a
sysfs entry or similar? Or is there some mechanism whereby a context can
request no hang checking?

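For what it's worth, the global knob I have in mind is something like the
sketch below -- treat the sysfs path and the helper itself as my
assumption rather than anything in this patch:

/* Hypothetical userspace probe for the global hangcheck setting. */
#include <stdio.h>

static int i915_hangcheck_enabled(void)
{
	FILE *f = fopen("/sys/module/i915/parameters/enable_hangcheck", "r");
	char buf[8] = "";

	if (!f)
		return -1; /* parameter not exposed; cannot tell */

	if (!fgets(buf, sizeof(buf), f))
		buf[0] = '\0';
	fclose(f);

	/* bool modparams read back as "Y" or "N" */
	return buf[0] == 'Y' || buf[0] == '1';
}
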
> The default behaviour for new contexts is the legacy persistence mode.
> New clients will have to opt out of persistence to get immediate cleanup
> on context closure. If the hangcheck modparam is disabled, so is
> persistent context support -- all contexts will be terminated on closure.

What happens to fences when the context is cancelled? Is it the same
behavior as we have today when a GPU hang is detected and a context is
banned?

--Jason

> Testcase: igt/gem_ctx_persistence
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
> Cc: Michał Winiarski <michal.winiarski@intel.com>
> Cc: Jon Bloomfield <jon.bloomfield@intel.com>
> Reviewed-by: Jon Bloomfield <jon.bloomfield@intel.com>
> Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
>  drivers/gpu/drm/i915/gem/i915_gem_context.c   | 50 ++++++++++++++++++-
>  drivers/gpu/drm/i915/gem/i915_gem_context.h   | 15 ++++++
>  .../gpu/drm/i915/gem/i915_gem_context_types.h |  1 +
>  .../gpu/drm/i915/gem/selftests/mock_context.c |  2 +
>  include/uapi/drm/i915_drm.h                   | 15 ++++++
>  5 files changed, 82 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index 55f1f93c0925..0f1bbf5d1a11 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -437,12 +437,39 @@ static void context_close(struct i915_gem_context *ctx)
>  	 * case we opt to forcibly kill off all remaining requests on
>  	 * context close.
>  	 */
> -	if (!i915_modparams.enable_hangcheck)
> +	if (!i915_gem_context_is_persistent(ctx) ||
> +	    !i915_modparams.enable_hangcheck)
>  		kill_context(ctx);
>  
>  	i915_gem_context_put(ctx);
>  }
>  
> +static int __context_set_persistence(struct i915_gem_context *ctx, bool state)
> +{
> +	if (i915_gem_context_is_persistent(ctx) == state)
> +		return 0;
> +
> +	if (state) {
> +		/*
> +		 * Only contexts that are short-lived [that will expire or be
> +		 * reset] are allowed to survive past termination. We require
> +		 * hangcheck to ensure that the persistent requests are healthy.
> +		 */
> +		if (!i915_modparams.enable_hangcheck)
> +			return -EINVAL;
> +
> +		i915_gem_context_set_persistence(ctx);
> +	} else {
> +		/* To cancel a context we use "preempt-to-idle" */
> +		if (!(ctx->i915->caps.scheduler & I915_SCHEDULER_CAP_PREEMPTION))
> +			return -ENODEV;
> +
> +		i915_gem_context_clear_persistence(ctx);
> +	}
> +
> +	return 0;
> +}
> +
>  static struct i915_gem_context *
>  __create_context(struct drm_i915_private *i915)
>  {
> @@ -477,6 +504,7 @@ __create_context(struct drm_i915_private *i915)
>  
>  	i915_gem_context_set_bannable(ctx);
>  	i915_gem_context_set_recoverable(ctx);
> +	__context_set_persistence(ctx, true /* cgroup hook? */);
>  
>  	for (i = 0; i < ARRAY_SIZE(ctx->hang_timestamp); i++)
>  		ctx->hang_timestamp[i] = jiffies - CONTEXT_FAST_HANG_JIFFIES;
> @@ -633,6 +661,7 @@ i915_gem_context_create_kernel(struct drm_i915_private *i915, int prio)
>  		return ctx;
>  
>  	i915_gem_context_clear_bannable(ctx);
> +	i915_gem_context_set_persistence(ctx);
>  	ctx->sched.priority = I915_USER_PRIORITY(prio);
>  
>  	GEM_BUG_ON(!i915_gem_context_is_kernel(ctx));
> @@ -1743,6 +1772,16 @@ get_engines(struct i915_gem_context *ctx,
>  	return err;
>  }
>  
> +static int
> +set_persistence(struct i915_gem_context *ctx,
> +		const struct drm_i915_gem_context_param *args)
> +{
> +	if (args->size)
> +		return -EINVAL;
> +
> +	return __context_set_persistence(ctx, args->value);
> +}
> +
>  static int ctx_setparam(struct drm_i915_file_private *fpriv,
>  			struct i915_gem_context *ctx,
>  			struct drm_i915_gem_context_param *args)
> @@ -1820,6 +1859,10 @@ static int ctx_setparam(struct drm_i915_file_private *fpriv,
>  		ret = set_engines(ctx, args);
>  		break;
>  
> +	case I915_CONTEXT_PARAM_PERSISTENCE:
> +		ret = set_persistence(ctx, args);
> +		break;
> +
>  	case I915_CONTEXT_PARAM_BAN_PERIOD:
>  	default:
>  		ret = -EINVAL;
> @@ -2272,6 +2315,11 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
>  		ret = get_engines(ctx, args);
>  		break;
>  
> +	case I915_CONTEXT_PARAM_PERSISTENCE:
> +		args->size = 0;
> +		args->value = i915_gem_context_is_persistent(ctx);
> +		break;
> +
>  	case I915_CONTEXT_PARAM_BAN_PERIOD:
>  	default:
>  		ret = -EINVAL;
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.h b/drivers/gpu/drm/i915/gem/i915_gem_context.h
> index cfe80590f0ed..18e50a769a6e 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.h
> @@ -76,6 +76,21 @@ static inline void i915_gem_context_clear_recoverable(struct i915_gem_context *c
>  	clear_bit(UCONTEXT_RECOVERABLE, &ctx->user_flags);
>  }
>  
> +static inline bool i915_gem_context_is_persistent(const struct i915_gem_context *ctx)
> +{
> +	return test_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
> +}
> +
> +static inline void i915_gem_context_set_persistence(struct i915_gem_context *ctx)
> +{
> +	set_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
> +}
> +
> +static inline void i915_gem_context_clear_persistence(struct i915_gem_context *ctx)
> +{
> +	clear_bit(UCONTEXT_PERSISTENCE, &ctx->user_flags);
> +}
> +
>  static inline bool i915_gem_context_is_banned(const struct i915_gem_context *ctx)
>  {
>  	return test_bit(CONTEXT_BANNED, &ctx->flags);
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> index fe97b8ba4fda..861d7d92fe9f 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context_types.h
> @@ -137,6 +137,7 @@ struct i915_gem_context {
>  #define UCONTEXT_NO_ERROR_CAPTURE	1
>  #define UCONTEXT_BANNABLE		2
>  #define UCONTEXT_RECOVERABLE		3
> +#define UCONTEXT_PERSISTENCE		4
>  
>  	/**
>  	 * @flags: small set of booleans
> diff --git a/drivers/gpu/drm/i915/gem/selftests/mock_context.c b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
> index 74ddd682c9cd..29b8984f0e47 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/mock_context.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/mock_context.c
> @@ -22,6 +22,8 @@ mock_context(struct drm_i915_private *i915,
>  	INIT_LIST_HEAD(&ctx->link);
>  	ctx->i915 = i915;
>  
> +	i915_gem_context_set_persistence(ctx);
> +
>  	mutex_init(&ctx->engines_mutex);
>  	e = default_engines(ctx);
>  	if (IS_ERR(e))
> diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
> index 63d40cba97e0..b5fd5e4bd16e 100644
> --- a/include/uapi/drm/i915_drm.h
> +++ b/include/uapi/drm/i915_drm.h
> @@ -1572,6 +1572,21 @@ struct drm_i915_gem_context_param {
>  	 * i915_context_engines_bond (I915_CONTEXT_ENGINES_EXT_BOND)
>  	 */
>  #define I915_CONTEXT_PARAM_ENGINES	0xa
> +
> +/*
> + * I915_CONTEXT_PARAM_PERSISTENCE:
> + *
> + * Allow the context and active rendering to survive the process until
> + * completion. Persistence allows fire-and-forget clients to queue up a
> + * bunch of work, hand the output over to a display server and then quit.
> + * If the context is not marked as persistent, upon closing (either via
> + * an explicit DRM_I915_GEM_CONTEXT_DESTROY or implicitly from file closure
> + * or process termination), the context and any outstanding requests will be
> + * cancelled (and exported fences for cancelled requests marked as -EIO).
> + *
> + * By default, new contexts allow persistence.
> + */
> +#define I915_CONTEXT_PARAM_PERSISTENCE	0xb
>  /* Must be kept compact -- no holes and well documented */
>  
>  	__u64 value;
> -- 
> 2.24.0.rc0