[PATCH] drm/i915: Fix potential context UAFs

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Wed Jan 4 09:33:54 UTC 2023


On 03/01/2023 23:49, Rob Clark wrote:
> From: Rob Clark <robdclark at chromium.org>
> 
> gem_context_register() makes the context visible to userspace, at which
> point a separate thread can trigger the I915_GEM_CONTEXT_DESTROY ioctl.
> So we need to ensure that nothing uses the ctx ptr after this.  And we
> need to ensure that adding the ctx to the xarray is the *last* thing
> that gem_context_register() does with the ctx pointer.

Are there any backtraces from oopses, or notes on how this was found, that could be recorded in the commit message?
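
Just to illustrate what I am asking about, the rough, untested shape of reproducer I would expect for this race is below -- the exact ioctl that first reaches finalize_create_context_locked() after CREATE_EXT, the device node and the include path are guesses on my part.

/*
 * Untested sketch: one thread takes the CREATE_EXT + lookup path that
 * ends up in finalize_create_context_locked(), while another hammers
 * DESTROY on the same id. In practice the whole sequence would be run
 * in an outer loop.
 */
#include <fcntl.h>
#include <pthread.h>
#include <sys/ioctl.h>
#include <drm/i915_drm.h>

static int fd;
static __u32 ctx_id;

static void *destroy_thread(void *arg)
{
	struct drm_i915_gem_context_destroy destroy = { .ctx_id = ctx_id };
	int i;

	/* Race DESTROY against the finalization in the other thread. */
	for (i = 0; i < 1000000; i++)
		ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_DESTROY, &destroy);

	return NULL;
}

int main(void)
{
	struct drm_i915_gem_context_create_ext create = {};
	struct drm_i915_gem_context_param param = {
		.param = I915_CONTEXT_PARAM_GTT_SIZE,
	};
	pthread_t thread;

	fd = open("/dev/dri/renderD128", O_RDWR);

	/* Only the proto-context exists after this; finalization is deferred. */
	ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_CREATE_EXT, &create);
	ctx_id = create.ctx_id;
	param.ctx_id = ctx_id;

	pthread_create(&thread, NULL, destroy_thread, NULL);

	/* Context lookup finalizes the proto-context -> the racy window. */
	ioctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_GETPARAM, &param);

	pthread_join(thread, NULL);
	return 0;
}

If it was found with something along those lines, or a syzbot/KASAN report, that would be the useful bit to record.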

> Signed-off-by: Rob Clark <robdclark at chromium.org>

Fixes: a4c1cdd34e2c ("drm/i915/gem: Delay context creation (v3)")
References: 3aa9945a528e ("drm/i915: Separate GEM context construction and registration to userspace")
Cc: <stable at vger.kernel.org> # v5.15+

> ---
>   drivers/gpu/drm/i915/gem/i915_gem_context.c | 24 +++++++++++++++------
>   1 file changed, 18 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> index 7f2831efc798..6250de9b9196 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
> @@ -1688,6 +1688,10 @@ void i915_gem_init__contexts(struct drm_i915_private *i915)
>   	init_contexts(&i915->gem.contexts);
>   }
>   
> +/*
> + * Note that this implicitly consumes the ctx reference, by placing
> + * the ctx in the context_xa.
> + */
>   static void gem_context_register(struct i915_gem_context *ctx,
>   				 struct drm_i915_file_private *fpriv,
>   				 u32 id)
> @@ -1703,10 +1707,6 @@ static void gem_context_register(struct i915_gem_context *ctx,
>   	snprintf(ctx->name, sizeof(ctx->name), "%s[%d]",
>   		 current->comm, pid_nr(ctx->pid));
>   
> -	/* And finally expose ourselves to userspace via the idr */
> -	old = xa_store(&fpriv->context_xa, id, ctx, GFP_KERNEL);
> -	WARN_ON(old);
> -
>   	spin_lock(&ctx->client->ctx_lock);
>   	list_add_tail_rcu(&ctx->client_link, &ctx->client->ctx_list);
>   	spin_unlock(&ctx->client->ctx_lock);
> @@ -1714,6 +1714,10 @@ static void gem_context_register(struct i915_gem_context *ctx,
>   	spin_lock(&i915->gem.contexts.lock);
>   	list_add_tail(&ctx->link, &i915->gem.contexts.list);
>   	spin_unlock(&i915->gem.contexts.lock);
> +
> +	/* And finally expose ourselves to userspace via the idr */
> +	old = xa_store(&fpriv->context_xa, id, ctx, GFP_KERNEL);
> +	WARN_ON(old);

Have you actually seen this hunk being needed, or are you moving it just for good measure? To be clear, it is probably best to move it even if the current placement cannot cause any problems; I am just double-checking whether you had any concrete observations here, while mulling over an easier stable backport if we were to omit it.
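
For the record, my reading of why the store wants to be last, as a generic userspace analogy rather than i915 code (the struct and helper names below are made up for illustration): the object becomes reachable by other threads the moment it lands in the shared table, so every access, and every reference the creator still needs, has to happen before that store.

/*
 * Generic analogy only (not i915 code, names made up): publish into
 * the shared table last, after initialisation and after every
 * reference the creator needs has been taken.
 */
#include <stdatomic.h>
#include <stdlib.h>

struct obj {
	atomic_int refcount;
	int payload;
};

static struct obj *_Atomic table[64];	/* stands in for context_xa */

static void obj_put(struct obj *obj)
{
	if (atomic_fetch_sub(&obj->refcount, 1) == 1)
		free(obj);
}

/* What a racing "destroy" does the instant the slot is populated. */
static struct obj *table_steal(unsigned int id)
{
	return atomic_exchange(&table[id], NULL);
}

static struct obj *obj_create_and_publish(unsigned int id)
{
	struct obj *obj = calloc(1, sizeof(*obj));

	if (!obj)
		return NULL;

	/*
	 * One reference for the table and one for the caller, taken
	 * *before* publishing -- the i915_gem_context_get() move in
	 * the patch.
	 */
	atomic_init(&obj->refcount, 2);
	obj->payload = 42;		/* all initialisation done first */

	/* Publish last: from here on another thread may steal + put. */
	atomic_store(&table[id], obj);

	return obj;			/* safe, we hold our own reference */
}

int main(void)
{
	struct obj *obj = obj_create_and_publish(0);
	struct obj *stolen;

	if (!obj)
		return 1;

	stolen = table_steal(0);	/* the destroyer "wins" the race */
	if (stolen)
		obj_put(stolen);	/* drops the table's reference */

	obj->payload++;			/* still valid: our own reference */
	obj_put(obj);
	return 0;
}

With the store done first instead, the racing table_steal() + obj_put() could free the object while obj_create_and_publish() is still touching it -- which maps onto the list_add()s in gem_context_register() and the get in finalize_create_context_locked().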

>   }
>   
>   int i915_gem_context_open(struct drm_i915_private *i915,
> @@ -2199,14 +2203,22 @@ finalize_create_context_locked(struct drm_i915_file_private *file_priv,
>   	if (IS_ERR(ctx))
>   		return ctx;
>   
> +	/*
> +	 * One for the xarray and one for the caller.  We need to grab
> +	 * the reference *prior* to making the ctx visible to userspace
> +	 * in gem_context_register(), as at any point after that
> +	 * userspace can try to race us with another thread destroying
> +	 * the context under our feet.
> +	 */
> +	i915_gem_context_get(ctx);
> +
>   	gem_context_register(ctx, file_priv, id);
>   
>   	old = xa_erase(&file_priv->proto_context_xa, id);
>   	GEM_BUG_ON(old != pc);
>   	proto_context_close(file_priv->dev_priv, pc);
>   
> -	/* One for the xarray and one for the caller */
> -	return i915_gem_context_get(ctx);
> +	return ctx;

Otherwise userspace can look up a context which hasn't had its reference count increased, yes. I can add the Fixes: and stable tags while merging if there are no complaints.

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin at intel.com>

Regards,

Tvrtko

>   }
>   
>   struct i915_gem_context *

