[Intel-gfx] [PATCH 13/27] drm/i915: Stop manually RCU banging in reset_stats_ioctl (v2)
Jason Ekstrand
jason at jlekstrand.net
Mon May 3 15:57:34 UTC 2021
As far as I can tell, the only real reason for this is to avoid taking a
reference to the i915_gem_context. The cost of those two atomics
probably pales in comparison to the cost of the ioctl itself, so we're
really not buying ourselves anything here. We're about to make context
lookup a tiny bit more complicated, so let's get rid of the one hand-
rolled case.
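
For reference, a minimal sketch of the reference-counted lookup the ioctl
switches to (it mirrors i915_gem_context_lookup() in the hunk below; the
"two atomics" are the kref get and the final put, with error handling
trimmed):

	struct i915_gem_context *ctx;

	rcu_read_lock();
	ctx = xa_load(&file_priv->context_xa, id);   /* RCU-safe xarray lookup */
	if (ctx && !kref_get_unless_zero(&ctx->ref)) /* atomic #1: take a reference */
		ctx = NULL;
	rcu_read_unlock();

	if (!ctx)
		return -ENOENT;

	/* ... read ctx->guilty_count / ctx->active_count ... */

	i915_gem_context_put(ctx);                   /* atomic #2: drop the reference */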
Some usermode drivers such as our Vulkan driver call GET_RESET_STATS on
every execbuf so the perf here could theoretically be an issue. If this
ever does become a performance issue for any such userspace drivers,
they can set CONTEXT_PARAM_RECOVERABLE to false and look for -EIO
coming from execbuf to check for hangs instead.
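
As an illustration only (not part of this patch), userspace could opt out
roughly like this; context_set_unrecoverable() is a made-up helper name and
the drm_fd/ctx_id plumbing is assumed to exist in the driver already:

	#include <errno.h>
	#include <stdint.h>
	#include <sys/ioctl.h>
	#include <drm/i915_drm.h>

	static int context_set_unrecoverable(int drm_fd, uint32_t ctx_id)
	{
		struct drm_i915_gem_context_param p = {
			.ctx_id = ctx_id,
			.param = I915_CONTEXT_PARAM_RECOVERABLE,
			.value = 0, /* context is banned after a hang instead of recovering */
		};

		return ioctl(drm_fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p) ? -errno : 0;
	}

	/* Later, execbuf failing with EIO on this context signals the hang:
	 *
	 *	if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &eb) && errno == EIO)
	 *		... tear down and recreate the context ...
	 */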
v2 (Daniel Vetter):
- Add a comment in the commit message about recoverable contexts
Signed-off-by: Jason Ekstrand <jason at jlekstrand.net>
Reviewed-by: Daniel Vetter <daniel.vetter at ffwll.ch>
---
drivers/gpu/drm/i915/gem/i915_gem_context.c | 13 ++++---------
drivers/gpu/drm/i915/i915_drv.h | 8 +-------
2 files changed, 5 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_context.c b/drivers/gpu/drm/i915/gem/i915_gem_context.c
index 8e254d83d65f2..c073d5939482b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_context.c
@@ -2091,16 +2091,13 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
struct drm_i915_private *i915 = to_i915(dev);
struct drm_i915_reset_stats *args = data;
struct i915_gem_context *ctx;
- int ret;
if (args->flags || args->pad)
return -EINVAL;
- ret = -ENOENT;
- rcu_read_lock();
- ctx = __i915_gem_context_lookup_rcu(file->driver_priv, args->ctx_id);
+ ctx = i915_gem_context_lookup(file->driver_priv, args->ctx_id);
if (!ctx)
- goto out;
+ return -ENOENT;
/*
* We opt for unserialised reads here. This may result in tearing
@@ -2117,10 +2114,8 @@ int i915_gem_context_reset_stats_ioctl(struct drm_device *dev,
args->batch_active = atomic_read(&ctx->guilty_count);
args->batch_pending = atomic_read(&ctx->active_count);
- ret = 0;
-out:
- rcu_read_unlock();
- return ret;
+ i915_gem_context_put(ctx);
+ return 0;
}
/* GEM context-engines iterator: for_each_gem_engine() */
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0b44333eb7033..8571c5c1509a7 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1840,19 +1840,13 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
struct dma_buf *i915_gem_prime_export(struct drm_gem_object *gem_obj, int flags);
-static inline struct i915_gem_context *
-__i915_gem_context_lookup_rcu(struct drm_i915_file_private *file_priv, u32 id)
-{
- return xa_load(&file_priv->context_xa, id);
-}
-
static inline struct i915_gem_context *
i915_gem_context_lookup(struct drm_i915_file_private *file_priv, u32 id)
{
struct i915_gem_context *ctx;
rcu_read_lock();
- ctx = __i915_gem_context_lookup_rcu(file_priv, id);
+ ctx = xa_load(&file_priv->context_xa, id);
if (ctx && !kref_get_unless_zero(&ctx->ref))
ctx = NULL;
rcu_read_unlock();
--
2.31.1