[Intel-gfx] [PATCH 04/10] drm/i915: Shrink the request kmem_cache on allocation error
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Tue Jan 16 10:10:28 UTC 2018
On 15/01/2018 21:24, Chris Wilson wrote:
> If we fail to allocate a new request, make sure we recover the pages
> that are in the process of being freed by inserting an RCU barrier.
>
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> ---
> drivers/gpu/drm/i915/i915_gem_request.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
> index 72bdc203716f..e6d4857b1f78 100644
> --- a/drivers/gpu/drm/i915/i915_gem_request.c
> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
> @@ -696,6 +696,9 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
>  	if (ret)
>  		goto err_unreserve;
>  
> +	kmem_cache_shrink(dev_priv->requests);
Hm, is the kmem_cache_shrink() in the idle work handler not enough? Or,
from another angle, won't the kmem_cache_alloc() below already try hard
enough to allocate something regardless?
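
For reference, this is roughly how I read the retry path around this
hunk -- a sketch only, since the hunk shows just a fragment and the
surrounding lines (the first optimistic allocation, the wait-for-idle
call and its flags) are my assumption, not quoted from the patch:

	req = kmem_cache_alloc(dev_priv->requests,
			       GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
	if (unlikely(!req)) {
		/* Assumed context: stall until the GPU is idle so the
		 * retirement path can return requests to the cache.
		 */
		ret = i915_gem_wait_for_idle(dev_priv, I915_WAIT_LOCKED);
		if (ret)
			goto err_unreserve;

		/* The new lines: hand free slabs back to the page
		 * allocator, then wait out an RCU grace period so that
		 * pages queued for freeing actually get there before
		 * we retry.
		 */
		kmem_cache_shrink(dev_priv->requests);
		rcu_barrier();

		req = kmem_cache_alloc(dev_priv->requests, GFP_KERNEL);
		if (!req) {
			ret = -ENOMEM;
			goto err_unreserve;
		}
	}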
> +	rcu_barrier();
Is this one here because the request cache is SLAB_TYPESAFE_BY_RCU? But
doesn't that mean freed requests are immediately available for reuse,
as per:
static void i915_fence_release(struct dma_fence *fence)
{
	struct drm_i915_gem_request *req = to_request(fence);

	/* The request is put onto a RCU freelist (i.e. the address
	 * is immediately reused), mark the fences as being freed now.
	 * Otherwise the debugobjects for the fences are only marked as
	 * freed when the slab cache itself is freed, and so we would get
	 * caught trying to reuse dead objects.
	 */
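
If I read SLAB_TYPESAFE_BY_RCU right, the object itself is reusable
immediately and only empty slab pages wait for a grace period -- so the
barrier recovers pages, not requests. A sketch of what I mean (the
cache creation flags are from memory, so treat them as an assumption):

	/* From i915_gem_load_init(), as I remember it: */
	dev_priv->requests = KMEM_CACHE(drm_i915_gem_request,
					SLAB_HWCACHE_ALIGN |
					SLAB_RECLAIM_ACCOUNT |
					SLAB_TYPESAFE_BY_RCU);

	/* With SLAB_TYPESAFE_BY_RCU, kmem_cache_free() returns the
	 * object to the freelist at once -- the same address can be
	 * handed out again before any grace period has elapsed. Only a
	 * completely empty slab page has its return to the page
	 * allocator deferred behind RCU, which is what the
	 * kmem_cache_shrink() + rcu_barrier() pair above flushes out.
	 */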
Regards,
Tvrtko
> +
>  	req = kmem_cache_alloc(dev_priv->requests, GFP_KERNEL);
>  	if (!req) {
>  		ret = -ENOMEM;
>