[Intel-gfx] [PATCH 12/13] drm/i915: Async GPU relocation processing

Joonas Lahtinen joonas.lahtinen at linux.intel.com
Mon Apr 3 13:54:55 UTC 2017


On ke, 2017-03-29 at 16:56 +0100, Chris Wilson wrote:
> If the user requires patching of their batch or auxiliary buffers, we
> currently make the alterations on the CPU. If they are active on the GPU
> at the time, we wait under the struct_mutex for them to finish executing
> before we rewrite the contents. This happens when shared relocation trees
> are used between different contexts with separate address spaces (the
> buffers then have different addresses in each), as the 3D state must be
> adjusted between execution on each context. However, we don't need
> to use the CPU to do the relocation patching: we can queue commands
> to the GPU to perform it and use fences to serialise the operation with
> current and future activity, so the operation on the GPU appears
> just as atomic as performing it immediately. Performing the relocation
> rewrites on the GPU is not free: in terms of pure throughput, the number
> of relocations/s is about halved - but, more importantly, so is the time
> spent under the struct_mutex.
> 
> v2: Break out the request/batch allocation for clearer error flow.
> 
> Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>

<SNIP>

>  static void reloc_cache_reset(struct reloc_cache *cache)
>  {
>  	void *vaddr;
>  
> +	if (cache->rq)
> +		reloc_gpu_flush(cache);

An odd place to do the flush; I was expecting GEM_BUG_ON(cache->rq);

I've gone through the instruction generation in one spot in the code;
I don't intend to go over it more times.
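For readers following along, the deferred-write pattern under discussion can be sketched as below. This is a hypothetical user-space model, not the actual i915 code: relocation writes are queued into a pending request and only applied to the target buffer when the cache is flushed, which is why reloc_cache_reset() must flush any outstanding request before tearing the cache down. All names mirror the patch only loosely.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_PENDING 64

/* One queued relocation write (illustrative analogue of a GPU
 * store-dword command emitted into the relocation batch). */
struct reloc_entry {
	size_t offset;   /* dword offset into the target buffer */
	uint32_t value;  /* address/value to patch in */
};

/* Simplified stand-in for the patch's reloc_cache: "count != 0"
 * plays the role of "cache->rq is non-NULL" (a request in flight). */
struct reloc_cache {
	struct reloc_entry pending[MAX_PENDING];
	int count;
	uint32_t *target;
};

static void reloc_cache_init(struct reloc_cache *cache, uint32_t *target)
{
	cache->count = 0;
	cache->target = target;
}

/* Queue a relocation instead of writing it immediately. */
static int reloc_gpu_emit(struct reloc_cache *cache,
			  size_t offset, uint32_t value)
{
	if (cache->count >= MAX_PENDING)
		return -1;
	cache->pending[cache->count].offset = offset;
	cache->pending[cache->count].value = value;
	cache->count++;
	return 0;
}

/* Apply all queued writes; analogue of reloc_gpu_flush(). */
static void reloc_gpu_flush(struct reloc_cache *cache)
{
	for (int i = 0; i < cache->count; i++)
		cache->target[cache->pending[i].offset] =
			cache->pending[i].value;
	cache->count = 0;
}

/* Reset mirrors the hunk quoted above: flush any outstanding
 * request before the cache state is discarded. */
static void reloc_cache_reset(struct reloc_cache *cache)
{
	if (cache->count)  /* "if (cache->rq)" in the patch */
		reloc_gpu_flush(cache);
}
```

The point the review raises is visible here: if the flush were forgotten in reset, queued writes would be silently dropped, which is why one might instead expect an assertion that nothing is pending.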

Reviewed-by: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>

Regards, Joonas
-- 
Joonas Lahtinen
Open Source Technology Center
Intel Corporation
