[Intel-gfx] [PATCH v2] drm/i915: Pre-allocation of shmem pages of a GEM object

Akash Goel akash.goel at intel.com
Mon May 5 15:05:00 CEST 2014


On Mon, 2014-05-05 at 09:17 +0100, Chris Wilson wrote:
> On Mon, May 05, 2014 at 09:55:29AM +0530, akash.goel at intel.com wrote:
> > From: Akash Goel <akash.goel at intel.com>
> > 
> > This patch could help to reduce the time 'struct_mutex' is kept
> > locked during either the exec-buffer path or the page-fault handling
> > path, as the backing pages are now requested from the shmem layer
> > without holding 'struct_mutex'.
> > 
> > v2: Fixed the merge issue due to which the 'exec_lock' mutex was not released.
> 
> This would be a good excuse to work on per-object locks and augmenting
> i915_gem_madvise_ioctl() to grab pages. iow, add obj->mutex and use that
> for guarding all obj->pages related members/operations, then add
> I915_MADV_POPULATE which can run without the struct mutex.
> 
> That should provide you with the lockless get_pages and keep execbuffer
> reasonably clean and fast.
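
Just to check that I am reading the suggestion correctly, it would be
roughly along the lines below (only a sketch of my understanding;
obj->mutex and I915_MADV_POPULATE are the names from your mail, the
helper and everything else are my assumptions, not working code):

/*
 * Sketch only: obj->mutex (new) would guard obj->pages and all related
 * members, so a new I915_MADV_POPULATE request coming in through
 * i915_gem_madvise_ioctl() could grab the backing pages without taking
 * dev->struct_mutex at all.
 */
static int i915_gem_object_populate(struct drm_i915_gem_object *obj)
{
	int ret;

	mutex_lock(&obj->mutex);		/* per-object lock, not struct_mutex */
	ret = i915_gem_object_get_pages(obj);	/* shmem allocation, may sleep */
	if (ret == 0)
		i915_gem_object_pin_pages(obj);	/* keep the pages resident */
	mutex_unlock(&obj->mutex);

	return ret;
}

Execbuffer would then mostly find the pages already resident and spend
correspondingly less time under struct_mutex.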

Yes, the per-object lock would be a cleaner approach here.
But it could take some time to implement, so for the time being can we
consider this patch as a stopgap solution?
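
For reference, the core of the stopgap is just to fault in the object's
shmem pages before struct_mutex is taken, roughly as below (a simplified
sketch assuming the usual i915_gem.c context; the helper name is
hypothetical and the error handling is trimmed, the patch does this from
the exec-buffer and page-fault paths):

/*
 * Simplified sketch (hypothetical helper name): touch every shmem page
 * of the object before struct_mutex is taken, so that the later
 * get_pages call under the lock finds them already resident in the
 * page cache.
 */
static int prefault_obj_shmem_pages(struct drm_i915_gem_object *obj)
{
	struct address_space *mapping = file_inode(obj->base.filp)->i_mapping;
	unsigned long i, count = obj->base.size >> PAGE_SHIFT;

	for (i = 0; i < count; i++) {
		struct page *page = shmem_read_mapping_page(mapping, i);

		if (IS_ERR(page))
			return PTR_ERR(page);

		/* Drop our reference; the page stays in the page cache. */
		page_cache_release(page);
	}

	return 0;
}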

> Again, please think about why you are *clflushing* so many pages so
> often. That is a sign of userspace bo cache failure.
> -Chris
> 

Sorry, I am not sure I understood your point here, but we are not doing
any extra clflush; only what is required.
Any newly allocated buffer from shmem is by default marked as being in
the CPU domain, so when it is submitted for rendering on the GPU, all
the pages of the buffer are clflushed.
This is to ensure that any stale data for the buffer in the CPU cache is
flushed out before the GPU starts writing to it; otherwise the data
written by the GPU could subsequently be overwritten by stale dirty
lines evicted from the CPU cache.
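
In other words, the flush is just the standard CPU-to-GPU domain
transition; conceptually something like the following (a simplified
illustration only, not the exact i915 code):

/*
 * Simplified illustration: before the GPU touches an object that is
 * still in the CPU write domain, clflush its backing pages so that
 * dirty CPU cachelines cannot later be evicted on top of data the GPU
 * has written.
 */
static void flush_cpu_write_domain(struct drm_i915_gem_object *obj)
{
	if (!(obj->base.write_domain & I915_GEM_DOMAIN_CPU))
		return;

	drm_clflush_sg(obj->pages);	/* write back + invalidate CPU lines */
	obj->base.write_domain = 0;
}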

Sometimes there is an occasional need to process buffers of a huge size
(~34 MB), and that is where the userspace bo cache can also fail.

Best regards
Akash



