[Intel-gfx] [Mesa-dev] [PATCH] i965: Share the workaround bo between all contexts

Chris Wilson chris at chris-wilson.co.uk
Thu Jan 26 18:05:35 UTC 2017

On Thu, Jan 26, 2017 at 09:39:51AM -0800, Chad Versace wrote:
> On Thu 26 Jan 2017, Chris Wilson wrote:
> > Since the workaround bo is used strictly as a write-only buffer, we need
> > only allocate one per screen and use the same one from all contexts.
> > 
> > (The caveat here is during extension initialisation, where we write into
> > and read back register values from the buffer, but that is performed only
> > once for the first context - and barring synchronisation issues should not
> > be a problem. Safer would be to move that also to the screen.)
> > 
> > v2: Give the workaround bo its own init function and don't piggy back
> > intel_bufmgr_init() since it is not that related.
> > 
> > v3: Drop the reference count of the workaround bo for the context since
> > the context itself is owned by the screen (and so we can rely on the bo
> > existing for the lifetime of the context).
> I like this idea, but I have questions and comments about the details.
> More questions than comments, really.
> Today, with only Mesa changes, could we effectively do the same as
>   drm_intel_gem_bo_disable_implicit_sync(screen->workaround_bo);
> by hacking Mesa to set no read/write domain when emitting relocs for the
> workaround_bo? (I admit I don't fully understand the kernel's domain
> tracking). If that does work, then it would just require a small hack to
> brw_emit_pipe_control_write().

Yes, for anything that is purely scratch, just not setting the write
hazard has the same effect. For something like the seqno page, where we
have multiple engines and do want the contents preserved, not setting the
write hazard meant the page could be lost under memory pressure or across
resume. (As usual there is a detail here: this part of the ABI had to be
relaxed because userspace didn't have such a flag.)
But that doesn't sell many bananas.

Chris Wilson, Intel Open Source Technology Centre
