[Intel-gfx] [PATCH v2 2/2] drm/i915/bxt: work around HW context corruption due to coherency problem

Imre Deak imre.deak at intel.com
Wed Sep 23 08:40:55 PDT 2015


On Wed, 2015-09-23 at 15:17 +0100, Chris Wilson wrote:
> On Wed, Sep 23, 2015 at 04:57:48PM +0300, Imre Deak wrote:
> > On Wed, 2015-09-23 at 14:39 +0100, Chris Wilson wrote:
> > > On Wed, Sep 23, 2015 at 03:35:59PM +0200, Daniel Vetter wrote:
> > > > On Fri, Sep 18, 2015 at 10:02:24AM +0100, Chris Wilson wrote:
> > > > > On Thu, Sep 17, 2015 at 07:17:44PM +0300, Imre Deak wrote:
> > > > > > The execlist context object is mapped with a CPU/GPU coherent mapping
> > > > > > everywhere, but on the BXT A stepping the coherency is not guaranteed
> > > > > > due to a HW issue. To work around this, flush the context object after
> > > > > > pinning it (to flush cache lines left over from the context
> > > > > > initialization/read-back from backing storage) and mark it as uncached
> > > > > > so that later updates during context switching will be coherent.
> > > > > > 
> > > > > > I noticed this problem via a GPU hang where IPEHR pointed to an invalid
> > > > > > opcode value. I couldn't find this value in the ring, but looking at the
> > > > > > contents of the active context object it turned out to be a parameter
> > > > > > dword of a bigger command there. The original command opcode itself was
> > > > > > zeroed out; based on the above I assume this happened due to a CPU
> > > > > > writeback of the corresponding cacheline. When restoring the context the
> > > > > > GPU would jump over the zeroed-out opcode and hang trying to execute the
> > > > > > parameter dword mentioned above.
> > > > > > 
> > > > > > I could easily reproduce this by running igt/gem_render_copy_redux and
> > > > > > gem_tiled_blits/basic in parallel, but I guess it could be triggered by
> > > > > > anything involving frequent switches between two separate contexts. With
> > > > > > this workaround I couldn't reproduce the problem.
> > > > > > 
> > > > > > v2:
> > > > > > - instead of clflushing after updating the tail and PDP values during
> > > > > >   context switching, map the corresponding page as uncached to avoid a
> > > > > >   race between CPU and GPU, both updating the same cacheline at the same
> > > > > >   time (Ville)
> > > > > 
> > > > > No. Changing PAT involves a stop_machine() and is severely detrimental
> > > > > to performance (context creation overhead does impact userspace).
> > > > > Mapping it as uncached doesn't remove the race anyway.
> > > > 
> > > > Yeah it's not pretty, but otoh it's for the A stepping and we'll kill it
> > > > again once bxt is shipping. I think with a big "FIXME: don't ever dare to
> > > > copy this" comment this is acceptable. It's not really the worst "my gpu
> > > > crawls" workaround we've seen for early hw ...
> > > 
> > > Thinking about this, an incoherent TAIL write cannot cause IPEHR !=
> > > *ACTHD. The flush is just papering over the absence of a flush elsewhere
> > > and the root cause remains unfixed.
> > 
> > It's not the TAIL write that's causing IPEHR != *ACTHD. I tried to
> > explain this in the commit message, I guess it wasn't clear. The lack
> > of a flush at the moment means that after the CPU initializes the
> > context, the initialized values will not necessarily be visible to
> > the GPU (due to the coherency problem). These initialized values also
> > include the zeroed-out values returned by shmem/GUP. Eventually this
> > context will get scheduled in/out, resulting in a context save by the
> > GPU. After this context save we still have the contents of the
> > context in the CPU cache, possibly not coherent with the GPU. If one
> > of these cache lines is then evicted (due to some unrelated cache
> > thrashing), the values stored by the GPU context save will get
> > overwritten by the stale CPU cache values. This is exactly what I
> > saw: zeroed-out values in the context where there should've been
> > valid command dwords.
> 
> (Sorry, I read invalid as unexpected, since you said you couldn't find
> the value in the ring, whereas what you meant to say was you couldn't
> find the *address*. A GPU hang pointing to an invalid opcode is nothing
> unusual.)

I was looking for the IPEHR value in the ring and, not finding it there,
started to look at the context, where I did find it. But basically yes,
this means the same as not finding the address of the invalid
instruction.
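
To make the fix concrete, the flush-after-pin half of the workaround
amounts to roughly the following (a sketch only, not the literal patch:
it flushes just the state page using the current LRC_STATE_PN /
i915_gem_object_get_page() helpers, while the patch covers the whole
object, and error handling is omitted):

	#include <drm/drm_cache.h>

	/* Push the CPU-initialized context image out to memory, so the
	 * GPU's first context restore sees it instead of stale data. */
	struct page *page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
	void *vaddr = kmap(page);

	drm_clflush_virt_range(vaddr, PAGE_SIZE);
	kunmap(page);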

> That is exactly the reason why we use set-to-domain on legacy context
> objects. The standard practice for doing coherent writes is either to
> map the page into the GTT and do WC writes, or to do an explicit
> clflush. I do not see why we want to do anything unusual here.

Ok. I was thinking of mapping it through the GTT after you pointed out
the overhead problem. This would mean mapping it in the mappable
aperture, but recently there was a change to map the context at a high
offset in the GGTT (above GUC_WOPCM_TOP). I guess that prevents us from
mapping it through the GTT, unless we create an aliased mapping. I don't
know if creating aliases within the GGTT is allowed.
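
To spell out what the GTT route would look like: if the context object
could still be pinned in the mappable aperture, the lite restore TAIL
update could be a WC write through the aperture, along these lines
(hypothetical for exactly the reason above, i.e. the GUC_WOPCM_TOP
change puts the context outside the mappable range):

	u32 __iomem *reg_state;

	/* Map the LRC state page WC through the GTT aperture and
	 * update TAIL without going through the CPU cache at all. */
	reg_state = io_mapping_map_wc(dev_priv->gtt.mappable,
				      i915_gem_obj_ggtt_offset(ctx_obj) +
				      LRC_STATE_PN * PAGE_SIZE);
	writel(rq->tail, &reg_state[CTX_RING_TAIL + 1]);
	io_mapping_unmap(reg_state);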

Having an explicit flush leaves in place the race Ville noticed: if we
did this flush after writing TAIL during a lite restore context switch,
we would also flush the CPU cacheline containing the HEAD value. At the
same time the GPU may be updating the HEAD value while doing a context
save. Writing TAIL via an uncached mapping gets rid of this race, since
then we don't have the side effect of overwriting the HEAD value.
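
I.e. the v2 approach boils down to this (again only a sketch: x86-only
set_pages_uc(), no error handling, and it assumes the state page is not
in highmem, so that kmap() returns the remapped direct-mapping address):

	struct page *ctx_page =
		i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
	u32 *reg_state;

	/* Once, at pin time: remap the state page as uncached. */
	set_pages_uc(ctx_page, 1);

	/* Later, during a lite restore: the TAIL update goes straight
	 * to memory, leaving behind no dirty cacheline that could
	 * overwrite the HEAD value the GPU saves concurrently. */
	reg_state = kmap(ctx_page);
	reg_state[CTX_RING_TAIL + 1] = rq->tail;
	kunmap(ctx_page);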

--Imre


