[Mesa-dev] [PATCH 0/7] i965: Stop hanging on Haswell

Chris Wilson chris at chris-wilson.co.uk
Thu Jun 15 16:32:32 UTC 2017


Quoting Jason Ekstrand (2017-06-15 16:58:13)
> On Thu, Jun 15, 2017 at 4:15 AM, Chris Wilson <chris at chris-wilson.co.uk> wrote:
> 
>     Quoting Kenneth Graunke (2017-06-14 21:44:45)
>     > If Chris is right, and what we're really seeing is that MI_SET_CONTEXT
>     > needs additional flushing, it probably makes sense to fix the kernel.
>     > If it's really fast clear related, then we should do it in Mesa.
> 
>     If I'm right, it's more of a userspace problem because you have to
>     insert a pipeline stall before STATE_BASE_ADDRESS when switching between
>     blorp/normal and back again, in the same batch. That the MI_SET_CONTEXT
>     may be restoring the dirty GPU state from the previous batch just means
>     that you have to think of batches as being one long continuous batch.
>     -Chris
> 
> 
>  Given that, I doubt your explanation is correct.  Right now, we should be
> correct under the "long continuous batch" assumption and we're hanging.  So I
> think that either MI_SET_CONTEXT doesn't stall hard enough or we're conflicting
> with another process somehow.

What I said was too simplistic, as the MI_SET_CONTEXT would be
introducing side-effects (such as leaving the pipeline active, hmm,
unless it does flush at the end!). What I mean is that if it is
MI_SET_CONTEXT causing the pipeline to be active, you would need to
treat switching operations within the same batch equally: you would
need a pipeline stall after a blorp/hiz op not just to ensure the
data is written, but to ensure that the following STATE_BASE_ADDRESS
doesn't trip up.
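
For illustration only, a minimal sketch of the userspace side, using
the brw_emit_pipe_control_flush() helper the i965 driver already has;
the flag combination is my guess at "drain the pipeline and flush the
caches blorp may have dirtied", not a validated workaround:

   /* Hypothetical: to be called between a blorp/hiz op and the next
    * STATE_BASE_ADDRESS in the same batch.  The CS stall waits for
    * the pipeline to drain; the cache flushes make the blorp writes
    * visible before the base addresses move. */
   static void
   stall_before_state_base_address(struct brw_context *brw)
   {
      brw_emit_pipe_control_flush(brw,
                                  PIPE_CONTROL_RENDER_TARGET_FLUSH |
                                  PIPE_CONTROL_DEPTH_CACHE_FLUSH |
                                  PIPE_CONTROL_CS_STALL);
   }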

Of course, now that I've said it would be a side-effect of
MI_SET_CONTEXT leaving the state of the GPU pipelines different from
what userspace expects, it becomes the kernel's responsibility to add
the flush. Argh!
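
If it did end up in the kernel, the obvious place would be right after
the MI_SET_CONTEXT emission in mi_set_context() in i915_gem_context.c.
Very roughly, and hypothetically (the ring-emission API below matches
recent kernels and shifts between versions):

   /* hypothetical: drain the pipeline after the context restore so
    * that a later STATE_BASE_ADDRESS in the batch doesn't trip over
    * state the restore left in flight */
   cs = intel_ring_begin(req, 2);
   if (IS_ERR(cs))
           return PTR_ERR(cs);
   *cs++ = MI_FLUSH;
   *cs++ = MI_NOOP;
   intel_ring_advance(req, cs);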

I'm open to putting it into the kernel, though I'd rather userspace
handled it. We want to keep the kernel out of the loop as much as
possible.
-Chris

