[Intel-gfx] [PATCH v2 2/2] drm/i915/bxt: work around HW context corruption due to coherency problem

Imre Deak <imre.deak@intel.com>
Thu Sep 17 09:17:44 PDT 2015


The execlist context object is mapped with a CPU/GPU coherent mapping
everywhere, but on the BXT A stepping this coherency is not guaranteed
due to a HW issue. To work around this, flush the context object after
pinning it (to flush cache lines left behind by the context
initialization/read-back from backing storage) and mark it as uncached,
so later updates during context switching will be coherent.
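
To illustrate, here is a minimal sketch of what the workaround amounts
to at pin time. The helper name is hypothetical; drm_clflush_sg(),
i915_gem_object_get_page(), set_pages_uc() and LRC_STATE_PN are the
symbols used in the diff below:

/* Hypothetical helper mirroring the pin-time workaround in the diff
 * below (assumes the usual intel_lrc.c context).
 */
static int bxt_a_pin_workaround(struct drm_i915_gem_object *ctx_obj)
{
	struct page *page;

	/* Write back and invalidate any cachelines dirtied by context
	 * initialization/read-back from backing storage.
	 */
	drm_clflush_sg(ctx_obj->pages);

	/* Map the register state page uncached, so that later CPU stores
	 * (ring tail, PDP entries) bypass the CPU cache.
	 */
	page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
	return set_pages_uc(page, 1);
}

On unpin the page is switched back to write-back caching with
set_pages_wb(), as the second hunk of the diff does.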

I noticed this problem via a GPU hang where IPEHR pointed to an invalid
opcode value. I couldn't find this value on the ring, but looking at the
contents of the active context object it turned out to be a parameter
dword of a bigger command there. The original command opcode itself was
zeroed out; based on the above I assume this was due to a CPU writeback
of the corresponding cacheline. When restoring the context the GPU would
then jump over the zeroed-out opcode and hang trying to execute the
above parameter dword.

I could easily reproduce this by running igt/gem_render_copy_redux and
igt/gem_tiled_blits/basic in parallel, but I guess it could be triggered
by anything involving frequent switches between two separate contexts.
With this workaround in place I could no longer reproduce the problem.

v2:
- instead of clflushing after updating the tail and PDP values during
  context switching, map the corresponding page as uncached to avoid a
  race between the CPU and GPU, both updating the same cacheline at the
  same time (Ville); see the sketch below
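
  For context, a rough, hypothetical sketch of the superseded v1
  approach (CTX_RING_TAIL and reg_state as in intel_lrc.c):

	/* v1-style update (superseded): clflush after each CPU store. */
	static void update_tail_and_flush(u32 *reg_state, u32 tail)
	{
		reg_state[CTX_RING_TAIL + 1] = tail;	/* CPU store */
		/* Racy: this writeback can clobber a concurrent GPU store
		 * landing in the same cacheline.
		 */
		drm_clflush_virt_range(&reg_state[CTX_RING_TAIL + 1],
				       sizeof(u32));
	}

  With the uncached mapping there is no CPU cacheline to write back, so
  this race goes away.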

Signed-off-by: Imre Deak <imre.deak@intel.com>
---
 drivers/gpu/drm/i915/intel_lrc.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 942069f..f6873a0 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1036,6 +1036,16 @@ static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
 	if (ret)
 		return ret;
 
+	if (IS_BROXTON(dev_priv) && INTEL_REVID(dev_priv) < BXT_REVID_B0) {
+		struct page *page;
+
+		drm_clflush_sg(ctx_obj->pages);
+		page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
+		ret = set_pages_uc(page, 1);
+		if (ret)
+			goto unpin_ctx_obj;
+	}
+
 	ret = intel_pin_and_map_ringbuffer_obj(ring->dev, ringbuf);
 	if (ret)
 		goto unpin_ctx_obj;
@@ -1076,12 +1086,21 @@ reset_pin_count:
 void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
 {
 	struct intel_engine_cs *ring = rq->ring;
+	struct drm_i915_private *dev_priv = rq->i915;
 	struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring->id].state;
 	struct intel_ringbuffer *ringbuf = rq->ringbuf;
 
 	if (ctx_obj) {
 		WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
 		if (--rq->ctx->engine[ring->id].pin_count == 0) {
+			if (IS_BROXTON(dev_priv) &&
+			    INTEL_REVID(dev_priv) < BXT_REVID_B0) {
+				struct page *page;
+
+				page = i915_gem_object_get_page(ctx_obj,
+								LRC_STATE_PN);
+				WARN_ON_ONCE(set_pages_wb(page, 1));
+			}
 			intel_unpin_ringbuffer_obj(ringbuf);
 			i915_gem_object_ggtt_unpin(ctx_obj);
 		}
-- 
2.1.4


