[PATCH 1/3] drm/i915/breadcrumbs: Set the seqno coherency flag for new principle waiters
Chris Wilson
chris at chris-wilson.co.uk
Sun Jul 8 18:34:52 UTC 2018
If we insert the current waiter into the principle irq slot, there is a
possibility that we do so after the interrupt is handled and so miss
flagging the need to check the coherency.
The window of opportunity was introduced by commit 3f88325c2e39
("drm/i915: Reduce spinlock hold time during notify_ring() interrupt"),
as the goal there was not to set the bit until required.
Fixes: 3f88325c2e39 ("drm/i915: Reduce spinlock hold time during notify_ring() interrupt")
Testcase: igt/drv_selftest/live_hangcheck # hsw, sporadic
Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala at linux.intel.com>
---
drivers/gpu/drm/i915/intel_breadcrumbs.c | 26 ++++++++++++++++++------
1 file changed, 20 insertions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index 1db6ba7d926e..14e946c2ad00 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -170,7 +170,8 @@ static void irq_enable(struct intel_engine_cs *engine)
*/
GEM_BUG_ON(!intel_irqs_enabled(engine->i915));
- /* Enabling the IRQ may miss the generation of the interrupt, but
+ /*
+ * Enabling the IRQ may miss the generation of the interrupt, but
* we still need to force the barrier before reading the seqno,
* just in case.
*/
@@ -397,7 +398,8 @@ static bool __intel_engine_add_wait(struct intel_engine_cs *engine,
GEM_BUG_ON(!wait->seqno);
- /* Insert the request into the retirement ordered list
+ /*
+ * Insert the request into the retirement ordered list
* of waiters by walking the rbtree. If we are the oldest
* seqno in the tree (the first to be retired), then
* set ourselves as the bottom-half.
@@ -415,7 +417,8 @@ static bool __intel_engine_add_wait(struct intel_engine_cs *engine,
completed = NULL;
seqno = intel_engine_get_seqno(engine);
- /* If the request completed before we managed to grab the spinlock,
+ /*
+ * If the request completed before we managed to grab the spinlock,
* return now before adding ourselves to the rbtree. We let the
* current bottom-half handle any pending wakeups and instead
* try and get out of the way quickly.
@@ -429,7 +432,8 @@ static bool __intel_engine_add_wait(struct intel_engine_cs *engine,
while (*p) {
parent = *p;
if (wait->seqno == to_wait(parent)->seqno) {
- /* We have multiple waiters on the same seqno, select
+ /*
+ * We have multiple waiters on the same seqno, select
* the highest priority task (that with the smallest
* task->prio) to serve as the bottom-half for this
* group.
@@ -457,7 +461,8 @@ static bool __intel_engine_add_wait(struct intel_engine_cs *engine,
if (first) {
spin_lock(&b->irq_lock);
b->irq_wait = wait;
- /* After assigning ourselves as the new bottom-half, we must
+ /*
+ * After assigning ourselves as the new bottom-half, we must
* perform a cursory check to prevent a missed interrupt.
* Either we miss the interrupt whilst programming the hardware,
* or if there was a previous waiter (for a later seqno) they
@@ -467,10 +472,19 @@ static bool __intel_engine_add_wait(struct intel_engine_cs *engine,
*/
armed = __intel_breadcrumbs_enable_irq(b);
spin_unlock(&b->irq_lock);
+
+ /*
+ * On taking over the principle irq waiter slot, we
+ * have to be careful in case we miss the interrupt for
+ * our breadcrumb and so require a full coherency barrier
+ * just in case.
+ */
+ set_bit(ENGINE_IRQ_BREADCRUMB, &engine->irq_posted);
}
if (completed) {
- /* Advance the bottom-half (b->irq_wait) before we wake up
+ /*
+ * Advance the bottom-half (b->irq_wait) before we wake up
* the waiters who may scribble over their intel_wait
* just as the interrupt handler is dereferencing it via
* b->irq_wait.
--
2.18.0
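For readers following along, the race the patch closes can be modelled outside the kernel. This is an illustrative sketch, not the i915 code: the names (`irq_posted`, `have_waiter`, `add_wait`) stand in for ENGINE_IRQ_BREADCRUMB, b->irq_wait and __intel_engine_add_wait, and C11 atomics stand in for the kernel's set_bit. The point it demonstrates: an interrupt that fires before any waiter is registered flags nothing, so a waiter taking over the principal slot afterwards must set the flag itself.

```c
#include <stdatomic.h>

/*
 * Toy model of the race fixed above. The interrupt handler only
 * flags the coherency check when a principal waiter is already
 * registered; a waiter that registers just after the interrupt
 * fired would otherwise never see the flag and could miss its
 * breadcrumb.
 */

static atomic_int irq_posted;   /* stands in for ENGINE_IRQ_BREADCRUMB */
static atomic_int have_waiter;  /* stands in for b->irq_wait != NULL */

static void irq_handler(void)
{
	/* Only an already-registered waiter gets the flag. */
	if (atomic_load(&have_waiter))
		atomic_store(&irq_posted, 1);
}

static void add_wait(void)
{
	atomic_store(&have_waiter, 1);

	/*
	 * The fix: on taking over the principal slot, assume the
	 * interrupt may already have been handled and set the flag
	 * ourselves, forcing the full coherency barrier and seqno
	 * re-read just in case.
	 */
	atomic_store(&irq_posted, 1);
}
```

Without the final store in `add_wait()`, running `irq_handler()` before `add_wait()` leaves `irq_posted` at 0, which is exactly the window opened by 3f88325c2e39.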
More information about the Intel-gfx-trybot
mailing list