[Intel-gfx] [PATCH 13/13] drm/i915: Cache last IRQ seqno to reduce IRQ overhead

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Mon Dec 14 04:52:18 PST 2015


On 14/12/15 11:58, John Harrison wrote:
> On 11/12/2015 14:28, Tvrtko Ursulin wrote:
>> On 11/12/15 13:12, John.C.Harrison at Intel.com wrote:
>>> From: John Harrison <John.C.Harrison at Intel.com>
>>>
>>> The notify function can be called many times without the seqno
>>> changing. Many of the duplicate calls exist to prevent races arising
>>> from the requirement that interrupts not be enabled until requested.
>>> However, once interrupts are enabled the IRQ handler can be called
>>> multiple times without the ring's seqno value changing. This patch
>>> reduces the overhead of these extra calls by caching the last
>>> processed seqno value and exiting early if it has not changed.
>>>
>>> v3: New patch for series.
>>>
>>> For: VIZ-5190
>>> Signed-off-by: John Harrison <John.C.Harrison at Intel.com>
>>> ---
>>>   drivers/gpu/drm/i915/i915_gem.c         | 14 +++++++++++---
>>>   drivers/gpu/drm/i915/intel_ringbuffer.h |  1 +
>>>   2 files changed, 12 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>>> index 279d79f..3c88678 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem.c
>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>>> @@ -2457,6 +2457,8 @@ i915_gem_init_seqno(struct drm_device *dev, u32 seqno)
>>>
>>>           for (j = 0; j < ARRAY_SIZE(ring->semaphore.sync_seqno); j++)
>>>               ring->semaphore.sync_seqno[j] = 0;
>>> +
>>> +        ring->last_irq_seqno = 0;
>>>       }
>>>
>>>       return 0;
>>> @@ -2788,11 +2790,14 @@ void i915_gem_request_notify(struct intel_engine_cs *ring, bool fence_locked)
>>>           return;
>>>       }
>>>
>>> -    if (!fence_locked)
>>> -        spin_lock_irqsave(&ring->fence_lock, flags);
>>> -
>>>       seqno = ring->get_seqno(ring, false);
>>>       trace_i915_gem_request_notify(ring, seqno);
>>> +    if (seqno == ring->last_irq_seqno)
>>> +        return;
>>> +    ring->last_irq_seqno = seqno;
>>
>> Hmmm.. do you want to make the check "seqno <= ring->last_irq_seqno" ?
>>
>> Is there a possibility for some weird timing or caching issue where
>> two callers get in and last_irq_seqno goes backwards? Not sure that it
>> would cause a problem, but the pattern is unusual and hard for me to
>> understand.
> The check is simply to prevent repeat processing of identical seqno
> values. The 'last_' value is never used for anything more complicated.
> If there is a very rare race condition where the repeat processing can
> still happen, it doesn't really matter too much.
>
>> Also check and the assignment would need to be under the spinlock I
>> think.
>
> The whole point is to not grab the spinlock if there is no work to do.
> Hence the seqno read and test must be done first. The assignment could
> potentially be done after the lock but if two different threads have
> made it that far concurrently then it doesn't really matter who does the
> write first. Most likely they are both processing the same seqno and in
> the really rare case of two concurrent threads actually reading two
> different (and both new) seqno values then there is no guarantee about
> which will take the lock first. So you are back in the situation above:
> it doesn't really matter if a later call finds an 'incorrect' last
> value and goes through the processing sequence only to discover there
> is no work to do.

I think it would be good to put that in the comment then. :)

That is, that you don't care about multiple notify passes running if the 
timing is right, or that you don't care if ring->last_irq_seqno does not 
always reflect the last processed seqno. Etc.

Regards,

Tvrtko

