[Intel-gfx] [PATCH 05/17] drm/i915: Support for GuC interrupts

Goel, Akash akash.goel at intel.com
Mon Jul 11 14:20:13 UTC 2016



On 7/11/2016 7:13 PM, Tvrtko Ursulin wrote:
>
> On 11/07/16 14:38, Goel, Akash wrote:
>> On 7/11/2016 6:53 PM, Tvrtko Ursulin wrote:
>>>
>>> On 11/07/16 14:15, Goel, Akash wrote:
>>>> On 7/11/2016 4:00 PM, Tvrtko Ursulin wrote:
>>>>>
>>
>>>>>> +static void gen9_guc_irq_handler(struct drm_i915_private *dev_priv, u32 gt_iir)
>>>>>> +{
>>>>>> +    if (gt_iir & GEN9_GUC_TO_HOST_INT_EVENT) {
>>>>>> +        spin_lock(&dev_priv->irq_lock);
>>>>>> +        if (dev_priv->guc.interrupts_enabled) {
>>>>>> +            /* Sample the log buffer flush related bits & clear them
>>>>>> +             * out right away from the message identity register to
>>>>>> +             * minimize the probability of losing a flush interrupt
>>>>>> +             * when there are back to back flush interrupts.
>>>>>> +             * There can be a new flush interrupt, for a different
>>>>>> +             * log buffer type (like for ISR), whilst the Host is
>>>>>> +             * handling one (for DPC). Since the same bit is used in
>>>>>> +             * the message register for ISR & DPC, it could happen
>>>>>> +             * that GuC sets the bit for the 2nd interrupt but the
>>>>>> +             * Host clears it out while handling the 1st.
>>>>>> +             */
>>>>>> +            u32 msg = I915_READ(SOFT_SCRATCH(15)) &
>>>>>> +                    (GUC2HOST_MSG_CRASH_DUMP_POSTED |
>>>>>> +                     GUC2HOST_MSG_FLUSH_LOG_BUFFER);
>>>>>> +            if (msg) {
>>>>>> +                /* Clear the message bits that are handled */
>>>>>> +                I915_WRITE(SOFT_SCRATCH(15),
>>>>>> +                    I915_READ(SOFT_SCRATCH(15)) & ~msg);
>>>>>> +
>>>>>> +                /* Handle flush interrupt event in bottom half */
>>>>>> +                queue_work(dev_priv->wq, &dev_priv->guc.events_work);
>>>>>
>>>>> Since a later patch changes this to use a thread, and since you have
>>>>> established that the worker, especially the shared one, is too slow,
>>>>> I would really recommend you start with the kthread straight away,
>>>>> rather than have the worker for a while in the same series and then
>>>>> change it to a thread later.
>>>>>
>>>> Actually it would not be accurate to say that the shared worker
>>>> thread is too slow, but having a dedicated kthread definitely helps.
>>>>
>>>> I kept the kthread patch last in the series so that, depending on the
>>>> review comments, it can also be dropped.
>>>
>>> I think it should only be one implementation in the patch series. If we
>>> agreed on a kthread make it so from the start.
>>>
>> Agreed, but right now the kthread patch was added more as an RFC, and I
>> presumed this would not be the final version of the series.
>> Will rework it, as per the review comments, in the next version.
>
> Ack.
>
>>> And describe in the commit message why it was selected etc.
>>>
>>>>>> +            }
>>>>>> +        }
>>>>>> +        spin_unlock(&dev_priv->irq_lock);
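
Regarding the kthread suggestion above: for reference, the dedicated
kthread bottom half I have in mind would look roughly like the sketch
below. Only a sketch, assuming hypothetical 'events_wq' (a
wait_queue_head_t) and 'flush_signal' (an unsigned long) fields in the
guc struct; none of these names are in this patch.

/* needs <linux/kthread.h> and <linux/wait.h> */
static int guc_log_event_thread(void *arg)
{
	struct drm_i915_private *dev_priv = arg;

	while (!kthread_should_stop()) {
		/* Sleep until the ISR signals a log buffer flush event */
		wait_event_interruptible(dev_priv->guc.events_wq,
			test_bit(0, &dev_priv->guc.flush_signal) ||
			kthread_should_stop());

		if (test_and_clear_bit(0, &dev_priv->guc.flush_signal)) {
			/* Handle the log buffer flush, exactly as the
			 * work item handler does today.
			 */
		}
	}

	return 0;
}

/* ISR side, replacing queue_work(): set the flag & wake the thread */
	set_bit(0, &dev_priv->guc.flush_signal);
	wake_up(&dev_priv->guc.events_wq);

/* The thread would be created once, e.g. at GuC log init:
 *   kthread_run(guc_log_event_thread, dev_priv, "i915-guc-log");
 */
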
>>>>>
>>>>> Why does the above need to be done under the irq_lock?
>>>>>
>>>> The irq_lock is used for 'guc.interrupts_enabled'; it is especially
>>>> useful while disabling the interrupt.
>>>
>>> Why? I don't see how it gains you anything and so it seems preferable
>>> not to hold it over mmio accesses.
>>>
>> Yes, it is not needed for the mmio access part, just for the inspection
>> of the 'guc.interrupts_enabled' value. Will reorder the code.
>
> You don't need it just for reading that value; you can just drop it.
>

It is not strictly needed, as it is a mere read. But as per my limited
understanding, without the spinlock (which also provides an implicit
memory barrier) the ISR might miss a reset of the 'interrupts_enabled'
flag done from a thread on another CPU, and so queue new work; the
update would only become visible eventually. The same applies to the
case where the 'interrupts_enabled' flag is set from another CPU.
Isn't it good practice to use locks when accessing shared variables?
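
To illustrate the disabling case, here is a minimal sketch of what I
mean (the function name and the mask call are illustrative, not from
this patch; only irq_lock, guc.interrupts_enabled and guc.events_work
are real):

static void gen9_guc_disable_interrupts(struct drm_i915_private *dev_priv)
{
	spin_lock_irq(&dev_priv->irq_lock);
	dev_priv->guc.interrupts_enabled = false;
	/* also mask GEN9_GUC_TO_HOST_INT_EVENT in hardware here */
	spin_unlock_irq(&dev_priv->irq_lock);

	/* Any ISR that takes irq_lock after this point is guaranteed
	 * to see interrupts_enabled == false, the unlock/lock pair
	 * acting as the barrier, so no new work gets queued and the
	 * pending work can safely be flushed or cancelled.
	 */
	cancel_work_sync(&dev_priv->guc.events_work);
}

If the ISR checked the flag without the lock, the check could race with
the reset above, and a work item could still get queued after
cancel_work_sync() has returned.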

Best regards
Akash


> Regards,
>
> Tvrtko
>
>

