[Intel-gfx] [PATCH v3] drm/i915: handle uncore spinlock when not available

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Wed Oct 25 10:25:04 UTC 2023


On 25/10/2023 11:18, Tvrtko Ursulin wrote:
> 
> On 23/10/2023 11:33, Luca Coelho wrote:
>> The uncore code may not always be available (e.g. when we build the
>> display code with Xe), so we can't always rely on having the uncore's
>> spinlock.
>>
>> To handle this, split the spin_lock/unlock_irqsave/restore() into
>> spin_lock/unlock() followed by a call to local_irq_save/restore() and
>> create wrapper functions for locking and unlocking the uncore's
>> spinlock.  In these functions, we have a condition check and only
>> actually try to lock/unlock the spinlock when I915 is defined, and
>> thus uncore is available.
>>
>> This keeps the ifdefs contained in these new functions and all such
>> logic inside the display code.
>>
>> Signed-off-by: Luca Coelho <luciano.coelho at intel.com>
>> ---
>>
>> In v2:
>>
>>     * Renamed uncore_spin_*() to intel_spin_*()
>>     * Corrected the order: save, lock, unlock, restore
>>
>> In v3:
>>
>>     * Undid the change to pass drm_i915_private instead of the lock
>>       itself, since we would have to include i915_drv.h and that pulls
>>       in a truckload of other includes.
>>
>>   drivers/gpu/drm/i915/display/intel_display.h | 20 ++++++++++++++++++++
>>   drivers/gpu/drm/i915/display/intel_vblank.c  | 19 ++++++++++++-------
>>   2 files changed, 32 insertions(+), 7 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/display/intel_display.h b/drivers/gpu/drm/i915/display/intel_display.h
>> index 0e5dffe8f018..2a33fcc8ce68 100644
>> --- a/drivers/gpu/drm/i915/display/intel_display.h
>> +++ b/drivers/gpu/drm/i915/display/intel_display.h
>> @@ -559,4 +559,24 @@ bool assert_port_valid(struct drm_i915_private *i915, enum port port);
>>   bool intel_scanout_needs_vtd_wa(struct drm_i915_private *i915);
>> +/*
>> + * The uncore version of the spin lock functions is used to decide
>> + * whether we need to lock the uncore lock or not.  This is only
>> + * needed in i915, not in Xe.  Keep the decision-making centralized
>> + * here.
>> + */
>> +static inline void intel_spin_lock(spinlock_t *lock)
>> +{
>> +#ifdef I915
>> +    spin_lock(lock);
>> +#endif
>> +}
>> +
>> +static inline void intel_spin_unlock(spinlock_t *lock)
>> +{
>> +#ifdef I915
>> +    spin_unlock(lock);
>> +#endif
>> +}
>> +
>>   #endif
>> diff --git a/drivers/gpu/drm/i915/display/intel_vblank.c b/drivers/gpu/drm/i915/display/intel_vblank.c
>> index 2cec2abf9746..9b482d648762 100644
>> --- a/drivers/gpu/drm/i915/display/intel_vblank.c
>> +++ b/drivers/gpu/drm/i915/display/intel_vblank.c
>> @@ -306,7 +306,8 @@ static bool i915_get_crtc_scanoutpos(struct drm_crtc *_crtc,
>>        * register reads, potentially with preemption disabled, so the
>>        * following code must not block on uncore.lock.
>>        */
>> -    spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
>> +    local_irq_save(irqflags);
> 
> Does Xe need interrupts off?
> 
>> +    intel_spin_lock(&dev_priv->uncore.lock);
> 
> My 2p/c is that intel_spin_lock does not work as a name when it is
> specifically about one particular lock (the uncore lock). One cannot
> call intel_spin_lock(some->other->lock) etc.
> 
> Perhaps call it i915_uncore_lock_irqsave(i915, flags) so it is clear it 
> is only for i915.

Or, if the implementation will later gain the #else block for Xe, 
perhaps intel_uncore_lock_...?
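
Purely as an illustration of the naming idea (none of these helpers exist 
yet, and this sketch ignores the v3 concern that taking the i915 pointer 
would drag in i915_drv.h), such a wrapper could look roughly like:

/*
 * Hypothetical sketch only, not part of this patch.  Tied explicitly to
 * the uncore lock via the i915 pointer, keeping the irqsave/irqrestore
 * semantics of the original call sites.  The #else branch assumes Xe
 * still wants interrupts off, matching the unconditional
 * local_irq_save() in v3.
 */
static inline void intel_uncore_lock_irqsave(struct drm_i915_private *i915,
					     unsigned long *flags)
{
#ifdef I915
	spin_lock_irqsave(&i915->uncore.lock, *flags);
#else
	local_irq_save(*flags);
#endif
}

static inline void intel_uncore_unlock_irqrestore(struct drm_i915_private *i915,
						  unsigned long *flags)
{
#ifdef I915
	spin_unlock_irqrestore(&i915->uncore.lock, *flags);
#else
	local_irq_restore(*flags);
#endif
}

Call sites would then read something like 
intel_uncore_lock_irqsave(dev_priv, &irqflags); ... 
intel_uncore_unlock_irqrestore(dev_priv, &irqflags); instead of the split 
local_irq_save() plus intel_spin_lock() pair.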

Regards,

Tvrtko

>>       /* preempt_disable_rt() should go right here in PREEMPT_RT patchset. */
>> @@ -374,7 +375,8 @@ static bool i915_get_crtc_scanoutpos(struct drm_crtc *_crtc,
>>       /* preempt_enable_rt() should go right here in PREEMPT_RT patchset. */
>> -    spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
>> +    intel_spin_unlock(&dev_priv->uncore.lock);
>> +    local_irq_restore(irqflags);
>>       /*
>>        * While in vblank, position will be negative
>> @@ -412,9 +414,13 @@ int intel_get_crtc_scanline(struct intel_crtc *crtc)
>>       unsigned long irqflags;
>>       int position;
>> -    spin_lock_irqsave(&dev_priv->uncore.lock, irqflags);
>> +    local_irq_save(irqflags);
>> +    intel_spin_lock(&dev_priv->uncore.lock);
>> +
>>       position = __intel_get_crtc_scanline(crtc);
>> -    spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
>> +
>> +    intel_spin_unlock(&dev_priv->uncore.lock);
>> +    local_irq_restore(irqflags);
>>       return position;
>>   }
>> @@ -537,7 +543,7 @@ void intel_crtc_update_active_timings(const struct intel_crtc_state *crtc_state,
>>        * Need to audit everything to make sure it's safe.
>>        */
>>       spin_lock_irqsave(&i915->drm.vblank_time_lock, irqflags);
>> -    spin_lock(&i915->uncore.lock);
>> +    intel_spin_lock(&i915->uncore.lock);
>>       drm_calc_timestamping_constants(&crtc->base, &adjusted_mode);
>> @@ -546,7 +552,6 @@ void intel_crtc_update_active_timings(const struct intel_crtc_state *crtc_state,
>>       crtc->mode_flags = mode_flags;
>>       crtc->scanline_offset = intel_crtc_scanline_offset(crtc_state);
>> -
>> -    spin_unlock(&i915->uncore.lock);
>> +    intel_spin_unlock(&i915->uncore.lock);
>>       spin_unlock_irqrestore(&i915->drm.vblank_time_lock, irqflags);
>>   }
