[Intel-gfx] [PATCH 02/12] drm/i915: Do not wait atomically for display clocks
Dave Gordon
david.s.gordon at intel.com
Tue Feb 2 14:08:44 UTC 2016
On 02/02/16 12:00, Chris Wilson wrote:
> On Tue, Feb 02, 2016 at 11:06:20AM +0000, Tvrtko Ursulin wrote:
>> From: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>>
>> Looks like this code does not need to wait atomically since it
>> otherwise takes the mutex.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>> Cc: Ville Syrjälä <ville.syrjala at linux.intel.com>
>> Reviewed-by: Ville Syrjälä <ville.syrjala at linux.intel.com>
>> ---
>> drivers/gpu/drm/i915/intel_display.c | 8 ++++----
>> 1 file changed, 4 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
>> index 304fc9637026..a7530cf612d7 100644
>> --- a/drivers/gpu/drm/i915/intel_display.c
>> +++ b/drivers/gpu/drm/i915/intel_display.c
>> @@ -9753,8 +9753,8 @@ static void broadwell_set_cdclk(struct drm_device *dev, int cdclk)
>> val |= LCPLL_CD_SOURCE_FCLK;
>> I915_WRITE(LCPLL_CTL, val);
>>
>> - if (wait_for_atomic_us(I915_READ(LCPLL_CTL) &
>> - LCPLL_CD_SOURCE_FCLK_DONE, 1))
>> + if (wait_for_us(I915_READ(LCPLL_CTL) &
>> + LCPLL_CD_SOURCE_FCLK_DONE, 1))
>
> Thinking about wait_for_seconds and friends from before, does this read
> better as
>
> if (wait_for(I915_READ(LCPLL_CTL) & LCPLL_CD_SOURCE_FCLK_DONE,
> wait_for_microseconds(1))
> ?
> -Chris
No, not really, because it confuses a function (or macro) that tests for
a condition with one that converts between different units of time, yet
they both have names beginning with wait_for...
And people might expect a function called wait_for_microseconds() to
actually wait for some number of microseconds!
There's an ambiguity in English anyway, e.g. "wait for a bus" (an event)
vs. "wait for 10 minutes" (a duration). But there's no need to propagate
natural-language foolishness into code.
What we're really trying to express is
    ({ while (!(condition) && !timed_out)
               delay(interval);
       timed_out; })
but there's still the question of why the granularity of the delay should
be tied to the precision of the timeout. With these patches, we've got a
situation where if the timeout is specified in microseconds, the interval
between rechecks of the condition is 1us, but if the timeout is in
milliseconds, there's a 1ms recheck interval.
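For illustration, here's a rough sketch of what I mean (a hypothetical
_wait_for_poll, not anything in the tree) where the timeout budget and
the recheck interval are independent parameters:

    #include <linux/delay.h>
    #include <linux/errno.h>
    #include <linux/jiffies.h>

    /* Sketch only: total timeout and polling interval are decoupled. */
    #define _wait_for_poll(COND, timeout_us, poll_us) ({                \
            unsigned long timeout__ =                                   \
                    jiffies + usecs_to_jiffies(timeout_us) + 1;         \
            int ret__ = 0;                                              \
            while (!(COND)) {                                           \
                    if (time_after(jiffies, timeout__)) {               \
                            if (!(COND)) /* final recheck, in case we slept past the deadline */ \
                                    ret__ = -ETIMEDOUT;                 \
                            break;                                      \
                    }                                                   \
                    usleep_range((poll_us), (poll_us) * 2);             \
            }                                                           \
            ret__;                                                      \
    })

so the caller in broadwell_set_cdclk() could say, e.g., "up to 10000us
total, rechecking every 50us":

            if (_wait_for_poll(I915_READ(LCPLL_CTL) & LCPLL_CD_SOURCE_FCLK_DONE,
                               10000, 50))
                    DRM_ERROR("Switching to FCLK failed\n");

(The names and numbers above are made up; the point is only that the two
quantities needn't share a unit, let alone a value.)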
.Dave.