[Intel-gfx] [PATCH] drm/i915: compute wait_ioctl timeout correctly
Chris Wilson
chris at chris-wilson.co.uk
Tue Dec 2 08:35:06 PST 2014
On Tue, Dec 02, 2014 at 04:36:22PM +0100, Daniel Vetter wrote:
> We've lost the +1 required for correct timeouts in
>
> commit 5ed0bdf21a85d78e04f89f15ccf227562177cbd9
> Author: Thomas Gleixner <tglx at linutronix.de>
> Date: Wed Jul 16 21:05:06 2014 +0000
>
> drm: i915: Use nsec based interfaces
>
> Use ktime_get_raw_ns() and get rid of the back and forth timespec
> conversions.
>
> Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
> Acked-by: Daniel Vetter <daniel.vetter at ffwll.ch>
> Signed-off-by: John Stultz <john.stultz at linaro.org>
>
> So fix this up by reinstating our handrolled _timeout function. While
> at it, bother with handling MAX_JIFFY_OFFSET.
>
> v2: Convert to usecs first (we don't care about the accuracy anyway)
> to avoid the overflow issues Dave Gordon spotted.
>
> v3: Drop the explicit MAX_JIFFY_OFFSET check; usecs_to_jiffies should
> take care of that already. It might be a bit too enthusiastic about it,
> though.
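
Aside on the v2 note: a hypothetical, self-contained illustration of the
kind of multiply-before-divide overflow a naive ns->jiffies conversion
hits (assuming HZ=1000; not necessarily the exact arithmetic v1 used):

#include <stdint.h>
#include <stdio.h>

#define HZ 1000ULL
#define NSEC_PER_SEC 1000000000ULL

int main(void)
{
	uint64_t n = 365ULL * 24 * 60 * 60 * NSEC_PER_SEC; /* one year in ns */

	/* n * HZ wraps u64: 2^64 / HZ is only ~213 days' worth of ns */
	printf("multiply first: %llu jiffies (wrapped, wrong)\n",
	       (unsigned long long)(n * HZ / NSEC_PER_SEC));
	/* dividing down to usecs first keeps the intermediate in range */
	printf("usecs first:    %llu jiffies\n",
	       (unsigned long long)(n / 1000 * HZ / 1000000ULL));
	return 0;
}
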
>
> Cc: Dave Gordon <david.s.gordon at intel.com>
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82749
> Cc: Thomas Gleixner <tglx at linutronix.de>
> Cc: John Stultz <john.stultz at linaro.org>
> Signed-off-by: Daniel Vetter <daniel.vetter at intel.com>
> ---
> drivers/gpu/drm/i915/i915_drv.h | 8 ++++++++
> drivers/gpu/drm/i915/i915_gem.c | 3 ++-
> 2 files changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 049482f5d9ac..4ea14a8c31f7 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -3097,6 +3097,14 @@ static inline unsigned long msecs_to_jiffies_timeout(const unsigned int m)
>  	return min_t(unsigned long, MAX_JIFFY_OFFSET, j + 1);
> }
>
> +static inline unsigned long nsecs_to_jiffies_timeout(const u64 m)
> +{
> +	u64 usecs = div_u64(m + 999, 1000);
> +	unsigned long j = usecs_to_jiffies(usecs);
> +
> +	return min_t(unsigned long, MAX_JIFFY_OFFSET, j + 1);
> +}
Or more concisely and review-friendly:

static inline unsigned long nsecs_to_jiffies_timeout(const u64 n)
{
	return min_t(u64, MAX_JIFFY_OFFSET, nsecs_to_jiffies64(n) + 1);
}
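
For anyone wondering where the +1 comes from: schedule_timeout() counts
whole ticks, and the tick in flight when the wait is armed has already
partially elapsed, so without the extra jiffy the wait can expire before
the requested interval has fully passed (the same reasoning as the
existing msecs_to_jiffies_timeout() quoted above). A minimal userspace
sketch of the rounding behaviour, assuming HZ=1000 and reimplementing
the helpers purely for illustration (the _sketch names are made up, not
kernel API):

#include <stdint.h>
#include <stdio.h>

#define HZ 1000ULL
#define MAX_JIFFY_OFFSET ((~0UL >> 1) - 1)

/* stand-in for the kernel's usecs_to_jiffies(): round up to whole ticks */
static unsigned long usecs_to_jiffies_sketch(uint64_t usecs)
{
	return (unsigned long)((usecs * HZ + 999999ULL) / 1000000ULL);
}

static unsigned long nsecs_to_jiffies_timeout_sketch(uint64_t n)
{
	uint64_t usecs = (n + 999) / 1000;	/* ns -> us, rounding up */
	unsigned long j = usecs_to_jiffies_sketch(usecs);

	/* the +1 covers the partially elapsed current tick */
	return j + 1 < MAX_JIFFY_OFFSET ? j + 1 : MAX_JIFFY_OFFSET;
}

int main(void)
{
	/* even 1 ns needs 2 jiffies: 1 from rounding up, 1 for the
	 * in-flight tick */
	printf("1 ns   -> %lu jiffies\n", nsecs_to_jiffies_timeout_sketch(1));
	/* 1.5 ms at HZ=1000 rounds up to 2 ticks, plus the safety tick */
	printf("1.5 ms -> %lu jiffies\n",
	       nsecs_to_jiffies_timeout_sketch(1500000));
	return 0;
}

Dropping the +1 would make e.g. a 1-tick wait eligible to return almost
immediately, which is exactly the early-timeout regression the patch is
fixing.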
-Chris
--
Chris Wilson, Intel Open Source Technology Centre