[Intel-gfx] [PATCH 1/2] drm/i915: compute wait_ioctl timeout correctly
Dave Gordon
david.s.gordon at intel.com
Fri Nov 28 14:46:27 CET 2014
On 28/11/14 09:29, Daniel Vetter wrote:
> We've lost the +1 required for correct timeouts in
>
> commit 5ed0bdf21a85d78e04f89f15ccf227562177cbd9
> Author: Thomas Gleixner <tglx at linutronix.de>
> Date: Wed Jul 16 21:05:06 2014 +0000
>
> drm: i915: Use nsec based interfaces
>
> Use ktime_get_raw_ns() and get rid of the back and forth timespec
> conversions.
>
> Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
> Acked-by: Daniel Vetter <daniel.vetter at ffwll.ch>
> Signed-off-by: John Stultz <john.stultz at linaro.org>
>
> So fix this up by reinstating our handrolled _timeout function. While
> at it bother with handling MAX_JIFFIES.
>
> Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82749
> Cc: Thomas Gleixner <tglx at linutronix.de>
> Cc: John Stultz <john.stultz at linaro.org>
> Signed-off-by: Daniel Vetter <daniel.vetter at intel.com>
> ---
> drivers/gpu/drm/i915/i915_drv.h | 10 ++++++++++
> drivers/gpu/drm/i915/i915_gem.c | 3 ++-
> 2 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 02b3cb32c8a6..caae337c0199 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -3030,6 +3030,16 @@ static inline unsigned long msecs_to_jiffies_timeout(const unsigned int m)
> return min_t(unsigned long, MAX_JIFFY_OFFSET, j + 1);
> }
>
> +static inline unsigned long nsecs_to_jiffies_timeout(const u64 m)
> +{
> + unsigned long j = nsecs_to_jiffies(m);
nsecs_to_jiffies() may be (relatively) expensive (mul/div/etc), so I'd
be inclined to move the call to after the test below, so that it's only
made when its result will actually be used. It would be nice if the
test collapsed into a single comparison, since the RHS is a constant
for a given kernel build; but it looks like jiffies_to_usecs() isn't
expanded inline, since it's defined in time.c :-( In which case
swapping the lines around also saves the compiler from having to keep
'j' live across that out-of-line call.
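Something like this (an untested sketch, keeping the original threshold
for the moment -- though see below, the threshold itself is broken):

static inline unsigned long nsecs_to_jiffies_timeout(const u64 m)
{
	/* cheap bounds check first ... */
	if (m > (u64)jiffies_to_usecs(MAX_JIFFY_OFFSET) * 1000)
		return MAX_JIFFY_OFFSET;

	/* ... mul/div-heavy conversion only when the result is needed */
	return min_t(unsigned long, MAX_JIFFY_OFFSET,
		     nsecs_to_jiffies(m) + 1);
}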
> + if (m > (u64)jiffies_to_usecs(MAX_JIFFY_OFFSET) * 1000)
I think there's a problem with this line anyway. In kernel/time/time.c:
// Warning! Result type may be narrower than parameter type - DSG
unsigned int jiffies_to_usecs(const unsigned long j)
{
#if HZ <= USEC_PER_SEC && !(USEC_PER_SEC % HZ)
return (USEC_PER_SEC / HZ) * j;
#elif HZ > USEC_PER_SEC && !(HZ % USEC_PER_SEC)
return (j + (HZ / USEC_PER_SEC) - 1)/(HZ / USEC_PER_SEC);
#else
# if BITS_PER_LONG == 32
return (HZ_TO_USEC_MUL32 * j) >> HZ_TO_USEC_SHR32;
# else
return (j * HZ_TO_USEC_NUM) / HZ_TO_USEC_DEN;
# endif
#endif
}
Also, include/linux/jiffies.h:
#define MAX_JIFFY_OFFSET ((LONG_MAX >> 1)-1)
and include/linux/kernel.h:
#define LONG_MAX ((long)(~0UL>>1))
So, on a 64-bit build we'll have LONG_MAX == 0x7fff_ffff_ffff_ffff and
MAX_JIFFY_OFFSET == 0x3fff_ffff_ffff_fffe. Multiplying that by the
usecs-per-jiffy factor (USEC_PER_SEC / HZ, e.g. 1000 for HZ == 1000)
gives an answer that can't possibly fit in the function's unsigned int
return type!
Even on a 32-bit build (where LONG_MAX == 0x7fff_ffff and
MAX_JIFFY_OFFSET == 0x3fff_fffe), MAX_JIFFY_OFFSET can't be multiplied
by the usecs-per-jiffy factor for any typical value of HZ (50, 60,
1000) without overflowing 32 bits!
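Putting numbers on the 32-bit case with HZ == 1000 (so one jiffy is
1000 usecs):

    0x3fff_fffe jiffies  =     1,073,741,822 jiffies
    x 1000 usec/jiffy    = 1,073,741,822,000 usecs (a bit under 2^40)
    vs. UINT_MAX         =     4,294,967,295 (2^32 - 1)

i.e. the true product is ~250x too big for the unsigned int return
type, so jiffies_to_usecs() silently wraps and the RHS of the
comparison in the patch above is garbage.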
I think the only way to get this right, given the somewhat broken
nature of the kernel's timekeeping function signatures and its lack of
a u64 jiffies-to-nsecs function, is to convert ONE jiffy to (unsigned
int) usecs -- which can't overflow -- then widen to u64 before
converting to nsecs and using that for the rest of the calculation.
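Something along these lines (untested sketch; the local variable name
is mine, and NSEC_PER_USEC is just 1000):

static inline unsigned long nsecs_to_jiffies_timeout(const u64 n)
{
	/* jiffies_to_usecs(1) always fits in unsigned int; widen to
	 * u64 BEFORE scaling up to nsecs so nothing is truncated */
	u64 nsecs_per_jiffy = (u64)jiffies_to_usecs(1) * NSEC_PER_USEC;

	/* divide n rather than multiplying MAX_JIFFY_OFFSET, so the
	 * comparison itself can't overflow u64 either */
	if (n / nsecs_per_jiffy >= MAX_JIFFY_OFFSET)
		return MAX_JIFFY_OFFSET;

	return min_t(unsigned long, MAX_JIFFY_OFFSET,
		     nsecs_to_jiffies(n) + 1);
}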
.Dave.
> + return MAX_JIFFY_OFFSET;
> +
> + return min_t(unsigned long, MAX_JIFFY_OFFSET, j + 1);
> +}