[Intel-gfx] [PATCH] Correct GPU timestamp read

Chris Wilson chris at chris-wilson.co.uk
Thu Sep 25 14:26:26 CEST 2014


On Mon, Sep 22, 2014 at 06:22:53PM +0200, Jacek Danecki wrote:
> The current implementation of reading the GPU timestamp is broken.
> It returns the lower 32 bits shifted up by 32 bits (XXXXXXXX00000000 instead of YYYYYYYYXXXXXXXX).
> The change below adds the ability to read the high part of that register separately.
> 
> Signed-off-by: Jacek Danecki jacek.danecki at intel.com

The problem is that beignet already works around the broken hw read,
whereas mesa does not. If we apply the fix in the kernel, we break the
one user of it in beignet but fix all the existing users in mesa.

The userspace workaround is effectively:

  u64 v = reg_read(TIMESTAMP);
  if (lower_32_bits(v) == 0) {
	  /* broken read: the low dword arrived in the high half */
	  v >>= 32;
	  /* widen before shifting so the high dword is not lost */
	  v |= (u64)reg_read(TIMESTAMP + 4) << 32;
  }
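
For reference, a standalone sketch of that workaround against the
reg-read ioctl might look like the following. It assumes the +4 offset
is whitelisted (which is what the patch above would add); the 0x2358
offset is RING_TIMESTAMP on the render ring, and the helper name is
invented:

  #include <stdint.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <drm/i915_drm.h>

  /* Sketch only: read the render ring TIMESTAMP register through
   * DRM_IOCTL_I915_REG_READ and apply the workaround above.
   * Returns 0 on ioctl failure. */
  static uint64_t read_gpu_timestamp(int fd)
  {
	  struct drm_i915_reg_read reg;
	  uint64_t v;

	  memset(&reg, 0, sizeof(reg));
	  reg.offset = 0x2358; /* RING_TIMESTAMP(RENDER_RING_BASE) */
	  if (ioctl(fd, DRM_IOCTL_I915_REG_READ, &reg))
		  return 0;

	  v = reg.val;
	  if ((uint32_t)v == 0) {
		  /* broken hw read: low dword landed in the high half */
		  v >>= 32;
		  reg.offset = 0x2358 + 4; /* needs the +4 offset whitelisted */
		  if (ioctl(fd, DRM_IOCTL_I915_REG_READ, &reg) == 0)
			  v |= reg.val << 32;
	  }
	  return v;
  }

Note that two separate ioctls cannot sample the counter atomically, so
the high dword can tick between the reads; the in-place workaround has
the same limitation.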

Our ABI says to read 8 bytes from this location. I am not sure it says
anything about what to do if the hardware is broken, or what that value
means. Already that value depends upon generation and architecture, e.g.
on x86-32 the read is done as 2 readl, but on x86-64 as a single readq.
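
To make that split concrete, the 64-bit register read effectively
decomposes as below (an illustration, not the exact i915 accessor;
the helper name is invented):

  #include <linux/io.h>

  /* Illustration only: one atomic 8-byte read on x86-64 versus two
   * 4-byte reads on x86-32, between which the counter can tick. */
  static u64 mmio_read64(void __iomem *addr)
  {
  #ifdef CONFIG_X86_64
	  return readq(addr);
  #else
	  u64 lo = readl(addr);
	  u64 hi = readl(addr + 4);

	  return (hi << 32) | lo;
  #endif
  }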

The question comes down to: fix mesa and break beignet, or do nothing.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre


