[Intel-gfx] [PATCH] Correct GPU timestamp read

Jacek Danecki jacek.danecki at intel.com
Tue Sep 23 19:12:31 CEST 2014


On 09/23/14 10:37, Daniel Vetter wrote:
> On Mon, Sep 22, 2014 at 06:22:53PM +0200, Jacek Danecki wrote:
>> Current implementation of reading GPU timestamp is broken.
>> It returns the lower 32 bits shifted left by 32 bits (XXXXXXXX00000000 instead of YYYYYYYYXXXXXXXX).
>> The change below adds the possibility to read the high part of that register separately.
>>
>> Signed-off-by: Jacek Danecki jacek.danecki at intel.com
> 
> Needs to come with corresponding userspace using this.

Beignet can use this in the function below. It currently uses only 32 bits of the timestamp register because of the kernel bug.

/* IVB and HSW results MUST be shifted on x86_64 systems */
static uint64_t
intel_gpgpu_read_ts_reg_gen7(drm_intel_bufmgr *bufmgr)
{
  uint64_t result = 0;
  drm_intel_reg_read(bufmgr, TIMESTAMP_ADDR, &result);
  /* On x86_64, the low 32 bits of the timestamp counter end up in the high 32 bits
     of the result returned by drm_intel_reg_read, and bits 32-35 are lost; on i386
     the value matches the bspec. This looks like a kernel readq bug, so shift right
     by 32 on x86_64 and keep only the low 32 bits on i386.
  */
#ifdef __i386__
  return result & 0xffffffff;
#else
  return result >> 32;
#endif  /* __i386__  */
}


-- 
jacek


