[Intel-gfx] TIMESTAMP register

Eric Anholt eric at anholt.net
Mon Apr 30 23:11:09 CEST 2012


On Wed, 18 Apr 2012 00:51:42 +0100, Chris Wilson <chris at chris-wilson.co.uk> wrote:
> On Tue, 17 Apr 2012 16:27:45 -0700, Ben Widawsky <ben at bwidawsk.net> wrote:
> > On Tue, 17 Apr 2012 23:04:18 +0200
> > Daniel Vetter <daniel at ffwll.ch> wrote:
> > 
> > > On Tue, Apr 17, 2012 at 08:34:11PM +0000, Lawrynowicz, Jacek wrote:
> > > > ARB_timer_query allows the client to read TIMESTAMP both
> > > > asynchronously and synchronously.  The former can be implemented
> > > > as you said, but the latter requires support from the KMD.  This
> > > > must be a simple MMIO read, as that is the only way to report the
> > > > "current" GPU time.  Implementing the synchronous TIMESTAMP query
> > > > using a pipe control would render the third example from the
> > > > ARB_timer_query spec useless.
> > > 
> > > Ok, I've looked like a doofus again, but now I've read the spec and we
> > > indeed seem to need a synchronous readout of the TIMESTAMP register. I
> > > guess a new register will do, together with some fixed-point integer that
> > > tells userspace how to convert it to nanoseconds.
> > > -Daniel
> > 
> > I've not read the spec, but synchronous and "current" don't mean the
> > exact same thing to me. I assume the spec doesn't allow getting the
> > value in a batch and then just waiting for rendering to complete?
> 
> The spec stipulates that the client is able to query the timestamp
> counter synchronously from within the render stream (à la PIPE_CONTROL)
> and query the current timestamp asynchronously. The spec also explicitly
> allows for those two clocks to be different (though close enough for the
> user to not care). Therefore you need only use the nanosecond monotonic
> clock for the asynchronous query and apply an offset to the GPU timestamp
> when converting that from ticks to nanoseconds. My bet is that
> clock_gettime() is going to beat even ioctl(QUERY_COUNTER), not least
> because TIMESTAMP (being a per-ring register) is going to require
> the forcewake dance.
> -Chris
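
For anyone skimming the thread, the two query paths Jacek and Chris
describe look roughly like the sketch below.  This is an illustration
only: the 0x2358 offset assumes the gen6/gen7 render-ring TIMESTAMP
(ring mmio base 0x2000 + 0x358), and mmio_read32() plus the batch
helper are placeholders rather than a real driver API.

#include <stdint.h>

#define RCS_TIMESTAMP		0x2358			/* render ring TIMESTAMP */
#define MI_STORE_REG_MEM	((0x24 << 23) | 1)	/* 3-dword command */

extern uint32_t mmio_read32(uint32_t reg);	/* placeholder accessor */

/* Asynchronous "current time" query: a plain MMIO read.  This is the
 * part that needs KMD help and, per Chris, the forcewake dance, since
 * TIMESTAMP is a per-ring register. */
static uint64_t gpu_timestamp_now(void)
{
	return mmio_read32(RCS_TIMESTAMP);
}

/* Synchronous in-stream query: have the command streamer store the
 * register into a buffer, so the value is sampled when the GPU
 * reaches this point in the render stream. */
static uint32_t *emit_timestamp_query(uint32_t *cs, uint32_t dst_gtt_offset)
{
	*cs++ = MI_STORE_REG_MEM;
	*cs++ = RCS_TIMESTAMP;		/* source register offset */
	*cs++ = dst_gtt_offset;		/* destination address in the GTT */
	return cs;
}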

I think the spec language you're referring to is this:

   (11) Can the GL implementation use different clocks to implement the
        TIME_ELAPSED and TIMESTAMP queries?

   RESOLVED: Yes, the implementation can use different internal clocks to
   implement TIME_ELAPSED and TIMESTAMP. If different clocks are
   used it is possible there is a slight discrepancy when comparing queries
   made from TIME_ELAPSED and TIMESTAMP; they may have slight
   differences when both are used to measure the same sequence. However, this
   is unlikely to affect real applications since comparing the two queries is
   not expected to be useful.

But that's about TIME_ELAPSED vs TIMESTAMP, not GPU-side TIMESTAMP vs
CPU-side TIMESTAMP.  Having those two be different clocks seems crazy.
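
That said, the conversion Daniel and Chris describe is trivial on the
userspace side.  A minimal sketch, assuming a kernel-provided 16.16
fixed-point ns-per-tick factor and an offset field (both names are
invented here for illustration):

#include <stdint.h>
#include <time.h>

/* Scale GPU timestamp ticks to nanoseconds with a 16.16 fixed-point
 * factor (e.g. an 80 ns tick would be 80 << 16), then add an offset
 * so the result lines up with the CPU's monotonic clock. */
static uint64_t gpu_ticks_to_ns(uint64_t ticks, uint32_t ns_per_tick_16_16,
				int64_t offset_ns)
{
	return ((ticks * ns_per_tick_16_16) >> 16) + offset_ns;
}

/* The asynchronous TIMESTAMP query can then just be the monotonic
 * clock, as Chris suggests, avoiding the MMIO read entirely. */
static uint64_t cpu_timestamp_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}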

As a bonus, if we get a general read-a-register ioctl, we could do a
non-root gpu top.
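
Something like the sketch below, hypothetically; the struct layout,
ioctl number, and whitelist are invented for illustration and not an
interface that exists today:

#include <stdint.h>
#include <sys/ioctl.h>

struct i915_reg_read {
	uint64_t offset;	/* register offset; kernel checks a whitelist */
	uint64_t val;		/* value filled in by the kernel */
};

#define HYPOTHETICAL_I915_REG_READ _IOWR('d', 0x31, struct i915_reg_read)

/* A non-root gpu top would poll whitelisted counters through the DRM
 * fd instead of needing privileged MMIO access. */
static int read_gpu_reg(int drm_fd, uint64_t offset, uint64_t *val)
{
	struct i915_reg_read reg = { .offset = offset };
	int ret = ioctl(drm_fd, HYPOTHETICAL_I915_REG_READ, &reg);

	if (ret == 0)
		*val = reg.val;
	return ret;
}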