[PATCH] glretrace: Use GL_TIMESTAMP (if available) for CPU profiling.

José Fonseca jose.r.fonseca at gmail.com
Sat Dec 8 04:55:40 PST 2012


Ok. I think I understand now what you're trying to do.

But you should be using glGetInteger64v(GL_TIMESTAMP) to get the
current time, as GL_TIMESTAMP queries (glQueryCounter) will give you
not the current time, but the time when the draw call is processed by
the GPU (some time later). And I want to enable visualizing the lag
between CPU and GPU start times in the GUI (though I do not want them
to drift).
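
To illustrate (just a sketch, not the code in the branch; names such as
CallTimes and profileCall are made up), the two ways of sampling
GL_TIMESTAMP with GL 3.3 / ARB_timer_query look roughly like this,
assuming a current context with the entry points already loaded:

// GL headers and entry-point loading as already done by the application.
struct CallTimes {
    GLint64 cpuStartNs;  // when the call was issued (GL server "now")
    GLint64 gpuStartNs;  // when the GPU actually reached the call
};

CallTimes profileCall() {
    CallTimes t = {};

    // Synchronous read: the current time on the GL clock, taken the
    // moment glGetInteger64v() executes.  This is the CPU-side stamp.
    glGetInteger64v(GL_TIMESTAMP, &t.cpuStartNs);

    // Asynchronous query: the timestamp is recorded when the GPU
    // reaches this point in the command stream, typically later.
    GLuint query;
    glGenQueries(1, &query);
    glQueryCounter(query, GL_TIMESTAMP);

    // ... issue the draw call being profiled here ...

    // Fetch the GPU-side stamp (blocks until the result is available).
    glGetQueryObjecti64v(query, GL_QUERY_RESULT, &t.gpuStartNs);
    glDeleteQueries(1, &query);

    // Both stamps are nanoseconds on the same clock, so
    // gpuStartNs - cpuStartNs is the CPU->GPU lag for this call.
    return t;
}

Since both stamps come from the same clock, the lag can be shown in the
GUI without converting between unrelated clocks, and it cannot drift.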

I pushed a commit based on yours that does this to
https://github.com/apitrace/apitrace/tree/timestamp .

I only tested on softpipe and need to test on real HW before committing it.

Let me know if it works for you.

Jose

On Thu, Nov 15, 2012 at 9:38 PM, Carl Worth <cworth at cworth.org> wrote:
> José Fonseca <jose.r.fonseca at gmail.com> writes:
>> Won't the CPU and GPU timings then match 1:1?
>
> Not really. The only place they will match (as the patch is currently
> written) is on the start time of drawing calls. And that's a part of the
> patch where we may want to do something different.
>
>> The CPU start/end times are useful information. If they are always out
>> of sync with the GPU then instead of discarding them ...
>
> I'm not discarding any times. I'm still recording CPU start times at the
> same point as before. The only difference is that I'm using a reliable
> clock to record those times.
>
>>                                                  ... we should update
>> the GUI so that the CPU and GPU time axes can vary independently.
>
> I'm not sure what you mean here, nor how it could be useful. There
> doesn't exist some magic third clock by which you can correlate these
> two unrelated clocks.
>
>> Also, why are GPU counters doomed to always be unreliable?
>
> Just the opposite, in fact. It's the GPU counter that is reliable. With
> current Intel hardware, the GPU counter gives us a nice, monotonic,
> fixed-frequency clock. It's the CPU clock that changes frequency, etc.
>
> My question is why would we ever use independent clocks subject to
> drift? The current code attempts to re-synchronize the clocks once per
> frame. So there's one call per frame where things might actually be
> consistent. But all bets are off after that. And in practice, the drift
> I saw was obvious: GPU calls were showing up as starting before the
> corresponding CPU call had even started. That drift was large enough to
> make me doubt any other CPU-GPU timing difference anywhere in the GUI.
>
> -Carl
>
> --
> carl.d.worth at intel.com
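
FWIW, here is a minimal sketch of the once-per-frame resynchronization
described above, and why drift shows up between resyncs (illustrative
only, not the actual retrace code; all names are made up):

#include <stdint.h>

static int64_t cpuGpuOffsetNs = 0;  // cpuNow - gpuNow at last resync

// Called once per frame: sample both clocks back to back and remember
// the offset between them.
void resyncClocks(int64_t cpuNowNs, int64_t gpuNowNs) {
    cpuGpuOffsetNs = cpuNowNs - gpuNowNs;
}

// Map a GPU timestamp onto the CPU time axis using the last offset.
int64_t gpuToCpuNs(int64_t gpuTimestampNs) {
    // Only exact around the point where the offset was sampled.  If
    // the CPU clock changes frequency (or either clock drifts) during
    // the frame, later calls can map to a point before their own CPU
    // start time, which is the artifact described above.
    return gpuTimestampNs + cpuGpuOffsetNs;
}

Reading both sides off GL_TIMESTAMP, as in the branch above, avoids the
offset (and hence the drift) entirely.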

