[Intel-gfx] [RFC 00/12] Per client engine busyness

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Mon Mar 9 23:30:08 UTC 2020


On 09/03/2020 22:02, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:17)
>> From: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>>
>> Another re-spin of the per-client engine busyness series. Highlights from this
>> version:
>>
>>   * A different way of tracking the runtime of exited/unreachable contexts.
>>     This time round I accumulate those per context/client and engine class,
>>     while active contexts are kept on a list and tallied on sysfs reads. (A
>>     sketch of this scheme follows below this list.)
>>   * I had to do a small tweak in the engine release code since I needed the
>>     GEM context for a bit longer, so that I can accumulate the intel_context
>>     runtime into it as it is being freed, because context complete can be
>>     late.
>>   * The PPHWSP method is back and even comes first in the series this time.
>>     It still can't show the currently running workloads, but the software
>>     tracking method suffers from the CSB processing delay with high-frequency,
>>     very short batches.
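
[For reference, the first bullet's scheme amounts to roughly the sketch
below. Every name in it (sketch_client, sketch_context and friends) is
invented for illustration and is not the code from the series:]

/*
 * Rough illustration of the accumulation scheme: runtimes of exited
 * contexts are folded into a per-client, per-engine-class counter at
 * retire time, while still-active contexts stay on a list which is
 * only walked when the sysfs file is read.
 */
#include <stdint.h>

#define NUM_ENGINE_CLASSES 5

struct sketch_context {
	uint64_t runtime_ns;		/* accumulated busy time */
	unsigned int engine_class;
	struct sketch_context *next;	/* link in client's active list */
};

struct sketch_client {
	/* Runtime of exited/unreachable contexts, per engine class. */
	uint64_t past_runtime_ns[NUM_ENGINE_CLASSES];
	struct sketch_context *active;	/* contexts still alive */
};

/* Context is going away: fold its runtime into the client totals. */
static void sketch_context_exited(struct sketch_client *client,
				  struct sketch_context *ce)
{
	client->past_runtime_ns[ce->engine_class] += ce->runtime_ns;
	/* ce would also be unlinked from client->active here. */
}

/* sysfs read: past total plus a tally of the still-active contexts. */
static uint64_t sketch_client_busy_ns(const struct sketch_client *client,
				      unsigned int engine_class)
{
	uint64_t total = client->past_runtime_ns[engine_class];
	const struct sketch_context *ce;

	for (ce = client->active; ce; ce = ce->next)
		if (ce->engine_class == engine_class)
			total += ce->runtime_ns;

	return total;
}
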
> 
> I bet it's ksoftirqd, but this could be quite problematic for us.
> gem_exec_nop/foo? I wonder if this also ties into how much harder it is
> to saturate the GPU with nops from userspace than it is from the kernel.

At least disappointing, or even problematic yes. I had a cunning plan 
though: to report back max(sw_runtime, pphwsp_runtime). Apart from it 
not being that cunning once the two start to systematically drift, at 
which point it effectively becomes the pphwsp runtime alone. Oh well, I 
don't know at the moment; we might have to live with pphwsp only.
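
For completeness, that plan was only this much (helper names invented
here to stand in for the two counters, not anything from the posted
patches):

#include <stdint.h>

/*
 * Stand-ins for the two tracking methods; both names are made up for
 * this example. The software counter lags with very short batches due
 * to CSB processing delay, while the PPHWSP counter cannot show a
 * context which is currently running.
 */
extern uint64_t sw_runtime_ns(void);
extern uint64_t pphwsp_runtime_ns(void);

/* Report whichever of the two counters has advanced further. */
static inline uint64_t reported_runtime_ns(void)
{
	uint64_t sw = sw_runtime_ns();
	uint64_t hw = pphwsp_runtime_ns();

	return sw > hw ? sw : hw;
}
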

Regards,

Tvrtko

