[Intel-gfx] [RFC 08/12] drm/i915: Expose per-engine client busyness
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Tue Mar 10 20:04:23 UTC 2020
On 10/03/2020 18:32, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2020-03-09 18:31:25)
>> +static ssize_t
>> +show_client_busy(struct device *kdev, struct device_attribute *attr, char *buf)
>> +{
>> +        struct i915_engine_busy_attribute *i915_attr =
>> +                container_of(attr, typeof(*i915_attr), attr);
>> +        unsigned int class = i915_attr->engine_class;
>> +        struct i915_drm_client *client = i915_attr->client;
>> +        u64 total = atomic64_read(&client->past_runtime[class]);
>> +        struct list_head *list = &client->ctx_list;
>> +        struct i915_gem_context *ctx;
>> +
>> +        rcu_read_lock();
>> +        list_for_each_entry_rcu(ctx, list, client_link) {
>> +                total += atomic64_read(&ctx->past_runtime[class]);
>> +                total += pphwsp_busy_add(ctx, class);
>> +        }
>> +        rcu_read_unlock();
>> +
>> +        total *= RUNTIME_INFO(i915_attr->i915)->cs_timestamp_period_ns;
>
> Planning early retirement? In 600 years, they'll have forgotten how to
> email ;)
Shruggety shrug. :) I am guessing you would prefer both internal
representations (sw and pphwsp runtimes) to be consistently in
nanoseconds? I thought, why multiply in various places when doing it
once at readout time is enough.
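
To spell out the readout-time conversion (and what the "600 years" quip
refers to), here is a minimal userspace-style sketch, not part of the
patch: the accumulated value is in CS timestamp ticks and is turned into
nanoseconds once at readout, and a u64 worth of nanoseconds only wraps
after roughly 584 years. The ~83 ns period below is just an example
figure, not taken from the patch.

/*
 * Hypothetical illustration, not part of the patch: the exposed value is
 * ticks * cs_timestamp_period_ns, i.e. nanoseconds, so a u64 only wraps
 * after ~2^64 ns, which is roughly 584 years of accumulated busy time.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        const uint64_t period_ns = 83;          /* example: ~12 MHz CS timestamp clock */
        const uint64_t ticks = 1000000000ULL;   /* accumulated busy ticks */
        uint64_t busy_ns = ticks * period_ns;   /* conversion done once, at readout */
        double years = (double)UINT64_MAX / 1e9 / 3600 / 24 / 365.25;

        printf("busy: %" PRIu64 " ns\n", busy_ns);
        printf("u64 nanoseconds wrap after ~%.0f years\n", years);
        return 0;
}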
And I should mention again that I am not sure at the moment how to meld
the two stats into one more "perfect" output.
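
Whatever melding we end up with, a consumer only ever sees the single
summed nanosecond value and takes deltas against wall clock, along the
lines of the hypothetical sketch below. The sysfs path and layout are my
assumption for illustration only, not something defined by this hunk.

/*
 * Hypothetical consumer sketch: sample the per-class busy attribute
 * twice and derive a utilisation percentage from the nanosecond delta.
 * The path below is an assumption, not taken from the patch.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static uint64_t read_busy_ns(const char *path)
{
        uint64_t val = 0;
        FILE *f = fopen(path, "r");

        if (f) {
                if (fscanf(f, "%" SCNu64, &val) != 1)
                        val = 0;
                fclose(f);
        }
        return val;
}

int main(void)
{
        /* Assumed attribute location; the real layout may differ. */
        const char *path = "/sys/class/drm/card0/clients/0/busy/0";
        struct timespec t0, t1;
        uint64_t b0, b1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        b0 = read_busy_ns(path);
        sleep(1);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        b1 = read_busy_ns(path);

        uint64_t wall_ns = (t1.tv_sec - t0.tv_sec) * 1000000000ULL +
                           (t1.tv_nsec - t0.tv_nsec);

        printf("class 0 busy: %.1f%%\n",
               wall_ns ? 100.0 * (b1 - b0) / wall_ns : 0.0);
        return 0;
}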
Regards,
Tvrtko