[Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats printing
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Wed Sep 20 14:22:49 UTC 2023
On 24/08/2023 12:35, Upadhyay, Tejas wrote:
>> -----Original Message-----
>> From: Intel-gfx <intel-gfx-bounces at lists.freedesktop.org> On Behalf Of Tvrtko
>> Ursulin
>> Sent: Friday, July 7, 2023 6:32 PM
>> To: Intel-gfx at lists.freedesktop.org; dri-devel at lists.freedesktop.org
>> Subject: [Intel-gfx] [PATCH 5/5] drm/i915: Implement fdinfo memory stats
>> printing
>>
>> From: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>>
>> Use the newly added drm_print_memory_stats helper to show memory
>> utilisation of our objects in drm/driver specific fdinfo output.
>>
>> To collect the stats we walk the per memory regions object lists and
>> accumulate object size into the respective drm_memory_stats categories.
>>
>> Objects with multiple possible placements are reported in multiple regions for
>> total and shared sizes, while other categories are counted only for the
>> currently active region.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
>> Cc: Aravind Iddamsetty <aravind.iddamsetty at intel.com>
>> Cc: Rob Clark <robdclark at gmail.com>
>> ---
>> drivers/gpu/drm/i915/i915_drm_client.c | 85 ++++++++++++++++++++++++++
>> 1 file changed, 85 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/i915/i915_drm_client.c
>> b/drivers/gpu/drm/i915/i915_drm_client.c
>> index ffccb6239789..5c77d6987d90 100644
>> --- a/drivers/gpu/drm/i915/i915_drm_client.c
>> +++ b/drivers/gpu/drm/i915/i915_drm_client.c
>> @@ -45,6 +45,89 @@ void __i915_drm_client_free(struct kref *kref)
>>  }
>>
>> #ifdef CONFIG_PROC_FS
>> +static void
>> +obj_meminfo(struct drm_i915_gem_object *obj,
>> +	    struct drm_memory_stats stats[INTEL_REGION_UNKNOWN])
>> +{
>> + struct intel_memory_region *mr;
>> + u64 sz = obj->base.size;
>> + enum intel_region_id id;
>> + unsigned int i;
>> +
>> + /* Attribute size and shared to all possible memory regions. */
>> + for (i = 0; i < obj->mm.n_placements; i++) {
>> + mr = obj->mm.placements[i];
>> + id = mr->id;
>> +
>> + if (obj->base.handle_count > 1)
>> + stats[id].shared += sz;
>> + else
>> + stats[id].private += sz;
>> + }
>> +
>> + /* Attribute other categories to only the current region. */
>> + mr = obj->mm.region;
>> + if (mr)
>> + id = mr->id;
>> + else
>> + id = INTEL_REGION_SMEM;
>> +
>> + if (!obj->mm.n_placements) {
>> + if (obj->base.handle_count > 1)
>> + stats[id].shared += sz;
>> + else
>> + stats[id].private += sz;
>> + }
>> +
>> + if (i915_gem_object_has_pages(obj)) {
>> + stats[id].resident += sz;
>> +
>> + if (!dma_resv_test_signaled(obj->base.resv,
>> + dma_resv_usage_rw(true)))
>
> Shouldn't DMA_RESV_USAGE_BOOKKEEP also be considered active (why only "rw")? An app could be synchronising using sync jobs and have added a dma_fence with DMA_RESV_USAGE_BOOKKEEP during execbuf, while that BO is busy waiting on work!
Hmm, do we have a path which adds DMA_RESV_USAGE_BOOKKEEP usage in execbuf?

Rob, any comments here? Given how I basically lifted the logic from
686b21b5f6ca ("drm: Add fdinfo memory stats"), does it sound plausible
to upgrade the test to cover all fences?
Regards,
Tvrtko
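For reference, a sketch of what that upgrade might look like in obj_meminfo() above, if the consensus is that any unsignalled fence, including DMA_RESV_USAGE_BOOKKEEP ones, should count the object as active (this is only a suggestion, not a tested change):

```
	if (i915_gem_object_has_pages(obj)) {
		stats[id].resident += sz;

		/* BOOKKEEP is the widest usage class, so this tests
		 * against all fences rather than just read/write ones.
		 */
		if (!dma_resv_test_signaled(obj->base.resv,
					    DMA_RESV_USAGE_BOOKKEEP))
			stats[id].active += sz;
		...
	}
```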
>> + stats[id].active += sz;
>> + else if (i915_gem_object_is_shrinkable(obj) &&
>> + obj->mm.madv == I915_MADV_DONTNEED)
>> + stats[id].purgeable += sz;
>> + }
>> +}
>> +
>> +static void show_meminfo(struct drm_printer *p, struct drm_file *file)
>> +{
>> + struct drm_memory_stats stats[INTEL_REGION_UNKNOWN] = {};
>> + struct drm_i915_file_private *fpriv = file->driver_priv;
>> + struct i915_drm_client *client = fpriv->client;
>> + struct drm_i915_private *i915 = fpriv->i915;
>> + struct drm_i915_gem_object *obj;
>> + struct intel_memory_region *mr;
>> + struct list_head *pos;
>> + unsigned int id;
>> +
>> + /* Public objects. */
>> + spin_lock(&file->table_lock);
>> +	idr_for_each_entry(&file->object_idr, obj, id)
>> + obj_meminfo(obj, stats);
>> + spin_unlock(&file->table_lock);
>> +
>> + /* Internal objects. */
>> + rcu_read_lock();
>> + list_for_each_rcu(pos, &client->objects_list) {
>> + obj = i915_gem_object_get_rcu(list_entry(pos, typeof(*obj),
>> + client_link));
>> + if (!obj)
>> + continue;
>> + obj_meminfo(obj, stats);
>> + i915_gem_object_put(obj);
>> + }
>> + rcu_read_unlock();
>> +
>> + for_each_memory_region(mr, i915, id)
>> + drm_print_memory_stats(p,
>> + &stats[id],
>> + DRM_GEM_OBJECT_RESIDENT |
>> + DRM_GEM_OBJECT_PURGEABLE,
>> + mr->name);
>> +}
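For context, with the DRM_GEM_OBJECT_RESIDENT | DRM_GEM_OBJECT_PURGEABLE flags passed above, drm_print_memory_stats should emit the standard per-region keys from Documentation/gpu/drm-usage-stats.rst, so the resulting fdinfo section would look roughly like this (region names come from mr->name, values are per drm_memory_stats):

```
drm-total-<region>:	<uint> <unit>
drm-shared-<region>:	<uint> <unit>
drm-active-<region>:	<uint> <unit>
drm-resident-<region>:	<uint> <unit>
drm-purgeable-<region>:	<uint> <unit>
```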
>> +
>> static const char * const uabi_class_names[] = {
>> [I915_ENGINE_CLASS_RENDER] = "render",
>> [I915_ENGINE_CLASS_COPY] = "copy",
>> @@ -106,6 +189,8 @@ void i915_drm_client_fdinfo(struct drm_printer *p, struct drm_file *file)
>>  	 *
>>  	 ******************************************************************
>>  	 */
>>
>> + show_meminfo(p, file);
>> +
>> if (GRAPHICS_VER(i915) < 8)
>> return;
>>
>> --
>> 2.39.2
>