[Intel-gfx] [PATCH] drm/i915/lmem: debugfs for LMEM details

Chris Wilson chris at chris-wilson.co.uk
Fri Dec 20 13:34:40 UTC 2019


Quoting Ramalingam C (2019-12-20 12:51:16)
> From: Lukasz Fiedorowicz <lukasz.fiedorowicz at intel.com>
> 
> Debugfs i915_gem_object is extended so that the IGTs can detect the
> availability of LMEM and its total size.
> 
> Signed-off-by: Lukasz Fiedorowicz <lukasz.fiedorowicz at intel.com>
> Signed-off-by: Matthew Auld <matthew.auld at intel.com>
> Signed-off-by: Stuart Summers <stuart.summers at intel.com>
> Signed-off-by: Ramalingam C <ramalingam.c at intel.com>
> Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> cc: Chris Wilson <chris at chris-wilson.co.uk>
> ---
>  drivers/gpu/drm/i915/i915_debugfs.c        | 6 +++++-
>  drivers/gpu/drm/i915/intel_memory_region.c | 5 ++++-
>  drivers/gpu/drm/i915/intel_memory_region.h | 3 +++
>  3 files changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> index d28468eaed57..856ded8cd332 100644
> --- a/drivers/gpu/drm/i915/i915_debugfs.c
> +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> @@ -373,7 +373,11 @@ static int i915_gem_object_info(struct seq_file *m, void *data)
>                    atomic_read(&i915->mm.free_count),
>                    i915->mm.shrink_memory);
>  
> -       seq_putc(m, '\n');
> +       if (HAS_LMEM(i915)) {
> +               seq_printf(m, "LMEM total: %llu bytes, available %llu bytes\n",
> +                          (u64)i915->mm.regions[INTEL_REGION_LMEM]->total,
> +                          (u64)i915->mm.regions[INTEL_REGION_LMEM]->avail);

Use %pa for resource_size_t here (it takes the address of the variable,
so the u64 casts can go).

Use READ_ONCE() to indicate to the reader that these are being accessed
outside of mem->mm_lock.
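
Putting both nits together, a minimal sketch of the hunk (assuming the
surrounding i915_gem_object_info() context from the diff above; the
mr/total/avail locals are only illustrative):

	if (HAS_LMEM(i915)) {
		struct intel_memory_region *mr =
			i915->mm.regions[INTEL_REGION_LMEM];
		/* Unlocked, racy snapshot; READ_ONCE() documents that. */
		resource_size_t total = READ_ONCE(mr->total);
		resource_size_t avail = READ_ONCE(mr->avail);

		/* %pa prints a resource_size_t, passed by address. */
		seq_printf(m, "LMEM total: %pa bytes, available %pa bytes\n",
			   &total, &avail);
	}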

> +       }
>  
>         print_context_stats(m, i915);
>  
> diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
> index e24c280e5930..15e539de0a82 100644
> --- a/drivers/gpu/drm/i915/intel_memory_region.c
> +++ b/drivers/gpu/drm/i915/intel_memory_region.c
> @@ -37,7 +37,7 @@ __intel_memory_region_put_pages_buddy(struct intel_memory_region *mem,
>                                       struct list_head *blocks)
>  {
>         mutex_lock(&mem->mm_lock);
> -       intel_memory_region_free_pages(mem, blocks);
> +       mem->avail += intel_memory_region_free_pages(mem, blocks);
>         mutex_unlock(&mem->mm_lock);
>  }
>  
> @@ -106,6 +106,7 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
>                         break;
>         } while (1);
>  
> +       mem->avail -= size;
>         mutex_unlock(&mem->mm_lock);
>         return 0;

These two look nice and symmetrical.
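
For the += on the put path to balance, intel_memory_region_free_pages()
has to report how much it released (that change is not visible in the
quoted hunks); roughly, assuming the i915_buddy helpers this file
already uses:

	static u64
	intel_memory_region_free_pages(struct intel_memory_region *mem,
				       struct list_head *blocks)
	{
		struct i915_buddy_block *block, *on;
		u64 size = 0;

		/* Credit back every block being returned to the buddy mm. */
		list_for_each_entry_safe(block, on, blocks, link) {
			size += i915_buddy_block_size(&mem->mm, block);
			i915_buddy_free(&mem->mm, block);
		}
		INIT_LIST_HEAD(blocks);

		return size;
	}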

>  
> @@ -164,6 +165,8 @@ intel_memory_region_create(struct drm_i915_private *i915,
>         mem->io_start = io_start;
>         mem->min_page_size = min_page_size;
>         mem->ops = ops;
> +       mem->total = size;
> +       mem->avail = mem->total;
>  
>         mutex_init(&mem->objects.lock);
>         INIT_LIST_HEAD(&mem->objects.list);
> diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
> index 238722009677..da56d8ff1b01 100644
> --- a/drivers/gpu/drm/i915/intel_memory_region.h
> +++ b/drivers/gpu/drm/i915/intel_memory_region.h
> @@ -94,6 +94,9 @@ struct intel_memory_region {
>                 struct list_head list;
>                 struct list_head purgeable;
>         } objects;
> +
> +       resource_size_t total;
> +       resource_size_t avail;

Sensible placement? There'll be one less hole if you put these next to
the other resource_size_t members.
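
i.e. a sketch of the suggested grouping (other members elided; io_start
and min_page_size are the existing resource_size_t members in this
struct):

	resource_size_t io_start;
	resource_size_t min_page_size;
	resource_size_t total;
	resource_size_t avail;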

Fix the nits, and
Reviewed-by: Chris Wilson <chris at chris-wilson.co.uk>
-Chris

