[PATCH v5 8/9] drm/i915/gem: Add extra pages in ttm_tt for ccs data
Ramalingam C
ramalingam.c at intel.com
Mon Mar 28 18:57:12 UTC 2022
On 2022-03-24 at 17:28:08 +0100, Thomas Hellström wrote:
>
> On 3/21/22 23:44, Ramalingam C wrote:
> > On Xe-HP and later devices, dedicated compression control state (CCS)
> > stored in local memory is used for each surface, to support the
> > 3D and media compression formats.
> >
> > The memory required for the CCS of the entire local memory is 1/256 of
> > the local memory size. So before the kernel boots, the required memory
> > is reserved for the CCS data and a secure register is programmed with
> > the CCS base address.
> >
> > So when an object is allocated in local memory, there is no need to
> > explicitly allocate space for the CCS data. But when the object is
> > evicted into smem, extra space is needed there to hold the
> > compression-related data along with the object, i.e. obj_size +
> > (obj_size / 256).
> >
> > Hence, when smem pages are allocated for an object that can be placed
> > in lmem, allocate the extra pages required for its CCS data.
> >
> > v2:
> > Used imperative wording [Thomas]
> > v3:
> > Inflate the pages only when the object's placement is lmem-only
> >
> > Signed-off-by: Ramalingam C <ramalingam.c at intel.com>
> > cc: Christian Koenig <christian.koenig at amd.com>
> > cc: Hellstrom Thomas <thomas.hellstrom at intel.com>
> > Reviewed-by: Thomas Hellstrom <thomas.hellstrom at linux.intel.com>
> > Reviewed-by: Nirmoy Das <nirmoy.das at intel.com>
> > ---
> > drivers/gpu/drm/i915/gem/i915_gem_ttm.c | 29 ++++++++++++++++++++++++-
> > 1 file changed, 28 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> > index 3b9f99c765c4..0305a150b9d4 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_ttm.c
> > @@ -20,6 +20,7 @@
> > #include "gem/i915_gem_ttm.h"
> > #include "gem/i915_gem_ttm_move.h"
> > #include "gem/i915_gem_ttm_pm.h"
> > +#include "gt/intel_gpu_commands.h"
> > #define I915_TTM_PRIO_PURGE 0
> > #define I915_TTM_PRIO_NO_PAGES 1
> > @@ -262,12 +263,33 @@ static const struct i915_refct_sgt_ops tt_rsgt_ops = {
> > .release = i915_ttm_tt_release
> > };
> > +static inline bool
> > +i915_gem_object_needs_ccs_pages(struct drm_i915_gem_object *obj)
> > +{
> > + bool lmem_placement = false;
> > + int i;
> > +
> > + for (i = 0; i < obj->mm.n_placements; i++) {
> > + /* Compression is not allowed for the objects with smem placement */
> > + if (obj->mm.placements[i]->type == INTEL_MEMORY_SYSTEM)
> > + return false;
> > + if (!lmem_placement &&
> > + obj->mm.placements[i]->type == INTEL_MEMORY_LOCAL)
> > + lmem_placement = true;
> > + }
> > +
> > + return lmem_placement;
> > +}
> > +
> > static struct ttm_tt *i915_ttm_tt_create(struct ttm_buffer_object *bo,
> > uint32_t page_flags)
> > {
> > + struct drm_i915_private *i915 = container_of(bo->bdev, typeof(*i915),
> > + bdev);
> > struct ttm_resource_manager *man =
> > ttm_manager_type(bo->bdev, bo->resource->mem_type);
> > struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo);
> > + unsigned long ccs_pages = 0;
> > enum ttm_caching caching;
> > struct i915_ttm_tt *i915_tt;
> > int ret;
> > @@ -290,7 +312,12 @@ static struct ttm_tt *i915_ttm_tt_create(struct ttm_buffer_object *bo,
> > i915_tt->is_shmem = true;
> > }
> > - ret = ttm_tt_init(&i915_tt->ttm, bo, page_flags, caching, 0);
> > + if (HAS_FLAT_CCS(i915) && i915_gem_object_needs_ccs_pages(obj))
> > + ccs_pages = DIV_ROUND_UP(DIV_ROUND_UP(bo->base.size,
> > + NUM_BYTES_PER_CCS_BYTE),
> > + PAGE_SIZE);
> > +
> > + ret = ttm_tt_init(&i915_tt->ttm, bo, page_flags, caching, ccs_pages);
> > if (ret)
> > goto err_free;
>
> Since we need to respin could we add (in __i915_ttm_get_pages())
>
> /* Verify that gem never sees inflated system pages. Keep that local to
>  * ttm.
>  */
> GEM_BUG_ON(bo->ttm &&
>            ((obj->base.size >> PAGE_SHIFT) < bo->ttm->num_pages));
Will add this GEM_WARN_ON in the next version.

Ram
>
> /Thomas
>
>
>