[Intel-xe] [PATCH 1/2] drm/xe: Move the memory region struct out xe_tile

Zeng, Oak oak.zeng at intel.com
Tue Aug 8 23:01:56 UTC 2023



Thanks,
Oak

> -----Original Message-----
> From: Ruhl, Michael J <michael.j.ruhl at intel.com>
> Sent: August 8, 2023 4:42 PM
> To: Zeng, Oak <oak.zeng at intel.com>; intel-xe at lists.freedesktop.org
> Cc: --cc=michael.j.ruhl at intel.com; Kershner, David <david.kershner at intel.com>
> Subject: RE: [Intel-xe] [PATCH 1/2] drm/xe: Move the memory region struct out
> xe_tile
> 
> >-----Original Message-----
> >From: Intel-xe <intel-xe-bounces at lists.freedesktop.org> On Behalf Of Oak
> >Zeng
> >Sent: Monday, August 7, 2023 10:44 PM
> >To: intel-xe at lists.freedesktop.org
> >Cc: --cc=michael.j.ruhl at intel.com
> >Subject: [Intel-xe] [PATCH 1/2] drm/xe: Move the memory region struct out
> >xe_tile
> >
> >Make a xe_mem_region structure which will be used in the
> >coming patches. The new structure is used in both xe device
> >level (xe->mem.vram) and xe_tile level (tile->vram).
> >
> >Define xe_mem_region.base to be the DPA (device physical address)
> >of the VRAM and change the code according to this new definition.
> >
> >Also add a pagemap member to this structure. The pagemap will be
> >used to map GPU local memory into the CPU's HPA space as ZONE_DEVICE.
> >This is preparation work for introducing SVM (shared virtual memory).
> >
> >Signed-off-by: Oak Zeng <oak.zeng at intel.com>
> >---
> > drivers/gpu/drm/xe/xe_bo.c           |  2 +-
> > drivers/gpu/drm/xe/xe_device_types.h | 96 ++++++++++++----------------
> > drivers/gpu/drm/xe/xe_migrate.c      |  2 +-
> > drivers/gpu/drm/xe/xe_mmio.c         |  7 +-
> > 4 files changed, 48 insertions(+), 59 deletions(-)
> >
> >diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
> >index 284c86107a5f..a397a7224a99 100644
> >--- a/drivers/gpu/drm/xe/xe_bo.c
> >+++ b/drivers/gpu/drm/xe/xe_bo.c
> >@@ -1457,7 +1457,7 @@ uint64_t vram_region_gpu_offset(struct ttm_resource *res)
> > 	if (res->mem_type == XE_PL_STOLEN)
> > 		return xe_ttm_stolen_gpu_offset(xe);
> >
> >-	return xe->mem.vram.base + tile->mem.vram.base;
> >+	return tile->mem.vram.base;
> > }
> >
> > /**
> >diff --git a/drivers/gpu/drm/xe/xe_device_types.h b/drivers/gpu/drm/xe/xe_device_types.h
> >index f84ecb976f5d..a454a5098121 100644
> >--- a/drivers/gpu/drm/xe/xe_device_types.h
> >+++ b/drivers/gpu/drm/xe/xe_device_types.h
> >@@ -57,6 +57,46 @@ struct xe_ggtt;
> > 		 const struct xe_tile * : (const struct xe_device *)((tile__)->xe),	\
> > 		 struct xe_tile * : (tile__)->xe)
> >
> >+/**
> >+ * struct xe_mem_region - memory region structure
> >+ * This is used to describe a memory region in xe
> >+ * device, such as HBM memory or CXL extension memory.
> >+ */
> >+struct xe_mem_region {
> >+	/** @io_start: IO start address of this VRAM instance */
> >+	resource_size_t io_start;
> >+	/**
> >+	 * @io_size: IO size of this VRAM instance
> >+	 *
> >+	 * This represents how much of this VRAM we can access
> >+	 * via the CPU through the VRAM BAR. This can be smaller
> >+	 * than @usable_size, in which case only part of VRAM is CPU
> >+	 * accessible (typically the first 256M). This
> >+	 * configuration is known as small-bar.
> >+	 */
> >+	resource_size_t io_size;
> >+	/** @base: This memory region's DPA (device physical address) */
> >+	resource_size_t base;
> 
> I think you can eliminate this value... As your comment later indicates, this
> value will be derived differently, and keeping it here doesn't have any benefit.

This is the definition of a memory region; I don't see why a memory region can't have a DPA.

I use the xe_mem_region definition at both the device level and the tile level. To me, it is reasonable for each tile's HBM to have a DPA, so we need to keep the base member for the tile's memory region.

At the device level, I don't see how base is meaningful, because the memory really belongs to a tile, not the device. Maybe we can just leave the device's base unset, or set it to the first tile's DPA.
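
For illustration, here is a minimal sketch of what I have in mind, assuming equally sized tiles and the existing per-tile iterator. The xe_device/xe_tile types and the vram member come from this patch; device_dpa_base, tile_size and the helper name are hypothetical placeholders, not part of the patch:

/*
 * Sketch only (not part of this patch): give each tile's xe_mem_region
 * a DPA and leave the device-level base unset, since the memory really
 * belongs to the tiles. device_dpa_base and tile_size stand in for
 * values the probe code already computes.
 */
static void example_assign_vram_dpa(struct xe_device *xe,
				    resource_size_t device_dpa_base,
				    resource_size_t tile_size)
{
	struct xe_tile *tile;
	u8 id;

	xe->mem.vram.base = 0; /* device-level base left unset for now */

	for_each_tile(tile, xe, id)
		tile->mem.vram.base = device_dpa_base + id * tile_size;
}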

> 
> 
> >+	/**
> >+	 * @usable_size: usable size of VRAM
> >+	 *
> >+	 * Usable size of VRAM excluding reserved portions
> >+	 * (e.g stolen mem)
> >+	 */
> >+	resource_size_t usable_size;
> >+	/**
> >+	 * @actual_physical_size: Actual VRAM size
> >+	 *
> >+	 * Actual VRAM size including reserved portions
> >+	 * (e.g stolen mem)
> >+	 */
> >+	resource_size_t actual_physical_size;
> >+	/** @mapping: pointer to VRAM mappable space */
> >+	void *__iomem mapping;
> >+	/** @pagemap: Used to map device memory as ZONE_DEVICE */
> >+	struct dev_pagemap pagemap;
> >+};
> >+
> > /**
> >  * struct xe_tile - hardware tile structure
> >  *
> >@@ -114,38 +154,7 @@ struct xe_tile {
> > 		 * Although VRAM is associated with a specific tile, it can
> > 		 * still be accessed by all tiles' GTs.
> > 		 */
> >-		struct {
> >-			/** @io_start: IO start address of this VRAM instance */
> >-			resource_size_t io_start;
> >-			/**
> >-			 * @io_size: IO size of this VRAM instance
> >-			 *
> >-			 * This represents how much of this VRAM we can access
> >-			 * via the CPU through the VRAM BAR. This can be smaller
> >-			 * than @size, in which case only part of VRAM is CPU
> >-			 * accessible (typically the first 256M). This
> >-			 * configuration is known as small-bar.
> >-			 */
> >-			resource_size_t io_size;
> >-			/** @base: offset of VRAM starting base */
> >-			resource_size_t base;
> >-			/**
> >-			 * @usable_size: usable size of VRAM
> >-			 *
> >-			 * Usable size of VRAM excluding reserved portions
> >-			 * (e.g stolen mem)
> >-			 */
> >-			resource_size_t usable_size;
> >-			/**
> >-			 * @actual_physical_size: Actual VRAM size
> >-			 *
> >-			 * Actual VRAM size including reserved portions
> >-			 * (e.g stolen mem)
> >-			 */
> >-			resource_size_t actual_physical_size;
> >-			/** @mapping: pointer to VRAM mappable space */
> >-			void *__iomem mapping;
> >-		} vram;
> >+		struct xe_mem_region vram;
> >
> > 		/** @vram_mgr: VRAM TTM manager */
> > 		struct xe_ttm_vram_mgr *vram_mgr;
> >@@ -264,28 +273,7 @@ struct xe_device {
> > 	/** @mem: memory info for device */
> > 	struct {
> > 		/** @vram: VRAM info for device */
> >-		struct {
> >-			/** @io_start: IO start address of VRAM */
> >-			resource_size_t io_start;
> >-			/**
> >-			 * @io_size: IO size of VRAM.
> >-			 *
> >-			 * This represents how much of VRAM the CPU can access
> >-			 * via the VRAM BAR.
> >-			 * On systems that do not support large BAR IO space,
> >-			 * this can be smaller than the actual memory size, in
> >-			 * which case only part of VRAM is CPU accessible
> >-			 * (typically the first 256M).  This configuration is
> >-			 * known as small-bar.
> >-			 */
> >-			resource_size_t io_size;
> >-			/** @size: Total size of VRAM */
> >-			resource_size_t size;
> >-			/** @base: Offset to apply for Device Physical Address control */
> >-			resource_size_t base;
> >-			/** @mapping: pointer to VRAM mappable space */
> >-			void *__iomem mapping;
> >-		} vram;
> >+		struct xe_mem_region vram;
> > 		/** @sys_mgr: system TTM manager */
> > 		struct ttm_resource_manager sys_mgr;
> > 	} mem;
> >diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> >index 18c94022930f..956a96b38346 100644
> >--- a/drivers/gpu/drm/xe/xe_migrate.c
> >+++ b/drivers/gpu/drm/xe/xe_migrate.c
> >@@ -264,7 +264,7 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
> > 		 * Use 1GB pages, it shouldn't matter the physical amount of
> > 		 * vram is less, when we don't access it.
> > 		 */
> >-		for (pos = 0; pos < xe->mem.vram.size; pos += SZ_1G, ofs += 8)
> >+		for (pos = 0; pos < xe->mem.vram.actual_physical_size; pos += SZ_1G, ofs += 8)
> > 			xe_map_wr(xe, &bo->vmap, ofs, u64, pos | flags);
> > 	}
> >
> >diff --git a/drivers/gpu/drm/xe/xe_mmio.c b/drivers/gpu/drm/xe/xe_mmio.c
> >index aa9c573b1243..0d0966691eaa 100644
> >--- a/drivers/gpu/drm/xe/xe_mmio.c
> >+++ b/drivers/gpu/drm/xe/xe_mmio.c
> >@@ -173,6 +173,7 @@ static int xe_determine_lmem_bar_size(struct xe_device *xe)
> > 	if (!xe->mem.vram.io_size)
> > 		return -EIO;
> >
> >+	/* XXX: Is this correct? Device's DPA should be decided by IAF */
> 
> Yeah, let's get rid of this... When the driver code that uses DPA comes along,
> we can add it to the tile->mem.vram.base instead of using this value.

Ok, I will delete the line below.
> 
> > 	xe->mem.vram.base = 0; /* DPA offset */
> >
> > 	/* set up a map to the total memory area. */
> >@@ -281,7 +282,7 @@ int xe_mmio_probe_vram(struct xe_device *xe)
> > 			return -ENODEV;
> > 		}
> >
> >-		tile->mem.vram.base = tile_offset;
> >+		tile->mem.vram.base = xe->mem.vram.base + tile_offset;
> 
> So remove the vram.base from here.
> 
> When the DPA code comes into play, it will get added back with the appropriate
> value.

I think we can still have tile->mem.vram.base. Just set it to tile_offset for now, and add a comment noting that the device's DPA (which comes from the IAF module) still needs to be added.

I will need the tile->mem.vram.base member in the coming SVM patch series.
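
Concretely, something like this in xe_mmio_probe_vram() (sketch only; the device-level DPA is not available yet, so it is left as a TODO):

		/*
		 * Use the tile offset as the tile's DPA for now.
		 * TODO: add the device-level DPA base (provided by the
		 * IAF module) once that code lands.
		 */
		tile->mem.vram.base = tile_offset;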

 Agree?

Oak

> 
> With those changes:
> 
> Reviewed-by: Michael J. Ruhl <michael.j.ruhl at intel.com>
> 
> M
> 
> 
> > 		tile->mem.vram.usable_size = vram_size;
> > 		tile->mem.vram.mapping = xe->mem.vram.mapping + tile_offset;
> >
> >@@ -304,10 +305,10 @@ int xe_mmio_probe_vram(struct xe_device *xe)
> > 		io_size -= min_t(u64, tile_size, io_size);
> > 	}
> >
> >-	xe->mem.vram.size = total_size;
> >+	xe->mem.vram.actual_physical_size = total_size;
> >
> > 	drm_info(&xe->drm, "Total VRAM: %pa, %pa\n", &xe->mem.vram.io_start,
> >-		 &xe->mem.vram.size);
> >+		 &xe->mem.vram.actual_physical_size);
> > 	drm_info(&xe->drm, "Available VRAM: %pa, %pa\n", &xe->mem.vram.io_start,
> > 		 &available_size);
> >
> >--
> >2.26.3


