[PATCH 3/3] drm/xe: Add kernel-doc for CCS mode selection
Lucas De Marchi
lucas.demarchi at intel.com
Tue Jan 16 22:16:14 UTC 2024
On Sat, Dec 16, 2023 at 01:41:18PM -0800, Niranjana Vishwanathapura wrote:
>Move all CCS mode documentation including that of sysfs
>interfaces to a single kernel-doc section.
>
>Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura at intel.com>
>---
> Documentation/gpu/xe/index.rst | 1 +
> Documentation/gpu/xe/xe_ccs_mode.rst | 14 ++++++++++++++
> drivers/gpu/drm/xe/xe_gt_ccs_mode.c | 28 ++++++++++++++++++++++------
> drivers/gpu/drm/xe/xe_gt_types.h | 7 +------
> 4 files changed, 38 insertions(+), 12 deletions(-)
> create mode 100644 Documentation/gpu/xe/xe_ccs_mode.rst
>
>diff --git a/Documentation/gpu/xe/index.rst b/Documentation/gpu/xe/index.rst
>index c224ecaee81e1..1db8d20942b4e 100644
>--- a/Documentation/gpu/xe/index.rst
>+++ b/Documentation/gpu/xe/index.rst
>@@ -14,6 +14,7 @@ DG2, etc is provided to prototype the driver.
> xe_mm
> xe_map
> xe_migrate
>+ xe_ccs_mode
> xe_cs
> xe_pm
> xe_pcode
>diff --git a/Documentation/gpu/xe/xe_ccs_mode.rst b/Documentation/gpu/xe/xe_ccs_mode.rst
>new file mode 100644
>index 0000000000000..9ff07706f9739
>--- /dev/null
>+++ b/Documentation/gpu/xe/xe_ccs_mode.rst
>@@ -0,0 +1,14 @@
>+.. SPDX-License-Identifier: (GPL-2.0+ OR MIT)
>+
>+=========
>+CCS mode
>+=========
>+
>+.. kernel-doc:: drivers/gpu/drm/xe/xe_gt_ccs_mode.c
>+ :doc: CCS mode
>+
>+Internal API
>+============
>+
>+.. kernel-doc:: drivers/gpu/drm/xe/xe_gt_ccs_mode.c
>+ :internal:
>diff --git a/drivers/gpu/drm/xe/xe_gt_ccs_mode.c b/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
>index 173b119a21c00..d338b0859728a 100644
>--- a/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
>+++ b/drivers/gpu/drm/xe/xe_gt_ccs_mode.c
>@@ -12,6 +12,27 @@
> #include "xe_gt_sysfs.h"
> #include "xe_mmio.h"
>
>+/**
>+ * DOC: CCS mode
>+ *
>+ * CCS mode setting allows fixed mapping of available compute slices to
>+ * compute engines. By default only the first available compute engine is
>+ * enabled and all available compute slices are allocated to it.
>+ *
>+ * Below per-tile sysfs interfaces help user to change the CCS mode setting.
>+ *
>+ * 'num_cslices' - This read-only interface returns the number of compute
>+ * slices available.
>+ *
>+ * 'ccs_mode' - Allows user to set the number of compute hardware engines
>+ * to be enabled and to which allocate the available compute slices. This
>+ * user configuration change triggers a gt reset and it is expected that
>+ * there are no open drm clients while doing so. The user configuration
>+ * must allow equal distribution of available compute slices to enabled
must allow? Shouldn't this be something like
"Compute slices are always distributed equally among the enabled compute
engines"? Then mention what happens when that is not possible:
/*
 * Ensure number of engines specified is valid and there is an
 * exact multiple of engines for slices.
 */
num_cslices = hweight32(CCS_MASK(gt));
if (!num_engines || num_engines > num_cslices || num_cslices % num_engines) {
	xe_gt_dbg(gt, "Invalid compute config, %d engines %d slices\n",
		  num_engines, num_cslices);
	return -EINVAL;
}
So... what the user needs to know to arrive at a valid configuration is
that they must first read num_cslices: per the check above, any engine
count that evenly divides the number of compute slices is a valid
ccs_mode value.
Lucas De Marchi
>+ * compute hardware engines. Reading it returns the number of compute
>+ * hardware engines currently enabled.
>+ */
>+
> #define CCS_MODE_CSLICE_WIDTH ilog2(CCS_MODE_CSLICE_ASSIGNMENT + 1)
> #define CCS_MODE_CSLICE(cslice, ccs) \
> ((ccs) << ((cslice) * CCS_MODE_CSLICE_WIDTH))
>@@ -174,12 +195,7 @@ static void xe_gt_ccs_mode_sysfs_fini(struct drm_device *drm, void *arg)
> * xe_gt_ccs_mode_sysfs_init - Initialize CCS mode sysfs interfaces
> * @gt: GT structure
> *
>- * Through a per-gt 'ccs_mode' sysfs interface, the user can enable a fixed
>- * number of compute hardware engines to which the available compute slices
>- * are to be allocated. This user configuration change triggers a gt reset
>- * and it is expected that there are no open drm clients while doing so.
>- * The number of available compute slices is exposed to user through a per-gt
>- * 'num_cslices' sysfs interface.
>+ * Create per-tile CCS mode sysfs interfaces.
> */
> void xe_gt_ccs_mode_sysfs_init(struct xe_gt *gt)
> {
>diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
>index f746846604759..9cee17506f8bc 100644
>--- a/drivers/gpu/drm/xe/xe_gt_types.h
>+++ b/drivers/gpu/drm/xe/xe_gt_types.h
>@@ -185,12 +185,7 @@ struct xe_gt {
> spinlock_t lock;
> } tlb_invalidation;
>
>- /**
>- * @ccs_mode: Number of compute engines enabled.
>- * Allows fixed mapping of available compute slices to compute engines.
>- * By default only the first available compute engine is enabled and all
>- * available compute slices are allocated to it.
>- */
>+ /** @ccs_mode: Number of compute engines enabled */
> u32 ccs_mode;
>
> /** @usm: unified shared memory state */
>--
>2.43.0
>