[PATCH 2/2] drm/mm: Some doc polish

Daniel Vetter daniel.vetter at ffwll.ch
Wed Dec 28 15:25:29 UTC 2016


Added some boilerplate for the structs, documented members where they
are relevant, and sprinkled plenty of markup for hyperlinks all over.
Plus a few small bits of wording polish.

Note that the intro needs some more love after the DRM_MM_INSERT_*
patch from Chris has landed.

Cc: Chris Wilson <chris at chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter at intel.com>
---
 drivers/gpu/drm/drm_mm.c | 39 +++++++++++-----------
 include/drm/drm_mm.h     | 84 +++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 87 insertions(+), 36 deletions(-)

diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
index 1a5b4eba2386..9c5f82ea612f 100644
--- a/drivers/gpu/drm/drm_mm.c
+++ b/drivers/gpu/drm/drm_mm.c
@@ -59,8 +59,8 @@
  *
  * The main data struct is &drm_mm, allocations are tracked in &drm_mm_node.
  * Drivers are free to embed either of them into their own suitable
- * datastructures. drm_mm itself will not do any allocations of its own, so if
- * drivers choose not to embed nodes they need to still allocate them
+ * data structures. drm_mm itself will not do any memory allocations of its
+ * own, so drivers that choose not to embed nodes still need to allocate them
  * themselves.
  *
  * The range allocator also supports reservation of preallocated blocks. This is
@@ -78,7 +78,7 @@
  * steep cliff not a real concern. Removing a node again is O(1).
  *
  * drm_mm supports a few features: Alignment and range restrictions can be
- * supplied. Further more every &drm_mm_node has a color value (which is just an
+ * supplied. Furthermore every &drm_mm_node has a color value (which is just an
  * opaque unsigned long) which in conjunction with a driver callback can be used
  * to implement sophisticated placement restrictions. The i915 DRM driver uses
  * this to implement guard pages between incompatible caching domains in the
@@ -296,11 +296,11 @@ static void drm_mm_insert_helper(struct drm_mm_node *hole_node,
  * @mm: drm_mm allocator to insert @node into
  * @node: drm_mm_node to insert
  *
- * This functions inserts an already set-up drm_mm_node into the allocator,
- * meaning that start, size and color must be set by the caller. This is useful
- * to initialize the allocator with preallocated objects which must be set-up
- * before the range allocator can be set-up, e.g. when taking over a firmware
- * framebuffer.
+ * This function inserts an already set-up &drm_mm_node into the allocator,
+ * meaning that start, size and color must be set by the caller. All other
+ * fields must be cleared to 0. This is useful to initialize the allocator with
+ * preallocated objects which must be set up before the range allocator can be
+ * set up, e.g. when taking over a firmware framebuffer.
  *
  * Returns:
  * 0 on success, -ENOSPC if there's no hole where @node is.
@@ -375,7 +375,7 @@ EXPORT_SYMBOL(drm_mm_reserve_node);
  * @sflags: flags to fine-tune the allocation search
  * @aflags: flags to fine-tune the allocation behavior
  *
- * The preallocated node must be cleared to 0.
+ * The preallocated @node must be cleared to 0.
  *
  * Returns:
  * 0 on success, -ENOSPC if there's no suitable hole.
@@ -549,9 +549,11 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  * The DRM range allocator supports this use-case through the scanning
  * interfaces. First a scan operation needs to be initialized with
  * drm_mm_scan_init() or drm_mm_scan_init_with_range(). The driver adds
- * objects to the roster (probably by walking an LRU list, but this can be
- * freely implemented) (using drm_mm_scan_add_block()) until a suitable hole
- * is found or there are no further evictable objects.
+ * objects to the roster, probably by walking an LRU list, but this can be
+ * freely implemented. Eviction candidates are added using
+ * drm_mm_scan_add_block() until a suitable hole is found or there are no
+ * further evictable objects. Eviction roster metadata is tracked in
+ * &drm_mm_scan.
  *
  * The driver must walk through all objects again in exactly the reverse
  * order to restore the allocator state. Note that while the allocator is used
@@ -559,7 +561,7 @@ EXPORT_SYMBOL(drm_mm_replace_node);
  *
  * Finally the driver evicts all objects selected (drm_mm_scan_remove_block()
  * reported true) in the scan, and any overlapping nodes after color adjustment
- * (drm_mm_scan_evict_color()). Adding and removing an object is O(1), and
+ * (drm_mm_scan_color_evict()). Adding and removing an object is O(1), and
  * since freeing a node is also O(1) the overall complexity is
  * O(scanned_objects). So like the free stack which needs to be walked before a
  * scan operation even begins this is linear in the number of objects. It
@@ -705,14 +707,15 @@ EXPORT_SYMBOL(drm_mm_scan_add_block);
  * @scan: the active drm_mm scanner
  * @node: drm_mm_node to remove
  *
- * Nodes _must_ be removed in exactly the reverse order from the scan list as
- * they have been added (e.g. using list_add as they are added and then
- * list_for_each over that eviction list to remove), otherwise the internal
+ * Nodes **must** be removed in exactly the reverse order from the scan list as
+ * they have been added (e.g. using list_add() as they are added and then
+ * list_for_each() over that eviction list to remove), otherwise the internal
  * state of the memory manager will be corrupted.
  *
  * When the scan list is empty, the selected memory nodes can be freed. An
- * immediately following drm_mm_search_free with !DRM_MM_SEARCH_BEST will then
- * return the just freed block (because its at the top of the free_stack list).
+ * immediately following drm_mm_insert_node_in_range_generic() or one of the
+ * simpler versions of that function with !DRM_MM_SEARCH_BEST will then return
+ * the just freed block (because it's at the top of the free_stack list).
  *
  * Returns:
  * True if this block should be evicted, false otherwise. Will always
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 92ec5759caae..8943d75ab8cc 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -69,16 +69,29 @@ enum drm_mm_allocator_flags {
 #define DRM_MM_BOTTOMUP DRM_MM_SEARCH_DEFAULT, DRM_MM_CREATE_DEFAULT
 #define DRM_MM_TOPDOWN DRM_MM_SEARCH_BELOW, DRM_MM_CREATE_TOP
 
+/**
+ * struct drm_mm_node - allocated block in the DRM allocator
+ *
+ * This represents an allocated block in a &drm_mm allocator. Except for
+ * pre-reserved nodes inserted using drm_mm_reserve_node(), the structure is
+ * entirely opaque and should only be accessed through the provided functions.
+ * Since allocation of these nodes is entirely handled by the driver, they
+ * can be embedded.
+ */
 struct drm_mm_node {
+	/** @color: Opaque driver-private tag. */
+	unsigned long color;
+	/** @start: Start address of the allocated block. */
+	u64 start;
+	/** @size: Size of the allocated block. */
+	u64 size;
+	/* private: */
 	struct list_head node_list;
 	struct list_head hole_stack;
 	struct rb_node rb;
 	unsigned hole_follows : 1;
 	unsigned allocated : 1;
 	bool scanned_block : 1;
-	unsigned long color;
-	u64 start;
-	u64 size;
 	u64 __subtree_last;
 	struct drm_mm *mm;
 #ifdef CONFIG_DRM_DEBUG_MM
@@ -86,7 +99,29 @@ struct drm_mm_node {
 #endif
 };
 
+/**
+ * struct drm_mm - DRM allocator
+ *
+ * DRM range allocator with a few special functions and features geared towards
+ * managing GPU memory. Except for the @color_adjust callback the structure is
+ * entirely opaque and should only be accessed through the provided functions
+ * and macros. This structure can be embedded into larger driver structures.
+ */
 struct drm_mm {
+	/**
+	 * @color_adjust:
+	 *
+	 * Optional driver callback to further apply restrictions on a hole. The
+	 * node argument points at the node containing the hole from which the
+	 * block would be allocated (see drm_mm_hole_follows() and friends). The
+	 * color argument is the tag of the block to be allocated. The driver
+	 * can adjust the start and end as needed to e.g. insert guard pages.
+	 */
+	void (*color_adjust)(const struct drm_mm_node *node,
+			     unsigned long color,
+			     u64 *start, u64 *end);
+
+	/* private: */
 	/* List of all memory nodes that immediately precede a free hole. */
 	struct list_head hole_stack;
 	/* head_node.node_list is the list of all memory nodes, ordered
@@ -95,14 +130,20 @@ struct drm_mm {
 	/* Keep an interval_tree for fast lookup of drm_mm_nodes by address. */
 	struct rb_root interval_tree;
 
-	void (*color_adjust)(const struct drm_mm_node *node,
-			     unsigned long color,
-			     u64 *start, u64 *end);
-
 	unsigned long scan_active;
 };
 
+/**
+ * struct drm_mm_scan - DRM allocator eviction roster data
+ *
+ * This structure tracks data needed for the eviction roster set up using
+ * drm_mm_scan_init(), and used with drm_mm_scan_add_block() and
+ * drm_mm_scan_remove_block(). The structure is entirely opaque and should only
+ * be accessed through the provided functions and macros. It is meant to be
+ * allocated temporarily by the driver on the stack.
+ */
 struct drm_mm_scan {
+	/* private: */
 	struct drm_mm *mm;
 
 	u64 size;
@@ -161,7 +202,8 @@ static inline bool drm_mm_initialized(const struct drm_mm *mm)
  *
  * Holes are embedded into the drm_mm using the tail of a drm_mm_node.
  * If you wish to know whether a hole follows this particular node,
- * query this function.
+ * query this function. See also drm_mm_hole_node_start() and
+ * drm_mm_hole_node_end().
  *
  * Returns:
  * True if a hole follows the @node.
@@ -230,23 +272,23 @@ static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node)
 
 /**
  * drm_mm_for_each_node - iterator to walk over all allocated nodes
- * @entry: drm_mm_node structure to assign to in each iteration step
- * @mm: drm_mm allocator to walk
+ * @entry: &drm_mm_node structure to assign to in each iteration step
+ * @mm: &drm_mm allocator to walk
  *
  * This iterator walks over all nodes in the range allocator. It is implemented
- * with list_for_each, so not save against removal of elements.
+ * with list_for_each(), so not safe against removal of elements.
  */
 #define drm_mm_for_each_node(entry, mm) \
 	list_for_each_entry(entry, drm_mm_nodes(mm), node_list)
 
 /**
  * drm_mm_for_each_node_safe - iterator to walk over all allocated nodes
- * @entry: drm_mm_node structure to assign to in each iteration step
- * @next: drm_mm_node structure to store the next step
- * @mm: drm_mm allocator to walk
+ * @entry: &drm_mm_node structure to assign to in each iteration step
+ * @next: &drm_mm_node structure to store the next step
+ * @mm: &drm_mm allocator to walk
  *
  * This iterator walks over all nodes in the range allocator. It is implemented
- * with list_for_each_safe, so save against removal of elements.
+ * with list_for_each_safe(), so safe against removal of elements.
  */
 #define drm_mm_for_each_node_safe(entry, next, mm) \
 	list_for_each_entry_safe(entry, next, drm_mm_nodes(mm), node_list)
@@ -261,13 +303,13 @@ static inline u64 drm_mm_hole_node_end(const struct drm_mm_node *hole_node)
 
 /**
  * drm_mm_for_each_hole - iterator to walk over all holes
- * @entry: drm_mm_node used internally to track progress
- * @mm: drm_mm allocator to walk
+ * @entry: &drm_mm_node used internally to track progress
+ * @mm: &drm_mm allocator to walk
  * @hole_start: ulong variable to assign the hole start to on each iteration
  * @hole_end: ulong variable to assign the hole end to on each iteration
  *
  * This iterator walks over all holes in the range allocator. It is implemented
- * with list_for_each, so not save against removal of elements. @entry is used
+ * with list_for_each(), so not safe against removal of elements. @entry is used
  * internally and will not reflect a real drm_mm_node for the very first hole.
  * Hence users of this iterator may not access it.
  *
@@ -336,6 +378,9 @@ static inline int drm_mm_insert_node_in_range(struct drm_mm *mm,
  * @sflags: flags to fine-tune the allocation search
  * @aflags: flags to fine-tune the allocation behavior
  *
+ * This is a simplified version of drm_mm_insert_node_in_range_generic() with no
+ * range restrictions applied.
+ *
  * The preallocated node must be cleared to 0.
  *
  * Returns:
@@ -436,6 +481,9 @@ void drm_mm_scan_init_with_range(struct drm_mm_scan *scan,
  * @color: opaque tag value to use for the allocation
  * @flags: flags to specify how the allocation will be performed afterwards
  *
+ * This is a simplified version of drm_mm_scan_init_with_range() with no range
+ * restrictions applied.
+ *
  * This simply sets up the scanning routines with the parameters for the desired
  * hole.
  *
-- 
2.7.4
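
For illustration, the embedding scheme the overview talks about looks roughly
like this on the driver side. Everything here (my_gtt, my_buffer and the
helper names) is made up for the example and the insert helper's signature is
quoted from memory, so double-check against drm_mm.h; not even compile-tested:

#include <linux/mm.h>		/* PAGE_SIZE */
#include <linux/string.h>	/* memset */
#include <drm/drm_mm.h>

/* Driver-private address space, embedding the allocator directly. */
struct my_gtt {
	struct drm_mm mm;
};

/*
 * Buffer object embedding its allocator node. drm_mm never allocates
 * memory of its own, so the node simply lives and dies with the object.
 */
struct my_buffer {
	struct drm_mm_node node;
};

static void my_gtt_init(struct my_gtt *gtt, u64 start, u64 size)
{
	drm_mm_init(&gtt->mm, start, size);
}

static int my_buffer_bind(struct my_gtt *gtt, struct my_buffer *buf, u64 size)
{
	/* the preallocated node must be cleared to 0 before insertion */
	memset(&buf->node, 0, sizeof(buf->node));

	return drm_mm_insert_node_generic(&gtt->mm, &buf->node,
					  size, PAGE_SIZE,	/* size, alignment */
					  0,			/* color */
					  DRM_MM_SEARCH_DEFAULT,
					  DRM_MM_CREATE_DEFAULT);
}

static void my_buffer_unbind(struct my_buffer *buf)
{
	drm_mm_remove_node(&buf->node);	/* O(1), no memory is freed */
}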
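
The drm_mm_reserve_node() comment mentions taking over a firmware
framebuffer; a minimal sketch of that case, with fw_base/fw_size standing in
for whatever the firmware handed over, could look like:

#include <linux/string.h>
#include <drm/drm_mm.h>

/*
 * Carve the firmware-initialized scanout range out of the address space
 * before the allocator can hand it to someone else. Fails with -ENOSPC if
 * the range already overlaps an allocated node.
 */
static int my_reserve_firmware_fb(struct drm_mm *mm, struct drm_mm_node *node,
				  u64 fw_base, u64 fw_size)
{
	memset(node, 0, sizeof(*node));	/* all other fields must be 0 */
	node->start = fw_base;
	node->size = fw_size;

	return drm_mm_reserve_node(mm, node);
}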
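
The @color_adjust kerneldoc talks about guard pages; a hypothetical callback
in that spirit might look like the sketch below. It only adjusts the start of
the hole; a real implementation (like i915's) would also trim the end against
the node following the hole:

#include <linux/mm.h>
#include <drm/drm_mm.h>

/*
 * Keep a one-page guard in front of the new block whenever the node
 * preceding the hole carries a different color (tag).
 */
static void my_color_adjust(const struct drm_mm_node *node,
			    unsigned long color,
			    u64 *start, u64 *end)
{
	if (node->color != color)
		*start += PAGE_SIZE;
}

static void my_gtt_enable_guard_pages(struct drm_mm *mm)
{
	mm->color_adjust = my_color_adjust;
}

The tag itself is whatever the driver passes as the color argument of the
insert functions when placing its nodes.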
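
The eviction roster flow (scan init, add blocks, remove all blocks again in
reverse order, evict, then drm_mm_scan_color_evict()) is probably easiest to
see as code. This is a loose, untested adaptation of the usual driver
pattern; my_obj and its list links are invented for the example and the exact
drm_mm_scan_init() argument order here is an assumption, so check the header:

#include <linux/list.h>
#include <drm/drm_mm.h>

struct my_obj {
	struct drm_mm_node node;
	struct list_head lru_link;	/* driver LRU of bound objects */
	struct list_head evict_link;	/* temporary roster membership */
};

static int my_evict_for(struct drm_mm *mm, struct list_head *lru,
			u64 size, u64 alignment)
{
	struct drm_mm_scan scan;	/* lives on the stack for the scan */
	struct drm_mm_node *node;
	struct my_obj *obj, *next;
	LIST_HEAD(roster);
	bool found = false;

	drm_mm_scan_init(&scan, mm, size, alignment,
			 0 /* color */, 0 /* flags: default behaviour */);

	/* 1) add eviction candidates, here by walking the driver LRU */
	list_for_each_entry(obj, lru, lru_link) {
		list_add(&obj->evict_link, &roster);
		if (drm_mm_scan_add_block(&scan, &obj->node)) {
			found = true;
			break;
		}
	}

	/*
	 * 2) walk the roster again. Because list_add() prepends, iterating
	 * forward visits the nodes in exactly the reverse order they were
	 * added, as required. Drop everything the scan does not select.
	 */
	list_for_each_entry_safe(obj, next, &roster, evict_link) {
		if (!drm_mm_scan_remove_block(&scan, &obj->node))
			list_del(&obj->evict_link);
	}

	if (!found) {
		/* nothing suitable: drop any leftover roster links */
		list_for_each_entry_safe(obj, next, &roster, evict_link)
			list_del(&obj->evict_link);
		return -ENOSPC;
	}

	/* 3) actually evict the selected objects, freeing their nodes */
	list_for_each_entry_safe(obj, next, &roster, evict_link) {
		drm_mm_remove_node(&obj->node);
		list_del(&obj->evict_link);
	}

	/* 4) evict any overlapping nodes flagged because of color_adjust */
	while ((node = drm_mm_scan_color_evict(&scan)))
		drm_mm_remove_node(node);

	return 0;
}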
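
And a trivial example of the two iterator flavours, e.g. for a debugfs-style
dump. The print format is arbitrary; note that the cursor passed to
drm_mm_for_each_hole() is only used internally and must not be dereferenced:

#include <linux/printk.h>
#include <drm/drm_mm.h>

static void my_dump_mm(struct drm_mm *mm)
{
	struct drm_mm_node *node;
	u64 hole_start, hole_end;

	/* walk every allocated node (not safe against removal) */
	drm_mm_for_each_node(node, mm)
		pr_info("node: %llx + %llx, color %lu\n",
			node->start, node->size, node->color);

	/* walk every hole; @node is only an internal cursor here */
	drm_mm_for_each_hole(node, mm, hole_start, hole_end)
		pr_info("hole: %llx - %llx\n", hole_start, hole_end);
}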


