[Mesa-dev] [PATCH] r600g/compute: Add information about how compute_memory_pool works
Bruno Jiménez
brunojimen at gmail.com
Thu Aug 7 03:22:24 PDT 2014
---
NOTE: if the two patches I have just sent for tracking how buffers are mapped
are good, we may drop the last item from the TODO list.
src/gallium/drivers/r600/compute_memory_pool.c | 47 ++++++++++++++++++++++++++
1 file changed, 47 insertions(+)
diff --git a/src/gallium/drivers/r600/compute_memory_pool.c b/src/gallium/drivers/r600/compute_memory_pool.c
index 0ee8ceb..58f07c0 100644
--- a/src/gallium/drivers/r600/compute_memory_pool.c
+++ b/src/gallium/drivers/r600/compute_memory_pool.c
@@ -22,6 +22,53 @@
* Adam Rak <adam.rak at streamnovation.com>
*/
+/**
+ * \file compute_memory_pool.c
+ * Pool for computation resources
+ *
+ * Summary of how it works:
+ * First, items are added to the pool by compute_memory_alloc. These
+ * items aren't allocated yet, don't have a position in the pool
+ * and are added to the \a unallocated_list.
+ * Later, when these items are actually used in a kernel, they are
+ * promoted to the pool by compute_memory_promote_item. This means
+ * that they are assigned a place somewhere in the pool and are
+ * moved from the \a unallocated_list to the \a item_list.
+ * The process of ensuring that there's enough memory for all the
+ * items, and that everything ends up in its correct place, is done
+ * by compute_memory_finalize_pending. This function first sums the
+ * sizes of all the items that are already in the pool and of those
+ * that will be added. If that total is bigger than the size of the
+ * pool, the pool is grown so that all the items will fit. After
+ * this, if the pool is fragmented (meaning that there may be gaps
+ * between items), it is defragmented. Finally, all the items marked
+ * for promotion are promoted to the pool.
+ * For now, the defragmentation is very simple: it just loops over
+ * all the allocated items computing where each of them should start,
+ * and if an item starts at a higher offset than it should, it is
+ * moved down to close the gap (see the sketch below).
+ *
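+ * In pseudo-C, that compaction pass could look roughly like this
+ * (a sketch only: move_item_down, align and ITEM_ALIGNMENT are
+ * placeholder names for the example, not the actual code):
+ *
+ * \code
+ * int64_t next_start = 0;
+ * LIST_FOR_EACH_ENTRY(item, pool->item_list, link) {
+ *     if (item->start_in_dw > next_start) {
+ *         // There is a gap before this item: copy its contents
+ *         // down to next_start and update its offset.
+ *         move_item_down(pool, item, next_start);
+ *     }
+ *     next_start += align(item->size_in_dw, ITEM_ALIGNMENT);
+ * }
+ * \endcode
+ *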
+ * Mapping buffers is done in an inefficient way, a consequence of
+ * the limitation that buffers can't remain mapped while the pool is
+ * grown, which should only happen at the moment of launching kernels.
+ * When a buffer is going to be mapped, it is first demoted from the
+ * pool, so we can ensure that its position will never change, even
+ * if the pool gets relocated as a result of being grown.
+ * This means that we have to copy the whole buffer to a new
+ * resource before mapping it, as sketched below.
+ * As an example of why we have to do this, imagine this case:
+ * a pool of size 16 contains buffers A, B, C and D (size 4 each).
+ * We map buffer A and launch a kernel that needs a new item E.
+ * As we need to add a new item, the pool will grow, possibly
+ * getting relocated in the process. This means that the mapping
+ * for buffer A won't be valid any more.
+ *
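+ * In pseudo-C, the demotion done before mapping could look roughly
+ * like this (again, the helper names are placeholders for the
+ * example, not the actual code):
+ *
+ * \code
+ * if (item_is_in_pool(item)) {
+ *     // Give the item its own resource outside the pool.
+ *     item->real_buffer = create_standalone_buffer(item->size_in_dw);
+ *     copy_pool_to_buffer(pool, item, item->real_buffer);
+ *     // Its old range becomes a gap and the item leaves item_list.
+ *     remove_item_from_pool(pool, item);
+ * }
+ * // The map now targets real_buffer, whose location stays stable
+ * // even if the pool is grown and relocated later.
+ * \endcode
+ *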
+ * Things still TODO:
+ * - Find a better way of having items mapped when a kernel is launched
+ * - Find a better way of adding items to the pool when it is fragmented
+ * - Decide what to do with '{post,pre}alloc_chunk' now that they aren't used
+ * - Actually be able to remove the 'ITEM_MAPPED_FOR_READING' status
+ */
+
#include "pipe/p_defines.h"
#include "pipe/p_state.h"
#include "pipe/p_context.h"
--
2.0.4