<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Apr 3, 2017 at 7:44 AM, Jason Ekstrand <span dir="ltr"><<a href="mailto:jason@jlekstrand.net" target="_blank">jason@jlekstrand.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="">On Mon, Apr 3, 2017 at 5:02 AM, Juan A. Suarez Romero <span dir="ltr"><<a href="mailto:jasuarez@igalia.com" target="_blank">jasuarez@igalia.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>On Wed, 2017-03-29 at 12:06 -0700, Jason Ekstrand wrote:<br>
> Looking over the patch, I think I've convinced myself that it's correct. (I honestly wasn't expecting to come to that conclusion without more iteration.) That said, this raises some interesting questions. I added Kristian to the Cc in case he has any input.<br>
><br>
> 1. Should we do powers of two or linear? I'm still a fan of powers of two.<br>
<br>
</span>In which specific part do you mean? In free block lists? or in the<br>
allocator_new?<span><br></span></blockquote><div><br></div></span><div>In the state pool, we just round all allocation sizes up to the nearest power-of-two, and then the index into the array of free lists isn't "size - base", it's "ilog2(size) - base".<br></div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>
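The power-of-two bucketing described here could be sketched as follows. This is a hypothetical standalone version; BASE_SIZE and the helper names are illustrative, not taken from the anv source:

```c
#include <assert.h>
#include <stdint.h>

#define BASE_SIZE 4096u   /* assumed smallest block size */

/* Round size up to the next power of two (classic bit-smearing;
 * size must be >= 1). */
static uint32_t
round_to_pow2(uint32_t size)
{
   size--;
   size |= size >> 1;
   size |= size >> 2;
   size |= size >> 4;
   size |= size >> 8;
   size |= size >> 16;
   return size + 1;
}

static uint32_t
ilog2(uint32_t n)
{
   return 31 - (uint32_t)__builtin_clz(n);
}

/* Index into the array of free lists for an allocation of `size`
 * bytes: ilog2(size) - ilog2(base) rather than a linear index. */
static uint32_t
free_list_index(uint32_t size)
{
   if (size < BASE_SIZE)
      size = BASE_SIZE;
   return ilog2(round_to_pow2(size)) - ilog2(BASE_SIZE);
}
```

One payoff: with a 4 KiB base and a 2 MiB maximum, only indices 0 through 9 are ever used, so ten free lists suffice instead of the 256 a linear scheme needs.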
><br>
> 2. Should block pools even have a block size at all? We could just make every block pool allow any power-of-two size from 4 KiB up to, say, 1 MiB and then make the block size part of the state pool or stream that's allocating from it. At the moment, I like this idea, but I've given it very little thought.<br>
><br>
</span>So IIUC, the idea would be the block pool is just a flat chunk of<br>
memory, where we later fetch blocks of memory from, as requested by<br>
applications. Is that correct?<br></blockquote></span></div></div></div></blockquote><div><br></div><div>Thinking about this again, I think your statement may have been more correct than I thought. If we make the state_stream chain off of a state_pool rather than the block_pool, we could make the block pool structure a simple bi-directional growing BO and trust in the state pool for 100% of the re-use. That would probably be a way simpler structure. For that matter, the state pool could just own the block_pool and setup/teardown would be easier.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"></blockquote></span><div>Sorry, but this patch gave me some sudden revelations and things are still in the process of reforming in my brain. In the past, we assumed a two-layered allocation strategy where we had a block pool which was the base and then the state pool and state stream allocators sat on top of it. Originally, the state pool allocator was just for a single size as well.<br><br></div><div>Now that the block pool is going to be capable of allocating multiple sizes, the whole mental model of the separation falls apart. The new future that I see is one where the block pool and state pool aren't separate. Instead, we have a single thing which I'll call state_pool (we have to pick one of the names) which lets you allocate a state of any size from 32B to 2MB. The state stream will then allocate off of a state_pool instead of a block_pool since they're now the same. For smaller states, we still want to allocate reasonably large chunks (probably 4K) so that we ensure that things are nicely aligned. 
I think I've got a pretty good idea of how it should work at this point and can write more if you'd like.<br><br></div><div>Before we dive in and do a pile of refactoring, I think this patch is pretty much good-to-go assuming we fix the power-of-two thing and it fixes a bug so let's focus there. Are you interested in doing the refactoring? If not, that's ok, I'm happy to do it and it wouldn't take me long now that I've gotten a chance to think about it. If you are interested, go for it!<br></div><div><div class="h5"><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="m_6482888752956205086HOEnZb"><div class="m_6482888752956205086h5">
> 3. If we go with the idea in 2, should we still call it block_pool? I think we can keep the name, but it doesn't fit as well as it once did.<br>
><br>
> Thanks for working on this! I'm sorry it's taken so long to respond. Every time I've looked at it, my brain hasn't been in the right state to think about lock-free code. :-/<br>
><br>
> On Wed, Mar 15, 2017 at 5:05 AM, Juan A. Suarez Romero <<a href="mailto:jasuarez@igalia.com" target="_blank">jasuarez@igalia.com</a>> wrote:<br>
> > Current Anv allocator assign memory in terms of a fixed block size.<br>
> ><br>
> > But there can be cases where this block is not enough for a memory<br>
> > request, and thus several blocks must be assigned in a row.<br>
> ><br>
> > This commit adds support for specifying how many blocks of memory must<br>
> > be assigned.<br>
> ><br>
> > This fixes a number of dEQP-VK.pipeline.render_to_image.* tests that crash.<br>
> ><br>
> > v2: lock-free free-list is not handled correctly (Jason)<br>
> > ---<br>
> > src/intel/vulkan/anv_allocator.c | 81 +++++++++++++++++++++++++++-----------<br>
> > src/intel/vulkan/anv_batch_chain.c | 4 +-<br>
> > src/intel/vulkan/anv_private.h | 7 +++-<br>
> > 3 files changed, 66 insertions(+), 26 deletions(-)<br>
> ><br>
> > diff --git a/src/intel/vulkan/anv_allocator.c b/src/intel/vulkan/anv_allocator.c<br>
> > index 45c663b..3924551 100644<br>
> > --- a/src/intel/vulkan/anv_allocator.c<br>
> > +++ b/src/intel/vulkan/anv_allocator.c<br>
> > @@ -257,7 +257,8 @@ anv_block_pool_init(struct anv_block_pool *pool,<br>
> > pool->device = device;<br>
> > anv_bo_init(&pool->bo, 0, 0);<br>
> > pool->block_size = block_size;<br>
> > - pool->free_list = ANV_FREE_LIST_EMPTY;<br>
> > + for (uint32_t i = 0; i < ANV_MAX_BLOCKS; i++)<br>
> > + pool->free_list[i] = ANV_FREE_LIST_EMPTY;<br>
> > pool->back_free_list = ANV_FREE_LIST_EMPTY;<br>
> ><br>
> > pool->fd = memfd_create("block pool", MFD_CLOEXEC);<br>
> > @@ -500,30 +501,35 @@ fail:<br>
> ><br>
> > static uint32_t<br>
> > anv_block_pool_alloc_new(struct anv_block_pool *pool,<br>
> > - struct anv_block_state *pool_state)<br>
> > + struct anv_block_state *pool_state,<br>
> > + uint32_t n_blocks)<br>
><br>
> Maybe have this take a size rather than n_blocks? It's only ever called by stuff in the block pool so the caller can do the multiplication. It would certainly make some of the math below easier.<br>
> <br>
> > {<br>
> > struct anv_block_state state, old, new;<br>
> ><br>
> > while (1) {<br>
> > - state.u64 = __sync_fetch_and_add(&pool_state->u64, pool->block_size);<br>
> > - if (state.next < state.end) {<br>
> > + state.u64 = __sync_fetch_and_add(&pool_state->u64, n_blocks * pool->block_size);<br>
> > + if (state.next > state.end) {<br>
> > + futex_wait(&pool_state->end, state.end);<br>
> > + continue;<br>
> > + } else if ((state.next + (n_blocks - 1) * pool->block_size) < state.end) {<br>
><br>
> First off, please keep the if's in the same order unless we have a reason to re-arrange them. It would make this way easier to review. :-)<br>
><br>
> Second, I think this would be much easier to read as:<br>
><br>
> if (state.next + size <= state.end) {<br>
> /* Success */<br>
> } else if (state.next <= state.end) {<br>
> /* Our block is the one that crosses the line */<br>
> } else {<br>
> /* Wait like everyone else */<br>
> }<br>
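The suggested branch ordering might look like the following simplified, single-threaded sketch. The futex calls are stubbed, pool_grow is an assumed doubling policy, and the names are illustrative; this is not the actual anv implementation:

```c
#include <assert.h>
#include <stdint.h>

struct block_state {
   uint32_t next;
   uint32_t end;
};

/* Stubs standing in for the real futex-based synchronization. */
static void futex_wait(uint32_t *addr, uint32_t val) { (void)addr; (void)val; }
static void futex_wake(uint32_t *addr) { (void)addr; }

/* Assumed growth policy: double the pool, starting at 4 KiB. */
static uint32_t
pool_grow(uint32_t end)
{
   return end == 0 ? 4096 : end * 2;
}

static uint32_t
pool_alloc(struct block_state *state, uint32_t size)
{
   while (1) {
      uint32_t next = __sync_fetch_and_add(&state->next, size);
      if (next + size <= state->end) {
         return next;                  /* success: fully inside the pool */
      } else if (next <= state->end) {
         /* Our allocation is the one that crossed the line: grow the
          * pool, then wake anyone stuck in the wait branch below. */
         while (state->end < next + size)
            state->end = pool_grow(state->end);
         futex_wake(&state->end);
         return next;
      } else {
         /* Someone else is growing the pool; wait and retry. */
         futex_wait(&state->end, state->end);
      }
   }
}
```

Testing the common success case first also matches the expected frequency of the three branches at runtime.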
> <br>
> > assert(pool->map);<br>
> > return state.next;<br>
> > - } else if (state.next == state.end) {<br>
> > - /* We allocated the first block outside the pool, we have to grow it.<br>
> > - * pool_state->next acts a mutex: threads who try to allocate now will<br>
> > - * get block indexes above the current limit and hit futex_wait<br>
> > - * below. */<br>
> > - new.next = state.next + pool->block_size;<br>
> > + } else {<br>
> > + /* We allocated the first blocks outside the pool, we have to grow<br>
> > + * it. pool_state->next acts a mutex: threads who try to allocate<br>
> > + * now will get block indexes above the current limit and hit<br>
> > + * futex_wait below.<br>
> > + */<br>
> > + new.next = state.next + n_blocks * pool->block_size;<br>
> > new.end = anv_block_pool_grow(pool, pool_state);<br>
> > + /* We assume that growing the pool just once is enough to fulfil<br>
> > + * the memory requirements.<br>
> > + */<br>
><br>
> I think this is probably a reasonable assumption. That said, it wouldn't hurt to add a size parameter to block_pool_grow but I don't know that it's needed.<br>
> <br>
> > assert(new.end >= new.next && new.end % pool->block_size == 0);<br>
> > old.u64 = __sync_lock_test_and_set(&pool_state->u64, new.u64);<br>
> > if (old.next != state.next)<br>
> > futex_wake(&pool_state->end, INT_MAX);<br>
> > return state.next;<br>
> > - } else {<br>
> > - futex_wait(&pool_state->end, state.end);<br>
> > - continue;<br>
> > }<br>
> > }<br>
> > }<br>
> > @@ -531,16 +537,38 @@ anv_block_pool_alloc_new(struct anv_block_pool *pool,<br>
> > int32_t<br>
> > anv_block_pool_alloc(struct anv_block_pool *pool)<br>
> > {<br>
> > + return anv_block_pool_alloc_n(pool, 1);<br>
> > +}<br>
> > +<br>
> > +int32_t<br>
> > +anv_block_pool_alloc_n(struct anv_block_pool *pool, uint32_t n_blocks)<br>
> > +{<br>
> > int32_t offset;<br>
> ><br>
> > + assert(n_blocks >= 1 && n_blocks <= ANV_MAX_BLOCKS);<br>
><br>
> The more I look at this, the more I want it to be in powers of 2.<br>
> <br>
> > +<br>
> > /* Try free list first. */<br>
> > - if (anv_free_list_pop(&pool->free_list, &pool->map, &offset)) {<br>
> > + if (anv_free_list_pop(&(pool->free_list[n_blocks - 1]), &pool->map, &offset)) {<br>
> > assert(offset >= 0);<br>
> > assert(pool->map);<br>
> > return offset;<br>
> > }<br>
> ><br>
> > - return anv_block_pool_alloc_new(pool, &pool->state);<br>
> > + /* Try to steal them. */<br>
> > + for (unsigned int i = n_blocks; i < ANV_MAX_BLOCKS; i++) {<br>
> > + if (anv_free_list_pop (&(pool->free_list[i]), &pool->map, &offset)) {<br>
> > + assert(offset >= 0);<br>
> > + assert(pool->map);<br>
> > + /* Just return the blocks we do not require */<br>
> > + int32_t needless_blocks = i + 1 - n_blocks;<br>
> > + int32_t needless_offset = offset + n_blocks * pool->block_size;<br>
> > + anv_free_list_push(&(pool->free_list[needless_blocks - 1]), pool->map, needless_offset);<br>
><br>
> I really like this. That way one-shot giant blocks don't stay around forever when we need piles of little ones. We have no path for defragmentation, but I think that's ok.<br>
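The remainder arithmetic in that stealing loop can be checked in isolation. BLOCK_SIZE and the struct here are illustrative, not from the patch:

```c
#include <assert.h>
#include <stdint.h>

#define BLOCK_SIZE 4096u   /* assumed block size */

struct remainder {
   uint32_t n_blocks;   /* length of the leftover run, in blocks */
   uint32_t offset;     /* byte offset where the leftover run starts */
};

/* After popping a run of i + 1 blocks when only n_blocks were needed,
 * the unused tail goes back on the list for runs of
 * (i + 1 - n_blocks) blocks, starting just past the blocks we keep. */
static struct remainder
split_run(uint32_t i, uint32_t n_blocks, uint32_t offset)
{
   struct remainder r;
   r.n_blocks = i + 1 - n_blocks;
   r.offset = offset + n_blocks * BLOCK_SIZE;
   return r;
}
```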
> <br>
> > + return offset;<br>
> > + }<br>
> > + }<br>
> > +<br>
> > + return anv_block_pool_alloc_new(pool, &pool->state, n_blocks);<br>
> > }<br>
> ><br>
> > /* Allocates a block out of the back of the block pool.<br>
> > @@ -564,7 +592,7 @@ anv_block_pool_alloc_back(struct anv_block_pool *pool)<br>
> > return offset;<br>
> > }<br>
> ><br>
> > - offset = anv_block_pool_alloc_new(pool, &pool->back_state);<br>
> > + offset = anv_block_pool_alloc_new(pool, &pool->back_state, 1);<br>
> ><br>
> > /* The offset we get out of anv_block_pool_alloc_new() is actually the<br>
> > * number of bytes downwards from the middle to the end of the block.<br>
> > @@ -576,12 +604,14 @@ anv_block_pool_alloc_back(struct anv_block_pool *pool)<br>
> > }<br>
> ><br>
> > void<br>
> > -anv_block_pool_free(struct anv_block_pool *pool, int32_t offset)<br>
> > +anv_block_pool_free(struct anv_block_pool *pool, int32_t offset, uint32_t n_blocks)<br>
> > {<br>
> > + assert(n_blocks >= 1 && n_blocks <= ANV_MAX_BLOCKS);<br>
> > +<br>
> > if (offset < 0) {<br>
> > anv_free_list_push(&pool->back_free_list, pool->map, offset);<br>
> > } else {<br>
> > - anv_free_list_push(&pool->free_list, pool->map, offset);<br>
> > + anv_free_list_push(&(pool->free_list[n_blocks - 1]), pool->map, offset);<br>
> > }<br>
> > }<br>
> ><br>
> > @@ -698,6 +728,9 @@ struct anv_state_stream_block {<br>
> > /* The offset into the block pool at which this block starts */<br>
> > uint32_t offset;<br>
> ><br>
> > + /* Blocks allocated */<br>
> > + uint32_t n_blocks;<br>
> > +<br>
> > #ifdef HAVE_VALGRIND<br>
> > /* A pointer to the first user-allocated thing in this block. This is<br>
> > * what valgrind sees as the start of the block.<br>
> > @@ -736,7 +769,7 @@ anv_state_stream_finish(struct anv_state_stream *stream)<br>
> > struct anv_state_stream_block sb = VG_NOACCESS_READ(next);<br>
> > VG(VALGRIND_MEMPOOL_FREE(stream, sb._vg_ptr));<br>
> > VG(VALGRIND_MAKE_MEM_UNDEFINED(next, block_size));<br>
> > - anv_block_pool_free(stream->block_pool, sb.offset);<br>
> > + anv_block_pool_free(stream->block_pool, sb.offset, sb.n_blocks);<br>
> > next = sb.next;<br>
> > }<br>
> ><br>
> > @@ -753,19 +786,23 @@ anv_state_stream_alloc(struct anv_state_stream *stream,<br>
> ><br>
> > state.offset = align_u32(stream->next, alignment);<br>
> > if (state.offset + size > stream->end) {<br>
> > - uint32_t block = anv_block_pool_alloc(stream->block_pool);<br>
> > + uint32_t n_blocks =<br>
> > + DIV_ROUND_UP(state.offset - stream->next + size, stream->block_pool->block_size);<br>
> > + uint32_t block = anv_block_pool_alloc_n(stream->block_pool, n_blocks);<br>
> > +<br>
> > sb = stream->block_pool->map + block;<br>
> ><br>
> > VG(VALGRIND_MAKE_MEM_UNDEFINED(sb, sizeof(*sb)));<br>
> > sb->next = stream->block;<br>
> > sb->offset = block;<br>
> > + sb->n_blocks = n_blocks;<br>
> > VG(sb->_vg_ptr = NULL);<br>
> > - VG(VALGRIND_MAKE_MEM_NOACCESS(sb, stream->block_pool->block_size));<br>
> > + VG(VALGRIND_MAKE_MEM_NOACCESS(sb, n_blocks * stream->block_pool->block_size));<br>
> ><br>
> > stream->block = sb;<br>
> > stream->start = block;<br>
> > stream->next = block + sizeof(*sb);<br>
> > - stream->end = block + stream->block_pool->block_size;<br>
> > + stream->end = block + n_blocks * stream->block_pool->block_size;<br>
> ><br>
> > state.offset = align_u32(stream->next, alignment);<br>
> > assert(state.offset + size <= stream->end);<br>
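The n_blocks computation in this hunk can be exercised standalone. align_u32 and DIV_ROUND_UP are reimplemented here so the sketch is self-contained; alignment is assumed to be a power of two:

```c
#include <assert.h>
#include <stdint.h>

#define DIV_ROUND_UP(a, b) (((a) + (b) - 1) / (b))

/* Round v up to the next multiple of a (a must be a power of two). */
static uint32_t
align_u32(uint32_t v, uint32_t a)
{
   return (v + a - 1) & ~(a - 1);
}

/* The stream must cover the alignment padding (offset - next) plus
 * the request itself, rounded up to whole blocks. */
static uint32_t
blocks_needed(uint32_t next, uint32_t size, uint32_t alignment,
              uint32_t block_size)
{
   uint32_t offset = align_u32(next, alignment);
   return DIV_ROUND_UP(offset - next + size, block_size);
}
```

For example, an 8 KiB request at stream position 100 with 64-byte alignment needs 28 bytes of padding, so it spills into a third 4 KiB block.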
> > diff --git a/src/intel/vulkan/anv_batch_chain.c b/src/intel/vulkan/anv_batch_chain.c<br>
> > index 3f6039e..cc9d9d7 100644<br>
> > --- a/src/intel/vulkan/anv_batch_chain.c<br>
> > +++ b/src/intel/vulkan/anv_batch_chain.c<br>
> > @@ -716,7 +716,7 @@ anv_cmd_buffer_fini_batch_bo_chain(struct anv_cmd_buffer *cmd_buffer)<br>
> > int32_t *bt_block;<br>
> > u_vector_foreach(bt_block, &cmd_buffer->bt_blocks) {<br>
> > anv_block_pool_free(&cmd_buffer->device->surface_state_block_pool,<br>
> > - *bt_block);<br>
> > + *bt_block, 1);<br>
> > }<br>
> > u_vector_finish(&cmd_buffer->bt_blocks);<br>
> ><br>
> > @@ -750,7 +750,7 @@ anv_cmd_buffer_reset_batch_bo_chain(struct anv_cmd_buffer *cmd_buffer)<br>
> > while (u_vector_length(&cmd_buffer->bt_blocks) > 1) {<br>
> > int32_t *bt_block = u_vector_remove(&cmd_buffer->bt_blocks);<br>
> > anv_block_pool_free(&cmd_buffer->device->surface_state_block_pool,<br>
> > - *bt_block);<br>
> > + *bt_block, 1);<br>
> > }<br>
> > assert(u_vector_length(&cmd_buffer->bt_blocks) == 1);<br>
> > cmd_buffer->bt_next = 0;<br>
> > diff --git a/src/intel/vulkan/anv_private.h b/src/intel/vulkan/anv_private.h<br>
> > index 7682bfc..bf92d64 100644<br>
> > --- a/src/intel/vulkan/anv_private.h<br>
> > +++ b/src/intel/vulkan/anv_private.h<br>
> > @@ -339,6 +339,8 @@ struct anv_block_state {<br>
> > };<br>
> > };<br>
> ><br>
> > +#define ANV_MAX_BLOCKS 256<br>
> > +<br>
> > struct anv_block_pool {<br>
> > struct anv_device *device;<br>
> ><br>
> > @@ -370,7 +372,7 @@ struct anv_block_pool {<br>
> ><br>
> > uint32_t block_size;<br>
> ><br>
> > - union anv_free_list free_list;<br>
> > + union anv_free_list free_list[ANV_MAX_BLOCKS];<br>
> > struct anv_block_state state;<br>
> ><br>
> > union anv_free_list back_free_list;<br>
> > @@ -462,8 +464,9 @@ VkResult anv_block_pool_init(struct anv_block_pool *pool,<br>
> > struct anv_device *device, uint32_t block_size);<br>
> > void anv_block_pool_finish(struct anv_block_pool *pool);<br>
> > int32_t anv_block_pool_alloc(struct anv_block_pool *pool);<br>
> > +int32_t anv_block_pool_alloc_n(struct anv_block_pool *pool, uint32_t n_blocks);<br>
> > int32_t anv_block_pool_alloc_back(struct anv_block_pool *pool);<br>
> > -void anv_block_pool_free(struct anv_block_pool *pool, int32_t offset);<br>
> > +void anv_block_pool_free(struct anv_block_pool *pool, int32_t offset, uint32_t n_blocks);<br>
> > void anv_state_pool_init(struct anv_state_pool *pool,<br>
> > struct anv_block_pool *block_pool);<br>
> > void anv_state_pool_finish(struct anv_state_pool *pool);<br>
> > --<br>
> > 2.9.3<br>
> ><br>
> ><br>
><br>
><br>
</div></div></blockquote></div></div></div><br></div></div>
</blockquote></div><br></div></div>