[Mesa-dev] [PATCH 14/17] i965: Grow the batch/state buffers if we need space and can't flush.
Chris Wilson
chris at chris-wilson.co.uk
Wed Sep 6 10:08:44 UTC 2017
Quoting Kenneth Graunke (2017-09-06 01:09:47)
> Previously, we would just assert fail and die in this case. The only
> safeguard is the "estimated max prim size" checks when starting a draw
> (or compute dispatch or BLORP operation)...which are woefully broken.
>
> Growing is fairly straightforward:
>
> 1. Allocate a new larger BO.
> 2. memcpy the existing contents over to the new buffer.
> 3. Set the new BO to the same GTT offset as the old BO. When emitting
> relocations, we write the presumed GTT offset of the target BO. If
> we changed it, we'd have to update all the existing values (by
> walking the relocation list and looking at offsets), which is more
> expensive. With the old BO freed, ideally the kernel could simply
> place the new BO at that offset anyway.
> 4. Update the validation list to contain the new BO.
> 5. Update the relocation list to have the GEM handle for the new BO
> (which we can skip if using I915_EXEC_HANDLE_LUT).
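(An aside on step 3, since that's the subtle part: a relocation records the
*presumed* GTT address of its target, and the value userspace emits into the
batch is that presumed address plus a delta.  Roughly, in terms of the uapi
struct from i915_drm.h -- the variable names here are just illustrative:

   struct drm_i915_gem_relocation_entry reloc = {
      .target_handle   = target->gem_handle,  /* or list index with HANDLE_LUT */
      .offset          = batch_offset,        /* where in the batch the value lives */
      .delta           = delta,
      .presumed_offset = target->gtt_offset,
   };
   /* The dword(s) already emitted at batch_offset hold
    * target->gtt_offset + delta (two dwords on gen8+).
    */

The kernel only rewrites the location at .offset if it ends up placing the
target somewhere other than presumed_offset, so giving the new BO the old
BO's gtt_offset keeps the already-written values, the future ones, and the
relocation entries all consistent.)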
> ---
> src/mesa/drivers/dri/i965/intel_batchbuffer.c | 104 ++++++++++++++++++++++++--
> 1 file changed, 99 insertions(+), 5 deletions(-)
>
> diff --git a/src/mesa/drivers/dri/i965/intel_batchbuffer.c b/src/mesa/drivers/dri/i965/intel_batchbuffer.c
> index 909f56f9792..118f75c4d71 100644
> --- a/src/mesa/drivers/dri/i965/intel_batchbuffer.c
> +++ b/src/mesa/drivers/dri/i965/intel_batchbuffer.c
> @@ -43,6 +43,9 @@
> #define BATCH_SZ (8192*sizeof(uint32_t))
> #define STATE_SZ (8192*sizeof(uint32_t))
>
> +/* Don't exceed this - batchbuffers need to fit in the ring! */
I don't understand this comment. I probably just have the wrong point of
view: you say "ring" and I immediately think of the legacy/LRC ringbuffer
in the kernel.
> +#define MAX_BATCH_SIZE 65536
> +
> static void
> intel_batchbuffer_reset(struct intel_batchbuffer *batch,
> struct brw_bufmgr *bufmgr);
> @@ -228,6 +231,78 @@ intel_batchbuffer_free(struct intel_batchbuffer *batch)
> _mesa_hash_table_destroy(batch->state_batch_sizes, NULL);
> }
>
> +static void
> +replace_bo_in_reloc_list(struct brw_reloc_list *rlist,
> + uint32_t old_handle, uint32_t new_handle)
> +{
> + for (int i = 0; i < rlist->reloc_count; i++) {
> + if (rlist->relocs[i].target_handle == old_handle)
> + rlist->relocs[i].target_handle = new_handle;
> + }
> +}
> +
> +static void
> +grow_buffer(struct brw_context *brw,
> + struct brw_bo **bo_ptr,
> + uint32_t **map_ptr,
> + unsigned existing_bytes,
> + unsigned new_size)
> +{
> + struct intel_batchbuffer *batch = &brw->batch;
> + struct brw_bufmgr *bufmgr = brw->bufmgr;
> +
> + uint32_t *old_map = *map_ptr;
> + struct brw_bo *old_bo = *bo_ptr;
> +
> + struct brw_bo *new_bo = brw_bo_alloc(bufmgr, old_bo->name, new_size, 4096);
> + uint32_t *new_map = brw_bo_map(brw, new_bo, MAP_READ | MAP_WRITE);
> +
> + perf_debug("Growing %s - ran out of space\n", old_bo->name);
> +
> + /* Copy existing data to the new larger buffer */
> + memcpy(new_map, old_map, existing_bytes);
Needs the sse41 treatment.
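i.e. the movntdqa/streaming-load copy we use elsewhere for reading out of
write-combined maps; a plain memcpy from a WC map is painfully slow.
Roughly this (helper name made up, assumes SSE4.1 / -msse4.1 and
16-byte-aligned pointers):

   #include <smmintrin.h>  /* SSE4.1: MOVNTDQA streaming loads */
   #include <string.h>

   static void
   memcpy_from_wc_sse41(void *dst, const void *src, size_t bytes)
   {
      __m128i *d = dst;
      __m128i *s = (__m128i *)src;  /* _mm_stream_load_si128 takes non-const */

      /* MOVNTDQA fetches a whole cacheline into a streaming buffer, so
       * consecutive 16-byte loads don't each pay the uncached-read cost.
       */
      while (bytes >= 16) {
         _mm_store_si128(d++, _mm_stream_load_si128(s++));
         bytes -= 16;
      }
      if (bytes)
         memcpy(d, s, bytes);
   }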
> +
> + /* Try to put the new BO at the same GTT offset as the old BO (which
> + * we're throwing away, so it doesn't need to be there).
> + *
> + * This guarantees that our relocations continue to work: values we've
> + * already written into the buffer, values we're going to write into the
> + * buffer, and the validation/relocation lists all will match.
> + */
> + new_bo->gtt_offset = old_bo->gtt_offset;
> + new_bo->index = old_bo->index;
> +
> + /* Batch/state buffers are per-context, and if we've run out of space,
> + * we must have actually used them before, so...they will be in the list.
> + */
> + assert(old_bo->index < batch->exec_count);
> + assert(batch->exec_bos[old_bo->index] == old_bo);
> +
> + /* Update the validation list to use the new BO. */
> + batch->exec_bos[old_bo->index] = new_bo;
> + batch->validation_list[old_bo->index].handle = new_bo->gem_handle;
Nice touch.
> + brw_bo_reference(new_bo);
> + brw_bo_unreference(old_bo);
> +
> + if (!batch->use_batch_first) {
> + /* We're not using I915_EXEC_HANDLE_LUT, which means we need to go
> + * update the relocation list entries to point at the new BO as well.
> + * (With newer kernels, the "handle" is an offset into the validation
> + * list, which remains unchanged, so we can skip this.)
> + */
> + replace_bo_in_reloc_list(&batch->batch_relocs,
> + old_bo->gem_handle, new_bo->gem_handle);
> + replace_bo_in_reloc_list(&batch->state_relocs,
> + old_bo->gem_handle, new_bo->gem_handle);
> + }
> +
> + /* Drop the *bo_ptr reference. This should free the old BO. */
> + brw_bo_unreference(old_bo);
Ok, I just did a double take even with the comment in place.
> + *bo_ptr = new_bo;
> + *map_ptr = new_map;
> +}
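The hunk that actually calls grow_buffer() isn't quoted above; I'm assuming
it's intel_batchbuffer_require_space() growing instead of wrapping when
brw->no_batch_wrap is set.  Roughly this shape (my sketch of the idea, not
the actual hunk):

   static void
   require_space_sketch(struct brw_context *brw, unsigned sz)
   {
      struct intel_batchbuffer *batch = &brw->batch;
      const unsigned batch_used = USED_BATCH(*batch) * 4;

      if (batch_used + sz >= batch->bo->size) {
         if (!brw->no_batch_wrap) {
            /* We're at a safe point: flush and start a fresh batch. */
            intel_batchbuffer_flush(brw);
         } else {
            /* Mid-draw, so we can't flush: grow in place, capped at
             * MAX_BATCH_SIZE so the batch still fits.
             */
            const unsigned new_size =
               MIN2(batch->bo->size + batch->bo->size / 2, MAX_BATCH_SIZE);
            grow_buffer(brw, &batch->bo, &batch->map, batch_used, new_size);
            /* map_next pointed into the old mapping; rebase it. */
            batch->map_next = batch->map + batch_used / 4;
            assert(batch_used + sz < batch->bo->size);
         }
      }
   }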