[Mesa-dev] Batch buffer sizes, flushing questions

Paul Berry stereotype441 at gmail.com
Wed Oct 30 20:39:07 CET 2013

On 30 October 2013 11:55, Rogovin, Kevin <kevin.rogovin at intel.com> wrote:

> Hello all,
>   I've got some questions and I would appreciate if anyone could help me
> out. Here goes:
> I've been digging through brw_try_draw_prims(), and trying to figure out
> what it is doing, so far this is what I see:
>  1) it is essentially called each time a non-degenerate "real drawing"
> glDrawFoo() is called
>  2) it appends to the current batch buffer the commands the GPU needs to
> execute the draw call. State changes are uploaded by brw_upload_state(),
> which essentially walks the state atoms.
>  3) if the batch buffer gets full enough, it is flushed.
> Now bits that confuse me:
> 1) When I look at intel_batch_buffer_flush() I see that it adds a marker,
> MI_BATCH_BUFFER_END (and possibly a no-op marker to keep the size even), and
> then makes the DRM call drm_intel_bo_subdata() and then unreferences the
> upload data. What I do not understand is where/how the signal to the kernel
> is made to say the buffer should be processed. Is it just by uploading the
> data?

No.  do_flush_locked() (which is called by intel_batch_buffer_flush())
follows that by calling either drm_intel_bo_mrb_exec() or
drm_intel_gem_bo_context_exec().  That's what causes the batch to be queued
for execution.

> 2) I admit that I have not gone through the VBO module super fine-like,
> but when and where is nr_prims not one? The calls I have looked at have
> that value being one. What calls, if any, have that argument not as one?

nr_prims is sometimes != 1 when the client is using the legacy
glBegin()/glEnd() technique to emit primitives; the vbo module can
accumulate several primitives into a single draw call before handing them
to the driver.  I don't recall the exact circumstances that cause it to
happen.


> 3) It appears that a batch buffer gets flushed if blorp is used, glFlush
> is called, or if it gets too full. Is there any heuristic that flushes
> early when a command looks heavy, so that the GPU command queue usually
> stays reasonably full?

Not that I'm aware of.  My intuition is that since GL apps typically do a
very large number of small-ish draw calls, this wouldn't be beneficial most
of the time, and it would be tricky to tune the heuristics to make it
effective in the rare circumstances where it mattered without sacrificing
performance elsewhere.

> 4) Going further, is there any mechanism (and if so what is it) to say a
> batch has made its way through the gfx pipeline? Going further, fine
> details of making its way through. For example, if a memory region, say for
> attributes, is no longer read so it can be modified. I am looking at
> glBufferSubData and glTexSubImage calls on buffer and texture objects used
> by previous draw calls.

drm_intel_bo_busy() will tell you whether a buffer object is still being
used by the GPU.  Also, calling drm_intel_bo_map() on a buffer will cause
the CPU to wait until the GPU is done with the buffer.  (In the rare cases
where we want to map a buffer object without waiting for the GPU, we use
libdrm's unsynchronized mapping path.)
> Also, what is the tangle for seeing if a query is ready? For example, an
> application can ask if a query is ready and if it is then get the value
> otherwise not ask for the value. Doing so can avoid a pipeline flush.

We implement this using drm_intel_bo_busy() to see whether the GPU has
finished using the buffer object containing the query result.  See the
implementation of gen6_check_query() for example.