[Mesa-dev] [PATCH 4/4] i965: Use sendc for all render target writes on Gen6+.
Kenneth Graunke
kenneth at whitecape.org
Wed Jul 25 10:24:57 PDT 2012
On 07/25/2012 07:20 AM, Paul Berry wrote:
> The sendc instruction causes the fragment shader thread to wait for
> any dependent threads (i.e. threads rendering to overlapping pixels)
> to complete before sending the message. We need to use sendc on the
> first render target write in order to guarantee that fragment shader
> outputs are written to the render target in the correct order.
>
> Previously, we only used the "sendc" instruction when writing to
> binding table index 0. This did the right thing for fragment shaders,
> because our fragment shader back-ends always issue their first render
> target write to binding table index 0. However, it did the wrong
> thing for blorp, which performs its render target writes to binding
> table index 1.
>
> A more robust solution is to use sendc for all render target writes.
> This should not produce any performance penalty, since after the first
> sendc, all of the dependent threads will have completed.
>
> For more information about sendc, see the Ivy Bridge PRM, Vol4 Part3
> p218 (sendc - Conditional Send Message), and p54 (TDR Registers).
> ---
> src/mesa/drivers/dri/i965/brw_eu_emit.c | 2 +-
> 1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/src/mesa/drivers/dri/i965/brw_eu_emit.c b/src/mesa/drivers/dri/i965/brw_eu_emit.c
> index 93e84ae..25bf91b 100644
> --- a/src/mesa/drivers/dri/i965/brw_eu_emit.c
> +++ b/src/mesa/drivers/dri/i965/brw_eu_emit.c
> @@ -2259,7 +2259,7 @@ void brw_fb_WRITE(struct brw_compile *p,
>     else
>        dest = retype(vec8(brw_null_reg()), BRW_REGISTER_TYPE_UW);
> 
> -   if (intel->gen >= 6 && binding_table_index == 0) {
> +   if (intel->gen >= 6) {
>        insn = next_insn(p, BRW_OPCODE_SENDC);
>     } else {
>        insn = next_insn(p, BRW_OPCODE_SEND);
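For readers skimming the series: the opcode selection after this change boils down
to "Gen6+ always uses sendc for framebuffer writes, earlier gens use plain send".
A minimal standalone sketch of that decision follows; the enum and function names
here are illustrative stand-ins, not the actual Mesa code (the real logic lives in
brw_fb_WRITE and uses intel->gen and BRW_OPCODE_SEND/SENDC).

/* Illustrative sketch only -- names are hypothetical, not Mesa's. */
#include <stdio.h>

enum fb_write_opcode { OPCODE_SEND, OPCODE_SENDC };

/* On Gen6+ every render target write uses sendc, so the first write in a
 * fragment shader thread waits for dependent threads (ones shading
 * overlapping pixels) to finish; earlier generations only use plain send. */
static enum fb_write_opcode
pick_fb_write_opcode(int gen)
{
   return (gen >= 6) ? OPCODE_SENDC : OPCODE_SEND;
}

int
main(void)
{
   printf("gen 5: %s\n", pick_fb_write_opcode(5) == OPCODE_SENDC ? "sendc" : "send");
   printf("gen 7: %s\n", pick_fb_write_opcode(7) == OPCODE_SENDC ? "sendc" : "send");
   return 0;
}

Before the patch the condition also required binding_table_index == 0, which is
what broke blorp's writes to binding table index 1.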
For the series (pithy comments aside):
Reviewed-by: Kenneth Graunke <kenneth at whitecape.org>
Please wait for Eric's ack on patch 4 before pushing that, though.