[Mesa-dev] [PATCH v2 22/37] panfrost: Extend the panfrost_batch_add_bo() API to pass access flags

Alyssa Rosenzweig alyssa at rosenzweig.io
Mon Sep 16 13:59:15 UTC 2019


PAN_BO_GPU_ACCESS_* is rather wordy. We're a GPU driver; of course it's
GPU access :)

Could we just do PAN_BO_ACCESS_* instead?

>  static mali_ptr
>  panfrost_upload_tex(
>          struct panfrost_context *ctx,
> +        enum pipe_shader_type st,
>          struct panfrost_sampler_view *view)
>  {
>          if (!view)
> @@ -610,7 +611,11 @@ panfrost_upload_tex(
>  
>          /* Add the BO to the job so it's retained until the job is done. */
>          struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
> -        panfrost_batch_add_bo(batch, rsrc->bo);
> +        panfrost_batch_add_bo(batch, rsrc->bo,
> +                              PAN_BO_GPU_ACCESS_SHARED | PAN_BO_GPU_ACCESS_READ |
> +                              PAN_BO_GPU_ACCESS_VERTEX_TILER |
> +                              (st == PIPE_SHADER_FRAGMENT ?
> +                               PAN_BO_GPU_ACCESS_FRAGMENT : 0));

I'm not sure this is quite right... should it maybe be:

        (st == PIPE_SHADER_FRAGMENT ? PAN_BO_ACCESS_FRAGMENT :
                                      PAN_BO_ACCESS_VERTEX_TILER)

I.e., if it's accessed from the fragment shader, is it necessarily
needed for the vertex/tiler part?
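
I.e., the whole call would then read something like (just a sketch,
using the shorter PAN_BO_ACCESS_* names suggested above):

        panfrost_batch_add_bo(batch, rsrc->bo,
                              PAN_BO_ACCESS_SHARED |
                              PAN_BO_ACCESS_READ |
                              (st == PIPE_SHADER_FRAGMENT ?
                               PAN_BO_ACCESS_FRAGMENT :
                               PAN_BO_ACCESS_VERTEX_TILER));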

> -        panfrost_batch_add_bo(batch, bo);
> +        panfrost_batch_add_bo(batch, bo,
> +                              PAN_BO_GPU_ACCESS_SHARED | PAN_BO_GPU_ACCESS_RW |
> +                              PAN_BO_GPU_ACCESS_VERTEX_TILER |
> +                              (st == PIPE_SHADER_FRAGMENT ?
> +                               PAN_BO_GPU_ACCESS_FRAGMENT : 0));

Ditto. We should maybe have a `pan_bo_access_for_stage(enum
pipe_shader_type)` helper to abstract this logic.
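
Something like this, just as a strawman (again assuming the shorter
PAN_BO_ACCESS_* names from above; exact return type and naming are up
to you):

        static inline uint32_t
        pan_bo_access_for_stage(enum pipe_shader_type stage)
        {
                /* Fragment shaders run on the FRAGMENT job chain;
                 * everything else goes through the vertex/tiler chain. */
                return stage == PIPE_SHADER_FRAGMENT ?
                       PAN_BO_ACCESS_FRAGMENT :
                       PAN_BO_ACCESS_VERTEX_TILER;
        }

Then the call sites collapse to PAN_BO_ACCESS_SHARED |
PAN_BO_ACCESS_READ | pan_bo_access_for_stage(st) (or _RW instead of
_READ where we also write).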

> @@ -803,7 +813,12 @@ panfrost_map_constant_buffer_gpu(
>          struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
>  
>          if (rsrc) {
> -                panfrost_batch_add_bo(batch, rsrc->bo);
> +                panfrost_batch_add_bo(batch, rsrc->bo,
> +                                      PAN_BO_GPU_ACCESS_SHARED |
> +                                      PAN_BO_GPU_ACCESS_READ |
> +                                      PAN_BO_GPU_ACCESS_VERTEX_TILER |
> +                                      (st == PIPE_SHADER_FRAGMENT ?
> +                                       PAN_BO_GPU_ACCESS_FRAGMENT : 0));

Ditto.

>          if (!info->has_user_indices) {
>                  /* Only resources can be directly mapped */
> -                panfrost_batch_add_bo(batch, rsrc->bo);
> +                panfrost_batch_add_bo(batch, rsrc->bo,
> +                                      PAN_BO_GPU_ACCESS_FRAGMENT);
>                  return rsrc->bo->gpu + offset;

The index buffer is used to determine geometry, so it is definitely
accessed from the vertex/tiler chain.

I'm not sure if it's also accessed by the fragment chain. Also, should
this have ACCESS_SHARED | ACCESS_READ to be consistent with the others?
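
I.e., I'd expect something more like this (sketch, shorter names again;
plus FRAGMENT if the fragment chain really does read indices):

                panfrost_batch_add_bo(batch, rsrc->bo,
                                      PAN_BO_ACCESS_SHARED |
                                      PAN_BO_ACCESS_READ |
                                      PAN_BO_ACCESS_VERTEX_TILER);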

> @@ -69,7 +69,10 @@ panfrost_emit_streamout(
>          /* Grab the BO and bind it to the batch */
>          struct panfrost_batch *batch = panfrost_get_batch_for_fbo(ctx);
>          struct panfrost_bo *bo = pan_resource(target->buffer)->bo;
> -        panfrost_batch_add_bo(batch, bo);
> +        panfrost_batch_add_bo(batch, bo,
> +                              PAN_BO_GPU_ACCESS_SHARED |
> +                              PAN_BO_GPU_ACCESS_WRITE |
> +                              PAN_BO_GPU_ACCESS_VERTEX_TILER);

We operate somewhat like:

[ Vertices ] -- vertex shader --> [ Varyings ] -- tiler --> [ Geometry ]

So varyings are WRITE from the perspective of the VERTEX but READ from
the perspective of the TILER and FRAGMENT.

Now, this hunk is for streamout. However, streamout does not imply
rasterizer discard, so it is legal to have streamout and also render
that geometry with a FRAGMENT job. Hence it's premature to drop the
READ and FRAGMENT flags (doing so will presumably regress a bunch of
dEQP-GLES3 streamout tests).
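
In other words, until we track rasterizer discard properly, I'd keep
this conservative, something like (sketch, shorter names again):

        panfrost_batch_add_bo(batch, bo,
                              PAN_BO_ACCESS_SHARED |
                              PAN_BO_ACCESS_RW |
                              PAN_BO_ACCESS_VERTEX_TILER |
                              PAN_BO_ACCESS_FRAGMENT);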

