[Mesa-dev] GBM YUV planar support

Rob Clark robdclark at gmail.com
Thu Jun 2 18:31:37 UTC 2016


On Thu, Jun 2, 2016 at 1:51 PM, Rob Herring <robh at kernel.org> wrote:
> As discussed on irc yesterday, I've been looking at adding YUV planar
> (YV12) to GBM as it is a requirement for Android gralloc. It is
> possible to disable the requirement as that is what the Android
> emulator and android-x86 do. But that results in un-optimized s/w CSC.
>
> Given most/all Android targeted h/w can support YUV overlays (except
> virgl?), I only need to allocate, map, and import (to GBM only)
> buffers. The outputting of YUV would remain the responsibility of HWC
> (also a missing feature in drm_hwcomposer), and the gpu never touches
> the YUV buffers.
>
> With that, I see a couple of options:
>
> For allocation, at some level we need to translate to a single buffer
> perhaps using R8 format. This could be done in gralloc, GBM, gallium
> ST, or individual drivers. Also, somewhere we'd have to adjust the
> stride or height. I don't know whether assumptions like the U or V
> stride being half the Y stride are acceptable. Trying to propagate
> per-plane strides and offsets all the way down to the drivers looks
> difficult.
>
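To make the stride/height adjustment being discussed concrete, here is a rough sketch (not mesa code, just the arithmetic) of viewing a YV12 image as a single R8 allocation, under the common assumption that the chroma stride is half the luma stride and width/height are even:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: describe a YV12 buffer as one R8 allocation.
 * YV12 is three planes: full-res Y, then quarter-res V, then U.
 * Assumes chroma stride = y_stride / 2 and even width/height. */
struct yv12_layout {
    uint32_t y_stride, c_stride;
    size_t   y_offset, v_offset, u_offset;
    size_t   total_size;  /* size of the single underlying buffer */
    uint32_t r8_height;   /* height to request for an R8 alloc of pitch y_stride */
};

static struct yv12_layout yv12_layout(uint32_t width, uint32_t height)
{
    struct yv12_layout l;
    l.y_stride   = width;  /* a real driver would align this */
    l.c_stride   = l.y_stride / 2;
    l.y_offset   = 0;
    l.v_offset   = (size_t)l.y_stride * height;
    l.u_offset   = l.v_offset + (size_t)l.c_stride * (height / 2);
    l.total_size = l.u_offset + (size_t)l.c_stride * (height / 2);
    /* One R8 buffer of pitch y_stride needs height * 3 / 2 rows. */
    l.r8_height  = height * 3 / 2;
    return l;
}
```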
> Then for importing, we can translate the planes to R8/GR88 and use the
> import support Stanimir is doing[1]. Again, the question is at what
> level to do this: either gralloc or GBM? The complicating factor here
> is I don't think we want to end up with 2 GBM BOs. So maybe GBM BOs
> need to support multiple DRIImages? However, it seems that we're
> creating two ways to import planar buffers: either as a single
> DRIImage with planes (as i965 does) or a DRIImage per plane.
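The "translate the planes to R8/GR88" idea can be sketched as follows; this is a hypothetical illustration for NV12 (the case where GR88 applies), not the actual patch: the same dma-buf would be imported once per plane, with per-plane format, size, stride, and offset. The fourccs below are the DRM R8/GR88 codes.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define FOURCC(a, b, c, d) \
    ((uint32_t)(a) | ((uint32_t)(b) << 8) | \
     ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))
#define FMT_R8   FOURCC('R', '8', ' ', ' ')  /* DRM_FORMAT_R8 */
#define FMT_GR88 FOURCC('G', 'R', '8', '8')  /* DRM_FORMAT_GR88 */

struct plane_import {
    uint32_t fourcc;
    uint32_t width, height, stride;
    size_t   offset;
};

/* Split an NV12 buffer (WxH, luma stride y_stride) into the two
 * single-plane imports a non-planar-aware path could handle. */
static void nv12_as_r8_gr88(uint32_t width, uint32_t height,
                            uint32_t y_stride, struct plane_import out[2])
{
    /* Plane 0: full-resolution luma, one byte per pixel -> R8. */
    out[0] = (struct plane_import){ FMT_R8, width, height, y_stride, 0 };
    /* Plane 1: interleaved CbCr at half resolution, two bytes per
     * sample -> GR88.  Same byte stride as the luma plane for NV12. */
    out[1] = (struct plane_import){ FMT_GR88, width / 2, height / 2,
                                    y_stride, (size_t)y_stride * height };
}
```

For YV12 itself the U and V planes are separate, so each of the three planes would simply be imported as R8.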

hmm, 2 gbm bo's (and 2 DRIImages) seem kind of ideal if you actually
did want to use them w/ gl (as two textures, doing CSC in a shader)..
although I'm not sure to what extent exposing it in GL as two textures
breaks the android world.

AFAIU 99% of the time, in practice, android puts YUV on the screen via
overlay, but I'd be curious to see, for example, what the shaders used
for window transitions look like.  Depending on how feasible it is to
change android to use R8+GR88 plus a frag shader that does the CSC, we
might end up with no choice but to add direct support for YUV..
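For reference, the CSC such a frag shader would apply when sampling an R8 luma texture and a GR88 chroma texture is just the BT.601 matrix. A C sketch of the per-pixel math (limited-range, the usual video case; the integer coefficients are a standard fixed-point approximation):

```c
#include <assert.h>
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
}

/* BT.601 limited-range YCbCr -> RGB, the same matrix a frag shader
 * sampling Y from an R8 texture and CbCr from a GR88 texture would
 * use, here as an 8.8 fixed-point integer approximation. */
static void yuv_to_rgb_bt601(uint8_t y, uint8_t u, uint8_t v,
                             uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = y - 16, d = u - 128, e = v - 128;
    *r = clamp_u8((298 * c + 409 * e + 128) >> 8);
    *g = clamp_u8((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp_u8((298 * c + 516 * d + 128) >> 8);
}
```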

> Another option is make gralloc open both render and card nodes using
> the card GBM device to just allocate dumb buffers for YUV buffers.
> This complicates gralloc a bit and the answer is always don't use dumb
> buffers. :) However, the assumption here is the buffers are just
> scanout buffers.
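For what it's worth, the dumb-buffer option reduces to the same 1.5x-height trick: a single 8bpp dumb buffer sized for the whole YV12 image. A hypothetical sketch of the request setup (the CREATE_DUMB ioctl itself is omitted):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical: the width/height/bpp gralloc would put in a
 * DRM_IOCTL_MODE_CREATE_DUMB request to get one dumb buffer big
 * enough for a WxH YV12 image. */
struct dumb_req { uint32_t width, height, bpp; };

static struct dumb_req yv12_dumb_request(uint32_t width, uint32_t height)
{
    struct dumb_req req = {
        .width  = width,
        .height = height * 3 / 2, /* Y plane plus both chroma planes */
        .bpp    = 8,              /* one byte per "pixel" */
    };
    return req;
}
```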

Note fwiw, what we were doing on linux was actually allocating from
the v4l viddec device (and importing into mesa).  In fact I remember
there were some problems going in the other direction.. some specific
pitch requirements for the UV plane, or something like that.  And also
some really strange corruption (not sure if v4l dmabuf export is
broken when the device has an iommu??  maybe it was giving the dmabuf
importer device page addresses instead of physical addresses??)

Possibly that is an argument for actually allocating video decode
buffers from the v4l device directly?

BR,
-R

> Any feedback on direction would be helpful.
>
> Rob
>
> [1] https://lists.freedesktop.org/archives/mesa-dev/2016-May/117528.html

