[PATCH] drm: add FOURCC formats for compute dma_buf interop.

Daniel Vetter daniel at ffwll.ch
Wed Mar 19 03:31:10 PDT 2014


On Wed, Mar 19, 2014 at 7:30 AM, Gwenole Beauchesne <gb.devel at gmail.com> wrote:
> 2014-03-15 12:28 GMT+01:00 Daniel Vetter <daniel at ffwll.ch>:
>> On Sat, Mar 15, 2014 at 05:41:05AM +0100, Gwenole Beauchesne wrote:
>>> Hi,
>>>
>>> 2014-03-14 22:52 GMT+01:00 Daniel Vetter <daniel at ffwll.ch>:
>>> > On Fri, Mar 14, 2014 at 06:59:21PM +0100, Gwenole Beauchesne wrote:
>>> >> This is a follow-up to:
>>> >> http://lists.freedesktop.org/archives/mesa-dev/2014-March/055742.html
>>> >>
>>> >> Add formats meant to convey a "compute" dimension when a DRM fourcc
>>> >> format is needed for dma_buf interop (EGL, OpenCL).
>>> >>
>>> >> Intended FOURCC format: ('T',<num_elements>,<type>,<size_element>).
>>> >> - <num_elements>: number of elements in the tuple. Range: [0..3].
>>> >> - <type>: type of each element. Values: {'_','I','U','F'},
>>> >>   respectively: normalized to the [0..1] range, signed integer,
>>> >>   unsigned integer, floating-point.
>>> >> - <size_element>: size of each element, in bytes. Values: {1, 2, 4, 8}.
>>> >>
>>> >> All entities are represented in native-endian byte order in memory.
>>> >> For example, the 'T2F4' format would represent a (float, float)
>>> >> tuple whose elements are stored in little-endian byte order on x86.
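>>> >>
>>> >> As a minimal sketch, such codes could be spelled with the existing
>>> >> fourcc_code() macro from drm_fourcc.h; the DRM_FORMAT_T* names
>>> >> below are purely illustrative and don't exist anywhere:
>>> >>
>>> >>     #include <drm/drm_fourcc.h>
>>> >>
>>> >>     /* (float, float) tuple, native endian */
>>> >>     #define DRM_FORMAT_T2F4 fourcc_code('T', '2', 'F', '4')
>>> >>     /* (u32, u32, u32) tuple */
>>> >>     #define DRM_FORMAT_T3U4 fourcc_code('T', '3', 'U', '4')
>>> >>     /* single byte, normalized to [0..1] */
>>> >>     #define DRM_FORMAT_T1_1 fourcc_code('T', '1', '_', '1')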
>>> >>
>>> >> Signed-off-by: Gwenole Beauchesne <gwenole.beauchesne at intel.com>
>>> >
>>> > So this fourcc stuff started out as pixel formats for the ADDFB2 ioctl,
>>> > i.e. stuff that can be displayed. Hence my first question: How can we
>>> > display these bastards here?
>>>
>>> It's not meant to be displayed. The idea was that maybe we could
>>> detect the fourcc (e.g. (fourcc & 0xff) == 'T') and reject the
>>> buffer accordingly when it is submitted to one of the display ioctls?
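>>>
>>> A minimal sketch of that rejection check, with illustrative names
>>> (this is not actual kernel code):
>>>
>>>     /* the 'T' first byte marks the hypothetical compute namespace */
>>>     static bool drm_format_is_compute_only(u32 fourcc)
>>>     {
>>>             return (fourcc & 0xff) == 'T';
>>>     }
>>>
>>>     /* e.g. when handling drm_mode_fb_cmd2 (ADDFB2): */
>>>     if (drm_format_is_compute_only(r->pixel_format))
>>>             return -EINVAL; /* not displayable */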
>>
>> Well we already have explicit lists for all that, so no need to add a pile
>> of formats just to be able to reject them ;-)
>>
>>> > So given all this: Why do we need this in the kernel's header? It sounds
>>> > like purely a userspace business with no need at all to enshrine this
>>> > in kernel ABI headers. Note that e.g. the mesa import/export interfaces
>>> > have their own fourcc #defines for exactly this reason.
>>>
>>> I could have stuffed everything in gbm.h for instance, or another new
>>> header, but the EXT_image_dma_buf_import extension actually mentions
>>> drm_fourcc.h. So that's why I finally put the definitions there. :)
>>
>> That's a bit unfortunate ... But the spec also clearly states "as used by
>> the drm_mode_fb_cmd2 ioctl". And imo we can't make a case that this is
>> true.
>>
>>> What would be the better place? Can we make the userspace libdrm
>>> version of the file live totally unsynchronized from the kernel
>>> headers then?
>>
>> I think the right approach would be to reference the specification and
>> also allow other fourcc codes for non-display-related buffer layouts,
>> maybe in a different namespace. This entire fourcc stuff was only done
>> since it was a somewhat ill-defined "standard" for all kinds of YUV
>> buffers. We've already had to massively pimp it to make it work
>> properly with RGB.
>>
>> Pimping this even further to support all kinds of random compute/GL
>> buffers sounds ill-advised. I think we need to steal a new namespace
>> from some existing standard for all this; maybe OpenGL or OpenCL has
>> something? Of course that means creating a new dma_buf_import
>> attribute for eglCreateImageKHR which could be used instead of
>> EGL_LINUX_DRM_FOURCC_EXT.
>
> Thinking further about it, the patch is totally useless. It should be
> enough to amend the EXT_image_dma_buf_import spec to allow using the
> GL format/type enums instead... Now, what are the chances of having
> the required changes appear in a new revision of the spec? :)
>
> I'd suggest the following scheme:
> - If the EGL_LINUX_DRM_FOURCC_EXT attribute is 0 (zero), then a GL
> format/type pair needs to be supplied in the <attrib_list>.
> - Accepted as attributes in <attrib_list> would be EGL_GL_FORMAT_EXT
> / EGL_GL_TYPE_EXT, where the EGL_GL_FORMAT_EXT attribute value would
> be GL_{RED,RG,RGB,RGBA} as usual, and the EGL_GL_TYPE_EXT value would
> be GL_{{UNSIGNED_,}{BYTE,SHORT,INT},{HALF_,}FLOAT} (see the usage
> sketch below).
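>
> As a minimal sketch, assuming the proposed (non-existent)
> EGL_GL_FORMAT_EXT / EGL_GL_TYPE_EXT tokens, and given an EGLDisplay
> plus a dma-buf fd, importing a two-component float buffer could look
> like this:
>
>     EGLint attribs[] = {
>         EGL_WIDTH,                     width,
>         EGL_HEIGHT,                    height,
>         EGL_LINUX_DRM_FOURCC_EXT,      0,        /* no display fourcc */
>         EGL_GL_FORMAT_EXT,             GL_RG,    /* proposed */
>         EGL_GL_TYPE_EXT,               GL_FLOAT, /* proposed */
>         EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,
>         EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
>         EGL_DMA_BUF_PLANE0_PITCH_EXT,  pitch,
>         EGL_NONE
>     };
>     EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
>                                           EGL_LINUX_DMA_BUF_EXT,
>                                           NULL, attribs);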
>
> An alternative would be to provide a single "internal format" token,
> but this would complicate implementations' sanity checks (overly
> large switch cases).
>
> WDYT?

Sounds sane imo, but my useful knowledge drops off a steep cliff as
soon as we leave the kernel. I'm pulling in the original authors of
the spec. Maybe we can just add this as a revision/clarification ...
Iirc Jesse Barker isn't at Linaro any more, but I don't have his
current email address.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

