[Bug 697112] Revamp GstSurfaceMeta

GStreamer (bugzilla.gnome.org) bugzilla at gnome.org
Wed Apr 10 10:09:15 PDT 2013


https://bugzilla.gnome.org/show_bug.cgi?id=697112
  GStreamer | gst-plugins-base | git

Gwenole Beauchesne <gb.devel> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Priority|Normal                      |High
             Status|NEW                         |RESOLVED
         Resolution|                            |INCOMPLETE
           Severity|blocker                     |critical

--- Comment #6 from Gwenole Beauchesne <gb.devel at gmail.com> 2013-04-10 17:09:11 UTC ---
Some background on my request, which actually comes from other projects like
clutter-gst or xbmc. Basically, they want a way to expose each individual
plane as a separate texture, and actually as an EGLImage, so that decoded
frames fit easily into an existing rendering pipeline, e.g. the one you
would already have for SW-decoded frames.

The original concern was that OES_EGL_image_external leaves black magic to
the driver, and the extension knows nothing about the underlying structure
(progressive, top/bottom-field), the color conversion matrix, etc. Sure,
there are extensions to that... extension (e.g. from TI), but nothing
standardized yet, AFAIK.

So, the idea is to negotiate capabilities from both the provider and the
consumer. The provider (decoder) may have hard constraints, e.g. it can only
emit NV12, and you can't make it render into "foreign" storage. This means
that the provider is responsible for allocating the storage in that case.
The consumer (renderer) may also have additional constraints or
capabilities, depending on the underlying hardware.
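
To make this a bit more concrete, here is a rough sketch of how the two
sets of constraints could be reconciled with a plain GstCaps intersection.
This is only an illustration of the negotiation idea, not the API I am
proposing; the concrete formats and sizes are made up:

/* Sketch only: reconcile provider (decoder) and consumer (renderer)
 * constraints by intersecting their caps. */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstCaps *provider_caps, *consumer_caps, *result;
  gchar *str;

  gst_init (&argc, &argv);

  /* The decoder can only emit NV12, into storage it allocates itself. */
  provider_caps = gst_caps_new_simple ("video/x-raw",
      "format", G_TYPE_STRING, "NV12",
      "width", G_TYPE_INT, 1920, "height", G_TYPE_INT, 1080, NULL);

  /* The renderer accepts whatever its hardware can sample from. */
  consumer_caps = gst_caps_from_string (
      "video/x-raw, format = (string) { NV12, I420 }");

  /* The usable configurations are the intersection of both sets. */
  result = gst_caps_intersect (provider_caps, consumer_caps);
  if (gst_caps_is_empty (result))
    g_print ("no common configuration\n");
  else {
    str = gst_caps_to_string (result);
    g_print ("negotiated: %s\n", str);
    g_free (str);
  }

  gst_caps_unref (provider_caps);
  gst_caps_unref (consumer_caps);
  gst_caps_unref (result);
  return 0;
}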

I foresee the following usage models:

1) Hardware/driver can sample from YUV textures, i.e. a single texture
represents the decoded frame.
a. The renderer relies on default color conversion matrices and does not
care about interlaced content;
b. The renderer can use additional extensions to control those;
c. The renderer is able to fetch the Y/U/V/A components as R/G/B/A
components.

(a) and (b) are driver-side, (c) could be an application-side usage model.
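
For illustration, this is roughly what case 1 looks like on the GL side,
assuming the decoded frame is already wrapped in a single EGLImage (the
helper name below is made up):

/* Sketch of usage model 1: the whole frame is one EGLImage bound to a
 * single external texture; the driver does the YUV sampling/conversion
 * behind OES_EGL_image_external. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

static GLuint
bind_frame_as_external_texture (EGLImageKHR image)
{
  PFNGLEGLIMAGETARGETTEXTURE2DOESPROC image_target_texture_2d =
      (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
      eglGetProcAddress ("glEGLImageTargetTexture2DOES");
  GLuint tex;

  glGenTextures (1, &tex);
  glBindTexture (GL_TEXTURE_EXTERNAL_OES, tex);
  /* The driver imports the (e.g. NV12) frame; sampling it through a
   * samplerExternalOES returns RGB, with a conversion matrix and
   * interlacing handling we have no standard control over (case 1.a). */
  image_target_texture_2d (GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES) image);
  glTexParameteri (GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri (GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  return tex;
}

/* Matching fragment shader: the sampler hides the YUV layout entirely. */
static const char *external_frag_src =
    "#extension GL_OES_EGL_image_external : require\n"
    "precision mediump float;\n"
    "uniform samplerExternalOES tex;\n"
    "varying vec2 v_texcoord;\n"
    "void main () {\n"
    "  gl_FragColor = texture2D (tex, v_texcoord);\n"
    "}\n";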

2) Hardware/driver can't sample from YUV textures, i.e. you have one texture
per plane (obsolete GL_LUMINANCE/GL_LUMINANCE_ALPHA, or modern
EXT_texture_rg). For the NV12 format, you need 2 textures: one for the Y
plane, where Y is mapped to the r component, and one for the UV plane, where
U/V are mapped to the r/g components. This is more flexible, as you can
handle bob-deinterlacing, or control the color conversion matrix with
standard GLSL if there is no appropriate extension for it.
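
The shader side of case 2 could look like the sketch below for NV12. The
sampler names and the BT.601 coefficients are only examples; the point is
that the conversion matrix (and, say, bob-deinterlacing) becomes plain,
controllable GLSL:

/* Sketch for usage model 2 with NV12: the Y plane is bound as a
 * single-channel (R) texture and the UV plane as a two-channel (RG)
 * texture; the YUV -> RGB conversion is ordinary GLSL. */
static const char *nv12_frag_src =
    "precision mediump float;\n"
    "uniform sampler2D y_tex;   /* R / LUMINANCE, full resolution */\n"
    "uniform sampler2D uv_tex;  /* RG / LUMINANCE_ALPHA, half resolution */\n"
    "varying vec2 v_texcoord;\n"
    "void main () {\n"
    "  float y = texture2D (y_tex, v_texcoord).r;\n"
    "  vec2 uv = texture2D (uv_tex, v_texcoord).rg - vec2 (0.5);\n"
    "  /* Example BT.601 limited-range coefficients; any matrix, or\n"
    "   * bob-deinterlacing logic, can be expressed here. */\n"
    "  y = 1.164 * (y - 0.0625);\n"
    "  gl_FragColor = vec4 (y + 1.596 * uv.y,\n"
    "                       y - 0.391 * uv.x - 0.813 * uv.y,\n"
    "                       y + 2.018 * uv.x,\n"
    "                       1.0);\n"
    "}\n";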

3) Fallback: the storage is in RGB/RGBA format. In this case, the conversion
from YUV to RGB is either explicit (done at upload time in the client
application) or implicit (case 1.a -- in the driver/HW).
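
The explicit fallback path is then just an ordinary texture upload,
something like this (names are illustrative only):

/* Sketch of the fallback (3): the application has already converted the
 * frame to RGBA, so a plain upload suffices (the implicit variant is
 * case 1.a and needs no upload at all). */
#include <GLES2/gl2.h>

static GLuint
upload_rgba_frame (const void *rgba_pixels, int width, int height)
{
  GLuint tex;

  glGenTextures (1, &tex);
  glBindTexture (GL_TEXTURE_2D, tex);
  glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
      GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  return tex;
}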

I think we could probably expose as many metas as there are supported
combos. Note: in my vision of things, the entity being transported would be
EGLImage(s) rather than texture ids.
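
Purely as a strawman, and not an existing or proposed GStreamer API, such a
meta could carry something along these lines:

/* Hypothetical sketch only: none of these names exist in GStreamer; it
 * merely illustrates transporting one EGLImage per plane instead of
 * texture ids. */
#include <gst/gst.h>
#include <gst/video/video.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

typedef struct {
  GstMeta meta;

  EGLDisplay display;
  /* 1 for a single external/packed image, 2 for NV12, 3 for I420, ... */
  guint n_planes;
  /* One EGLImage per plane, to be bound by the consumer according to
   * whichever of the usage models above it supports. */
  EGLImageKHR images[GST_VIDEO_MAX_PLANES];
} GstEGLImageMetaSketch;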
