[Bug 727886] GstVideoGLTextureUploadMeta assumes no color conversion is done during upload
GStreamer (bugzilla.gnome.org)
bugzilla at gnome.org
Thu Apr 10 08:06:57 PDT 2014
https://bugzilla.gnome.org/show_bug.cgi?id=727886
GStreamer | gst-plugins-base | git
--- Comment #14 from Carlos Rafael Giani <dv at pseudoterminal.org> 2014-04-10 15:06:50 UTC ---
I do not follow. Why does downstream need to know anything? The reported issue
is about an *internal* behavior of a GLUploadMeta implementation.
Let's say there is a plugin specifically written for hardware that supports
hardwired color format conversion in the GPU. In that case, I would expect the
following to happen:
1. Upstream creates GstBuffers with this GL meta.
2. An element (let's say a sink) which supports this meta looks at it and gets
a texture from it. It also calls something like get_gl_video_info() to see how
it should interpret that texture. In the hardwired GPU case, that video info
contains RGB as the video format, even though the GstBuffer caps are set to I420
(the GPU can do the I420->RGB conversion by itself). The sink now knows
everything it needs to know: the data is to be interpreted as RGB pixels, so
it just uses a dummy shader that passes the pixels through.
3. In another case, a buffer is passed around, but it has a format that cannot
be converted by the hardwired GPU conversion. In that case, get_gl_video_info()
returns the same format as specified in the GstBuffer caps. The sink now knows
that it needs to use a shader to convert the pixel format to RGB.
4. In yet another case (for example, when branching with tee), an element which
does not care about the meta maps the buffer. Everything is as usual, nothing
behaves differently.
I do not see how some set_caps() functions and queries are relevant here. This
is *not* about allocators or buffer pools.
> VideoInfo stride/offset/size is for mappable memory. When using the upload
> meta, you should not need that information, since the actual upload (often not
> really an upload) is done by the buffer owner/producer.
You do need it if you want to be able to use the buffer contents as they are.
This is particularly relevant for buffers backed by physically contiguous
memory that can be used directly as texture storage. I had this
issue when trying to render HW-decoded video frames with the Vivante GPU direct
texture extension. I added the number of padding rows to the height of the
frame and used that as the texture height. Then, in the shader, I modified the
UV coordinates so that the additional padding rows are not rendered. The
problem is that the extension did not allow specifying padding explicitly,
and expected I420 planes to be placed directly one after the other.
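The UV adjustment described above amounts to scaling the V coordinate so that
sampling stops before the padding rows. A minimal sketch, with illustrative
numbers and a hypothetical helper name:

```c
#include <assert.h>
#include <math.h>

/* The texture is allocated with the padded height (active rows plus
 * padding rows), and the shader's V coordinate is scaled so that only
 * the active region is ever sampled. */
static float
scale_v (float v, int frame_height, int padding_rows)
{
  /* Map v in [0,1] over the full padded texture down to
   * [0, frame_height / (frame_height + padding_rows)]. */
  return v * (float) frame_height / (float) (frame_height + padding_rows);
}
```

For example, a 1080-row frame with 8 padding rows would use a 1088-row texture,
and v = 1.0 maps to 1080/1088, the bottom of the visible image.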