[Bug 727886] GstVideoGLTextureUploadMeta assumes no color conversion is done during upload

GStreamer (bugzilla.gnome.org) bugzilla at gnome.org
Thu Apr 10 09:12:21 PDT 2014


https://bugzilla.gnome.org/show_bug.cgi?id=727886
  GStreamer | gst-plugins-base | git

--- Comment #16 from Nicolas Dufresne <nicolas.dufresne at collabora.co.uk> 2014-04-10 16:12:16 UTC ---
(In reply to comment #14)
> I do not see how some set_caps() functions and queries are relevant here. This
> is *not* about allocators or buffer pools.

Metas are negotiated in the allocation query (not to be confused with
allocators and buffer pools). This happens after the buffer caps have been
negotiated. The idea is that the upstream element, which adds the
TextureUploadMeta, may be able to upload to multiple texture formats.
Downstream, which will provide the textures, is most likely able to create
textures for multiple formats too. The format the uploader uses and the format
the texture is created with need to match. This means information needs to be
shared between upstream (the uploader) and downstream (which prepares the
destination texture) in order to make it work.
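
For illustration, here is a minimal sketch of the downstream side, assuming
the sink has already created a GL texture in a format the uploader can produce
(which is exactly what needs negotiating):

#include <gst/video/gstvideometa.h>

/* Downstream (texture provider) side: look up the upload meta on the
 * incoming buffer and ask the producer to fill the texture the sink
 * allocated. This only works if the texture format matches what the
 * uploader expects. */
static gboolean
upload_frame_to_texture (GstBuffer * buffer, guint tex_id)
{
  GstVideoGLTextureUploadMeta *meta;
  guint texture_ids[4] = { tex_id, 0, 0, 0 };

  meta = gst_buffer_get_video_gl_texture_upload_meta (buffer);
  if (meta == NULL)
    return FALSE;

  /* The producer performs the actual upload (often not a real upload). */
  return gst_video_gl_texture_upload_meta_upload (meta, texture_ids);
}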

The allocation_query() approach is nice because, if there is no match, there
is no need to add the UploadMeta on buffers. On certain stacks, adding the
UploadMeta without using it has a cost. On other stacks, it's the only way to
display the buffers; in that case it has to be part of the caps negotiation.

> 
> > VideoInfo stride/offset/size is for mappable memory. When using the upload
> > meta, you should not need that information, since the actual upload (often not
> > really an upload) is done by the buffer owner/producer.
> 
> You do need it if you want to be able to use the buffer contents as they are.
> This is particularly relevant for buffers which use physically contiguous
> memory that can be used as a backing store for buffers directly. I had this
> issue when trying to render HW-decoded video frames with the Vivante GPU direct
> texture extension. I added the number of padding rows to the height of the
> frame, and used that as the texture height. Then, in the shader, I modified the
> UV coordinates so that the additional padding rows are not rendered. The
> problem is that the extension did not allow for specifying padding explicitly,
> and expected I420 planes to be placed one after the other directly.

This information is already in the buffer and the VideoMeta; all you had to do
was read it. When implementing that meta, you can access the buffer and
user_data within the UploadMeta. They are private, but private in the sense
that only the implementer should use them. We could improve the documentation,
though. Next time you think you're blocked, feel free to ask on IRC or the
mailing list.
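
For example, an upload implementation can recover the per-plane layout from
the VideoMeta on that private buffer. A minimal sketch, where do_hw_upload is
a placeholder for the platform-specific part:

#include <gst/video/gstvideometa.h>

/* Sketch of an upload function: read the per-plane offset and stride
 * (padding included) from the GstVideoMeta attached to the buffer the
 * upload meta privately refers to. */
static gboolean
my_upload (GstVideoGLTextureUploadMeta * meta, guint texture_id[4])
{
  GstVideoMeta *vmeta = gst_buffer_get_video_meta (meta->buffer);
  guint i;

  if (vmeta == NULL)
    return FALSE;

  for (i = 0; i < vmeta->n_planes; i++) {
    GST_DEBUG ("plane %u: offset %" G_GSIZE_FORMAT ", stride %d",
        i, vmeta->offset[i], vmeta->stride[i]);
  }

  /* do_hw_upload (meta->buffer, vmeta, texture_id); */
  return TRUE;
}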


Back to the subject:

In display sinks that support GlUpload, I'd suggest (and it should be
optional) defining a GstStructure format to describe the supported formats.

See API gst_query_add_allocation_meta().
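
A minimal sketch of the sink side, using a hypothetical structure name and a
simple string field for the format list (none of this is standardized):

#include <gst/base/gstbasesink.h>
#include <gst/video/gstvideometa.h>

/* In the sink's propose_allocation: advertise the upload meta together
 * with a GstStructure describing which texture formats the sink can
 * create. Structure name and field are hypothetical. */
static gboolean
my_sink_propose_allocation (GstBaseSink * sink, GstQuery * query)
{
  GstStructure *params;

  params = gst_structure_new ("GstVideoGLTextureUploadMetaParams",
      "formats", G_TYPE_STRING, "RGBA, I420, NV12", NULL);

  gst_query_add_allocation_meta (query,
      GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE, params);

  gst_structure_free (params);
  return TRUE;
}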

This can then be read in the allocation query (in decide_allocation). From
this information, your element will decide whether or not to attach the
TextureUploadMeta (making sure all the information it needs is set on the
buffer, the memory, the VideoMeta, or through the user_data).
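
On the uploader side, a sketch of reading that back in decide_allocation,
reusing the hypothetical structure from above:

#include <gst/video/gstvideometa.h>
#include <gst/video/gstvideodecoder.h>

/* In the uploader's decide_allocation: check whether downstream
 * advertised the upload meta and, if so, read the (hypothetical)
 * format list before deciding to attach the TextureUploadMeta. */
static gboolean
my_decoder_decide_allocation (GstVideoDecoder * decoder, GstQuery * query)
{
  guint index;
  const GstStructure *params = NULL;

  if (!gst_query_find_allocation_meta (query,
          GST_VIDEO_GL_TEXTURE_UPLOAD_META_API_TYPE, &index))
    return TRUE;                /* no match: don't attach the meta */

  gst_query_parse_nth_allocation_meta (query, index, &params);

  if (params != NULL) {
    const gchar *formats = gst_structure_get_string (params, "formats");
    /* Decide from this list whether our upload format is supported. */
    GST_DEBUG ("downstream supports: %s", formats ? formats : "(none)");
  }

  return TRUE;
}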

In the UploadMeta, I would add a new member, let's say GstVideoFormat format,
to let the display element know how to allocate the texture. I suspect doing
it this way is a little more work, as you end up creating the texture after
having received the first buffer (unless a unique format is supported). But I
don't see any better approach for now.
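
A sketch of what that addition could look like (the new member is the
proposal, not existing API; the other fields are the current public ones):

/* Proposed extension: carry the negotiated video format in the meta so
 * the display element knows how to allocate the destination texture. */
struct _GstVideoGLTextureUploadMeta {
  GstMeta meta;

  GstVideoGLTextureOrientation texture_orientation;
  guint n_textures;
  GstVideoGLTextureType texture_type[4];

  /* proposed new member */
  GstVideoFormat format;

  /*< private >*/
  /* buffer, upload function, user_data, ... unchanged */
};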

Unlike the capsfeature method, you must support the format described in the
buffer, otherwise negotiation will fail.

That is my proposed plan, though it is just a suggestion; more thinking may
bring up issues here.
