[gst-devel] [Gstreamer-openmax] how to pass custom metadata between gst elements
hd d
hdandroid at gmail.com
Tue Jun 29 23:30:43 CEST 2010
Thanks for the response - yes, I am referring to Linux. Can you give a
more detailed call flow of what happens in user space and kernel space?
How does the sink element pass the memory region to the decoder element in
user space? The OMX decoder element needs an fd, offset and length to mmap
the framebuffer memory into user space - or does this all happen in kernel
space without any intervention from the user-space framework? The same
applies to use cases like a camera v4l2 src sending data to an OMX video
encoder in the gst framework: if the camera and the OMX video encoder need
to share the same physical memory, additional information about the memory
region (such as a buffer identifier or fd) needs to be exchanged between
the user-space elements in gst.
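
To be concrete, this is roughly the mapping call I have in mind on the
consuming side - a minimal sketch, where the fd, offset and length are
hypothetical and would somehow have to come from the producing element:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/types.h>

    /* Sketch only: map a physically contiguous buffer exported by some
     * other element/driver into this process, given its fd, offset and
     * length. How the fd reaches this element is exactly the question. */
    static void *map_shared_buffer (int fd, off_t offset, size_t length)
    {
        void *addr = mmap (NULL, length, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, offset);
        if (addr == MAP_FAILED) {
            perror ("mmap");
            return NULL;
        }
        return addr;
    }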
On Tue, Jun 29, 2010 at 3:38 AM, Felipe Contreras <
felipe.contreras at nokia.com> wrote:
> hdandroid at gmail.com wrote:
> > New to gst - need some help:
> >
> > what is the best way to pass metadata from one element to another in a
> > gst pipeline?
> > Example use case: in a HW-accelerated transcoding session, the OMX
> > video decoder's output buffer is sent to the OMX video encoder. The
> > decoder output lives in physically contiguous memory identified by a
> > file descriptor, and the encoder needs to use this file descriptor to
> > derive the physical address of the decoder's output buffer. The OMX
> > buffer header provides a platform-private field to carry
> > platform-specific data.
> > Gst seems to define buffer metadata, but what is the best place to
> > embed this information? Does gst have any platform private/reserved
> > fields (similar to the OMX buffer header's platform private data) to
> > carry such information?
> >
> http://www.gstreamer.net/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstBuffer.html#GstBufferFlag
> > Another question: how does a buffer from one element get passed to
> > another element when the elements are in different processes? Or does
> > gst assume all elements run in the same process? That cannot be a
> > practical scenario, as the display usually resides in a different
> > process.
>
> All GStreamer elements run in the same process. The display server might
> run in a separate process, but that's independent of GStreamer (i.e. the
> API is X11, not Gst).
>
> If you only want to pass custom data between two contiguous elements in
> the pipeline, you have two options:
>
> 1) Provide a GstBuffer sub-type, like GstOmxBuffer. Then it should be
> easy to add any information that you want, but you should check that
> the GType is the right one.
>
> 2) Add a custom field to the GstCaps of the buffer.
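
For option 2, a minimal sketch against the 0.10 caps API - the "buffer-fd"
field name (and passing an fd at all) is only illustrative here, and the
caps must of course be writable when the field is set:

    #include <gst/gst.h>

    /* Producer side: put a custom field on the caps that travel with
     * the buffers. "buffer-fd" is only an illustrative field name. */
    static void attach_fd_to_caps (GstCaps *caps, int fd)
    {
      gst_caps_set_simple (caps, "buffer-fd", G_TYPE_INT, fd, NULL);
    }

    /* Consumer side: read the custom field back, if present. */
    static gboolean read_fd_from_caps (const GstCaps *caps, int *fd)
    {
      const GstStructure *s = gst_caps_get_structure (caps, 0);
      return gst_structure_get_int (s, "buffer-fd", fd);
    }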
>
> However, I don't see why you need an fd for contiguous memory. On the
> OMAP3 platform I have a simple sink element that provides framebuffer
> memory (which is contiguous), and the video decoder element mmaps that
> memory. At the kernel level the dspbridge driver is able to identify
> this memory as VM_IO, and the mmap operation is very fast. IOW,
> everything happens behind the scenes, at the kernel level.
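
For reference, the user-space half of that is ordinary fbdev usage - a
rough sketch, assuming the usual /dev/fb0 device node and with error
paths trimmed:

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Rough sketch: map the (physically contiguous) framebuffer memory
     * into user space. */
    static void *map_framebuffer (size_t *len)
    {
      struct fb_fix_screeninfo fix;
      void *mem;
      int fd;

      fd = open ("/dev/fb0", O_RDWR);
      if (fd < 0)
        return NULL;
      if (ioctl (fd, FBIOGET_FSCREENINFO, &fix) < 0) {
        close (fd);
        return NULL;
      }
      mem = mmap (NULL, fix.smem_len, PROT_READ | PROT_WRITE,
                  MAP_SHARED, fd, 0);
      close (fd); /* the mapping stays valid after close() */
      if (mem == MAP_FAILED)
        return NULL;
      *len = fix.smem_len;
      return mem;
    }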
>
> Are we talking about Linux?
>
> --
> Felipe Contreras
>