[gst-devel] receiving and displaying video streams in a non-GTK application

Andrey Nechypurenko andreynech at googlemail.com
Wed Sep 29 20:04:56 CEST 2010


Hi Paul,

> 1) My first option is to try to integrate the gstreamer API directly
> into my fltk application.  I've started to look at the documentation for
> how to encode gstreamer pipelines in a C application but one thing that
> currently escapes me is how I get access to the raw uncompressed frames
> of video at the end of the pipeline.  The way I understand it, I should
> be able to encode my pipeline so that the application receives the video
> stream from a socket and decodes it (I'm using smokeenc) but then I'm
> completely unclear as to how I might copy the image into a buffer that I
> can feed into an FLTK widget for drawing.

I would suggest taking a look at the appsink [1,2] and fakesink [3]
elements. appsink is, I feel, the preferred way, but fakesink could be
used as well through its hand-off mechanism; minimal sketches of both
follow below.
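To illustrate the appsink route, here is a minimal, untested sketch
against the 0.10 API; the function name and the copy-out step are
placeholders you would replace with your own code:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* Runs in a dedicated thread: block until the next decoded frame
 * arrives, then hand its raw bytes over to the UI. */
static gpointer
frame_pull_loop (gpointer data)
{
  GstAppSink *sink = GST_APP_SINK (data);

  while (!gst_app_sink_is_eos (sink)) {
    GstBuffer *buf = gst_app_sink_pull_buffer (sink); /* blocks */
    if (buf == NULL)
      break;
    /* GST_BUFFER_DATA (buf) points to the raw pixels and
     * GST_BUFFER_SIZE (buf) gives their length in bytes.
     * Copy them out for the UI thread before releasing. */
    gst_buffer_unref (buf);
  }
  return NULL;
}

The fakesink alternative is to enable hand-offs and connect to the
"handoff" signal, which fires for every buffer passing through:

/* Called for each buffer; the pipeline still owns it, so copy
 * what you need and do not unref it. */
static void
on_handoff (GstElement *fakesink, GstBuffer *buffer,
            GstPad *pad, gpointer user_data)
{
  /* copy GST_BUFFER_DATA (buffer) here */
}

/* ... during pipeline setup ... */
g_object_set (fakesink, "signal-handoffs", TRUE, NULL);
g_signal_connect (fakesink, "handoff",
                  G_CALLBACK (on_handoff), NULL);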

> I'm also completely unclear
> how easy or difficult it would be to integrate the GTK main event loop
> with the FLTK main event loop as the gstreamer API seems to be heavily
> wedded to GTK.  I have no experience programming with GTK at the moment
> either.

I think the simplest way would be to run the GStreamer event loop
(which is a plain GLib main loop, not a GTK one, so GTK itself is not
required) in a separate thread. Using appsink or fakesink as mentioned
above, you will get access to the raw frames. Then you will need a
thread-safe mechanism to pass the raw buffers from the GStreamer
thread to your UI thread. For example, there is a set of examples on
how to integrate Qt with GStreamer [4] where a similar technique is
used. In particular, qglwtextureshare [5] shows how to run the
GStreamer event loop in a separate thread and interact with the Qt
GUI (please note that this example uses a GL texture-sharing
mechanism instead of passing raw frames through memory buffers). In
addition, this example illustrates how to construct the pipeline in
essentially the same way as with gst-launch, using the
gst_parse_launch() function; a sketch combining these pieces follows
below.
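Putting it together, a rough and untested 0.10 skeleton could look
like the following. The pipeline string, the port number and the
GAsyncQueue-based hand-over are made-up illustrations, not a
prescription; frame_pull_loop is the function from the appsink sketch
above:

#include <gst/gst.h>

static GAsyncQueue *frame_queue;  /* frames headed for the UI thread */

static gpointer
gst_main_loop_thread (gpointer data)
{
  g_main_loop_run ((GMainLoop *) data);  /* blocks until quit */
  return NULL;
}

int
main (int argc, char *argv[])
{
  GError *error = NULL;
  GstElement *pipeline, *sink;
  GMainLoop *loop;

  if (!g_thread_supported ())
    g_thread_init (NULL);       /* required by older GLib versions */
  gst_init (&argc, &argv);

  frame_queue = g_async_queue_new ();

  /* Same syntax as on the gst-launch command line; adjust the
   * source, port and caps to your actual setup. */
  pipeline = gst_parse_launch (
      "udpsrc port=5000 caps=video/x-smoke ! smokedec "
      "! ffmpegcolorspace ! appsink name=sink", &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to create pipeline: %s\n", error->message);
    return 1;
  }

  /* Feed decoded frames to the UI via frame_queue using the
   * frame_pull_loop() sketched earlier. */
  sink = gst_bin_get_by_name (GST_BIN (pipeline), "sink");
  g_thread_create (frame_pull_loop, sink, FALSE, NULL);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  loop = g_main_loop_new (NULL, FALSE);
  g_thread_create (gst_main_loop_thread, loop, FALSE, NULL);

  /* The FLTK event loop (Fl::run()) would run here in the main
   * thread, draining frame_queue with g_async_queue_try_pop()
   * on redraw. */

  return 0;
}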

> 2) My second option is to keep the client gst-launch command as it
> stands now but instead of piping the video to xvimagesink, I create a
> new local socket (or pipe)

I personally would not suggest such an approach because of its
greater complexity compared to the first one.

> Any thoughts, advice, or experiences that people could share with this?

As I understand it, you are working on robotics and remotely
controlled vehicles, so it might be interesting for you to take a
look at this project [6,7]. There you can find a complete example of
how to control a vehicle over wireless/Internet. In particular, there
is GStreamer-based video capturing and encoding from the on-board
camera, plus networking infrastructure to stream video/sensor data to
the driver cockpit and to transmit control signals back to the
vehicle using the Ice middleware [8]. In addition, there is an
SDL/OpenGL-based UI to display live video with hardware acceleration,
which uses the appsink/fakesink/thread approach I mentioned above.

Regards,
Andrey.

[1] http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-appsink.html
[2] http://cgit.freedesktop.org/gstreamer/gst-plugins-base/tree/tests/examples/app?id=e17b42181c2cbcc389f87a35539f7a1b07d3dd54
[3] http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-plugins/html/gstreamer-plugins-fakesink.html
[4] http://cgit.freedesktop.org/gstreamer/gst-plugins-gl/tree/tests/examples/qt?id=fab824ea01f43c3fecaa2fed5e9e828774db5b24
[5] http://cgit.freedesktop.org/gstreamer/gst-plugins-gl/tree/tests/examples/qt/qglwtextureshare?id=fab824ea01f43c3fecaa2fed5e9e828774db5b24
[6] http://www.gitorious.org/veter/pages/Home
[7] http://veter-project.blogspot.com/
[8] http://www.zeroc.com



