[gst-devel] receiving and displaying video streams in a non-GTK application

Stefan Kost ensonic at hora-obscura.de
Thu Sep 30 16:32:03 CEST 2010


On 29.09.2010 17:21, Paul E. Rybski wrote:
> Hi,
> 	I've just recently discovered gstreamer and have been exploring its use
> for transmitting audio and video from one Linux machine to another.  So
> far I've only been exploring the use of gst-launch but now want to try
> to receive the video stream in a separate application rather than using
> xvimagesink.  This is complicated by the fact that I'm using FLTK
> (http://www.fltk.org/) rather than GTK for the GUI.  (The reasons for
> using FLTK are essentially historical and it's currently impractical for
> me to consider throwing away my old legacy GUI code and rewriting it
> from scratch in GTK.)  I can see two different paths that I can try to
> follow to achieve my goal:
>
> 1) My first option is to try to integrate the gstreamer API directly
> into my fltk application.  I've started to look at the documentation for
> how to construct gstreamer pipelines in a C application but one thing that
> currently escapes me is how I get access to the raw uncompressed frames
> of video at the end of the pipeline.  The way I understand it, I should
> be able to construct my pipeline so that the application receives the video
> stream from a socket and decodes it (I'm using smokeenc) but then I'm
> completely unclear as to how I might copy the image into a buffer that I
> can feed into an FLTK widget for drawing.  I'm also completely unclear
> how easy or difficult it would be to integrate the GTK main event loop
> with the FLTK main event loop as the gstreamer API seems to be heavily
> wedded to GTK.  I have no experience programming with GTK at the moment
> either.
>   
If there is a drawable widget in FLTK that is backed by an X window, then
you should be able to use the xoverlay (GstXOverlay) interface just fine.
A quick Google search turned up this:
http://www.fltk.org/doc-1.0/osissues.html


        Window fl_xid(const Fl_Window *)
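
Roughly, something like the sketch below should work. It is untested, only
covers the video branch of your client pipeline, and the element and widget
names are just placeholders; build it against gstreamer-0.10 and
gstreamer-interfaces-0.10 (pkg-config) plus your FLTK flags.

#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/x.H>                     /* fl_xid() */
#include <gst/gst.h>
#include <gst/interfaces/xoverlay.h>  /* GstXOverlay */

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  /* Roughly the video half of the client command line, with a named sink
   * so it can be fetched back out of the bin. */
  GstElement *pipeline = gst_parse_launch(
      "udpsrc port=5000 ! smokedec ! ffmpegcolorspace ! "
      "xvimagesink name=videosink", NULL);
  GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "videosink");

  /* Any FLTK widget backed by its own X window will do; a plain
   * Fl_Window is the simplest case. */
  Fl_Window window(320, 240, "video");
  window.show();   /* the X window has to exist before fl_xid() is useful */
  Fl::wait(0);     /* let FLTK create and map it */

  /* Hand the widget's X window to the video sink via the xoverlay
   * interface, so it renders straight into the FLTK window. */
  gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), fl_xid(&window));

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  /* FLTK's loop drives the GUI; gstreamer streams in its own threads,
   * so no GTK/GLib main loop is needed in the application. */
  int ret = Fl::run();

  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(sink);
  gst_object_unref(pipeline);
  return ret;
}

The more robust variant is to set the window id from a sync handler for the
"prepare-xwindow-id" message on the bus, but setting it before going to
PLAYING as above usually works as well.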

Stefan

> 2) My second option is to keep the client gst-launch command as it
> stands now but instead of piping the video to xvimagesink, I create a
> new local socket (or pipe) and shove the frames of video into it
> (perhaps encoded as JPEGs) and then have my FLTK application receive the
> data from this pipe, decode each jpeg, and display it.  This seems
> somewhat easier to achieve because then all I need to do is to figure
> out how the data is encoded into the socket so I can write the code to
> decode it.
>
> Any thoughts, advice, or experiences that people could share on this?
>  I'd kind of like to do the first option because it's conceptually
> simpler for the end-user of my system but I'm concerned that I might end
> up needing to rewrite my entire GUI in GTK, which I'd rather not have to
> do at this time.
>
> Here are the gst-launch commands that I'm using right now.
>
> Server:
>
> gst-launch-0.10 -vm oggmux name=mux ! filesink location=movie.ogg v4lsrc
> ! video/x-raw-yuv,width=320,height=240 ! tee name=t_vnet ! queue !
> ffmpegcolorspace ! smokeenc qmin=1 qmax=50 ! udpsink port=5000
> host=localhost sync=false t_vnet. ! queue ! videorate !
> 'video/x-raw-yuv' ! theoraenc ! mux. alsasrc device=hw:0,0 !
> audio/x-raw-int,rate=48000,channels=2,depth=16 ! tee name=t_anet ! queue
> ! audioconvert ! flacenc ! udpsink port=4000 host=localhost sync=false
> t_anet. ! queue ! audioconvert ! vorbisenc ! mux.
>
>
> Client:
>
> gst-launch-0.10 -vm tee name=vid -vm udpsrc port=5000 ! smokedec !
> xvimagesink vid. ! tee name=aud udpsrc port=4000 ! flacdec ! audioconvert
> ! audioresample ! alsasink sync=false aud.
>
>
> I'm on Ubuntu 8.04 LTS 64-bit using the gstreamer packages that come
> with that distro.  I've found that these commands also work for me on
> Ubuntu 10.04 LTS 64-bit.
>
> Thanks,
>
> -Paul
>
>   
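
Regarding option 2: untested, but one way to get JPEG frames into a pipe that
the FLTK application can read is to mux them with multipartmux and write them
to a FIFO (paths here are just examples):

  mkfifo /tmp/video.mjpeg
  gst-launch-0.10 udpsrc port=5000 ! smokedec ! ffmpegcolorspace ! \
      jpegenc ! multipartmux ! filesink location=/tmp/video.mjpeg

The application side then reads the multipart stream from the FIFO, splits it
at the boundary lines, and decodes each JPEG (e.g. with Fl_JPEG_Image or
libjpeg directly).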
