[gst-devel] Viable strategy?

Tim-Philipp Müller t.i.m at zen.co.uk
Mon Sep 7 09:38:42 CEST 2009


On Fri, 2009-09-04 at 08:47 -0500, Kulecz, Walter wrote:

> I'd like some advice about how viable this pipeline layout would be and if I'm missing anything:
> 
> pipeline1: v4l2src->tee->queue->myImageProcessingPlugin->xvimagesink
> pipeline2:           \->queue->mjpegencode->filesink
> 
> Pipeline1 would run continuously from program startup till termination
> for real-time monitoring and analysis.  Pipeline2 would be started and
> stopped as needed to record various epochs of interest.

Looks ok. The main problem with such setups is usually that if you link
in the second branch mid-stream, it won't get a newsegment event and its
timestamps will start at a non-zero value, which often causes problems
when saving to a container format. If you're just dumping video frames
as MJPEG, that shouldn't be an issue though.
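Just as an untested sketch, attaching the recording branch to a free
request pad on the tee while the rest of the pipeline keeps running
could look roughly like this (0.10 API; the element names and the
"src%d" pad template name are assumptions, check them against your
setup):

  /* Untested sketch (GStreamer 0.10): attach a recording branch
   * (queue ! jpegenc ! filesink) to the running pipeline via a new
   * tee request pad. Element and pad names are assumptions. */
  #include <gst/gst.h>

  static void
  start_recording (GstElement *pipeline, GstElement *tee)
  {
    GstElement *queue, *enc, *sink;
    GstPad *teepad, *sinkpad;

    queue = gst_element_factory_make ("queue", "rec-queue");
    enc   = gst_element_factory_make ("jpegenc", "rec-enc");
    sink  = gst_element_factory_make ("filesink", "rec-sink");
    g_object_set (sink, "location", "capture.mjpeg", NULL);

    gst_bin_add_many (GST_BIN (pipeline), queue, enc, sink, NULL);
    gst_element_link_many (queue, enc, sink, NULL);

    /* bring the new elements up to the pipeline's state before data flows */
    gst_element_sync_state_with_parent (sink);
    gst_element_sync_state_with_parent (enc);
    gst_element_sync_state_with_parent (queue);

    /* request a new source pad on the tee and hook the branch up to it */
    teepad  = gst_element_get_request_pad (tee, "src%d");
    sinkpad = gst_element_get_static_pad (queue, "sink");
    gst_pad_link (teepad, sinkpad);
    gst_object_unref (sinkpad);
  }

Stopping is roughly the reverse: unlink (or block) the tee pad, push an
EOS into the branch so the filesink can finish cleanly, then remove the
elements from the bin and release the request pad.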

You might also want to have a look at camerabin from gst-plugins-bad.

> Any advice as to ximagesink vs. xvimagesink?  I'd like to display the
> video output within a gtk window; I've found some sample code that sort
> of works and uses ximagesink.

Both should behave pretty much the same with regard to GstXOverlay.
ximagesink usually requires RGB input and doesn't do any scaling,
whereas xvimagesink accepts YUV and scales for you. The downside of
xvimagesink is that you can usually only have one instance at a time
(depending on hardware/drivers), or may not be able to use it at all on
some systems (if the driver doesn't support Xv). ximagesink should
always work as long as you're using X11.
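For the GTK side, the usual trick (identical for ximagesink and
xvimagesink) is to hand the sink your widget's X window id through the
GstXOverlay interface once the widget is realised. A rough sketch,
assuming GTK+ 2 and GStreamer 0.10:

  /* Rough sketch (GStreamer 0.10 / GTK+ 2): make the video sink render
   * into an existing GtkDrawingArea via the GstXOverlay interface. */
  #include <gst/gst.h>
  #include <gst/interfaces/xoverlay.h>
  #include <gtk/gtk.h>
  #include <gdk/gdkx.h>

  static void
  on_video_widget_realize (GtkWidget *widget, gpointer user_data)
  {
    GstElement *videosink = GST_ELEMENT (user_data);  /* x(v)imagesink */

    gst_x_overlay_set_xwindow_id (GST_X_OVERLAY (videosink),
                                  GDK_WINDOW_XWINDOW (widget->window));
  }

  /* hooked up with something like:
   *   g_signal_connect (video_widget, "realize",
   *                     G_CALLBACK (on_video_widget_realize), videosink);
   */

For something more robust you'd watch for the prepare-xwindow-id
element message on the bus instead, but the above is usually enough to
get going.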

Cheers
 -Tim




