building a gstreamer pipeline in C

Michael Gruner michael.gruner at ridgerun.com
Fri May 17 17:46:35 UTC 2019


Hi

There is no reason why a C app would have less lag/latency than a gst-launch-1.0 pipeline (assuming the pipelines are exactly the same). After all, gst-launch-1.0 is a C application as well. Network streaming will typically have more latency than local display, and that’s the reason gst-launch-1.0 seems more laggy.

Here’s a tip that may simplify your application development. Take a look at gst_parse_launch: you can pass it the same description you give gst-launch-1.0, and it will handle all the complexities automatically. In fact, that’s what gst-launch-1.0 uses underneath.

https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html#gst-parse-launch

Roughly, it would look something like:

const gchar *description = "uridecodebin uri=rtsp://192.168.x.y:554/user=admin_password=XXXXXXXX_channel=1_stream=0.sdp?real_stream name=d ! queue ! theoraenc ! oggmux name=m ! tcpserversink host=192.168.q.r port=8080 d. ! queue ! audioconvert ! audioresample ! flacenc ! m.";

GError *error = NULL;
GstElement *pipeline = gst_parse_launch (description, &error);

if (!pipeline) {
	g_printerr ("Unable to create pipeline: %s\n", error->message);
	g_error_free (error);
}
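If creation succeeds, the remaining boilerplate is just initializing GStreamer, setting the pipeline to PLAYING, and blocking on the bus until an error or end-of-stream. A minimal sketch of a complete program (untested here, and using a placeholder videotestsrc description where your RTSP pipeline would go):

```c
/* Minimal sketch: parse a launch description, play it, wait for EOS/error.
 * Build with: gcc app.c $(pkg-config --cflags --libs gstreamer-1.0) */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GError *error = NULL;
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  /* Substitute your full uridecodebin/tcpserversink description here. */
  pipeline = gst_parse_launch ("videotestsrc ! autovideosink", &error);
  if (!pipeline) {
    g_printerr ("Unable to create pipeline: %s\n", error->message);
    g_error_free (error);
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until an error occurs or the stream ends. */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg)
    gst_message_unref (msg);

  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```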

Hope it helps!

Michael
www.ridgerun.com


> On May 17, 2019, at 10:49 AM, R C <cjvijf at gmail.com> wrote:
> 
> Hello all,
> 
> 
> I am using the examples from the GStreamer basic tutorial to see if I can build a C program that streams some IP cameras.
> 
> The reason why I want to use C code is that it seems to run 'faster', with less lag/latency.  I have a working example that streams the camera to a "gstreamer window" and the timestamps on the stream are only 3-4 secs behind, while running the same stream with the gst-launch command line (into a browser) is about 20-30 secs behind.
> 
> (although I am streaming into a web page with gst-launch)
> 
> 
> This is the gst-launch pipeline I am using:
> 
> gst-launch-1.0 uridecodebin uri=rtsp://192.168.x.y:554/user=admin_password=XXXXXXXX_channel=1_stream=0.sdp?real_stream name=d ! queue ! theoraenc ! oggmux name=m ! tcpserversink host=192.168.q.r port=8080 d. ! queue ! audioconvert ! audioresample ! flacenc ! m.
> 
> 
> Being a rookie using gstreamer, I assume that the names, d and m, are used to ID the video and audio streams?
> 
> 
> I "adapted"  a gstreamer example a little and so far I can stream the video and audio to a gstreamer window., like (some excerpts) :
> 
> // Create the elements
> data.source = gst_element_factory_make("uridecodebin", "source");
> data.audioconvert = gst_element_factory_make("audioconvert", "audioconvert");
> data.audiosink = gst_element_factory_make("autoaudiosink", "audiosink");
> data.videoconvert = gst_element_factory_make("videoconvert", "videoconvert");
> data.videosink = gst_element_factory_make("autovideosink", "videosink");
> 
> I connect the audioconvert to the audio sink, same for video.
> 
> and when the stream starts, I connect the source (uridecodebin) to the rest of the pipeline:
> 
> GstPad *audiosink_pad = gst_element_get_static_pad(data->audioconvert, "sink");
> GstPad *videosink_pad = gst_element_get_static_pad(data->videoconvert, "sink");
> 
> gst_pad_link (new_pad, audiosink_pad);
> 
> gst_pad_link (new_pad, videosink_pad);
> 
> 
> where "new_pad"  are the pads that are created, by source/uridecodebin when the stream starts.
> 
> 
> So I assume that in the C code I don't really have to use the names, since I can directly connect those elements, while in gst-launch one needs to ID which elements go where? Right?
> 
> The gst-launch command I used seems to work. I don't know if it is the most efficient way to do it, though, but I'm wondering how the elements should be linked in C code.
> 
> 
> would it be something like?:
> 
> video:  uridecodebin -> queue -> theoraenc -> oggmux -> tcpserversink
> 
> audio: uridecodebin -> queue -> audioconvert -> audioresample -> flacenc -> tcpserversink
> 
> 
> In the reference manual I see that the tcpserversink element has one sink pad, so do I need that element for both the audio stream and the video stream?
> 
> (or do the two streams need to be combined before I  connect them to the tcpserversink element?)
> 
> 
> thanks,
> 
> 
> Ron
> 
> 
> _______________________________________________
> gstreamer-devel mailing list
> gstreamer-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel


