<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">Hi<div class=""><br class=""></div><div class="">There is no reason why a C app would have less lag/latency than a gst-launch-1.0 pipeline (assuming the pipelines are exactly the same). After all, gst-launch-1.0 is a C application as well. Network streaming will typically have more latency than local display, and that’s the reason gst-launch-1.0 seems more laggy.</div><div class=""><br class=""></div><div class="">Here’s a tip that may simplify your application development. Take a look at gst_parse_launch: you can pass it the same pipeline description you give gst-launch-1.0, and it will handle all the complexities automatically. In fact, that’s what gst-launch-1.0 uses underneath.</div><div class=""><br class=""></div><div class=""><a href="https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html#gst-parse-launch" class="">https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html#gst-parse-launch</a></div><div class=""><br class=""></div><div class="">Roughly, it would look something like:</div><div class=""><br class=""></div><div class=""><i class="">const gchar * description = "uridecodebin uri=<a href="rtsp://192.168.x.y:554/user=admin_password=XXXXXXXX_channel=1_stream=0.sdp?real_stream" class="">rtsp://192.168.x.y:554/user=admin_password=XXXXXXXX_channel=1_stream=0.sdp?real_stream</a> name=d ! queue ! theoraenc ! oggmux name=m ! tcpserversink host=192.168.q.r port=8080 d. ! queue ! audioconvert ! audioresample ! flacenc ! 
m."</i></div><div class=""><i class=""><br class=""></i></div><div class=""><i class="">GError *error = NULL;</i></div><div class=""><i class="">GstElement * pipeline = gst_parse_launch (description, &error);</i></div><div class=""><i class=""><br class=""></i></div><div class=""><i class="">if (!pipeline) {</i></div><div class=""><i class=""><span class="Apple-tab-span" style="white-space:pre"> </span>g_printerr ("Unable to create pipeline: %s\n", error->message);</i></div><div class=""><i class=""><span class="Apple-tab-span" style="white-space:pre"> </span>g_error_free (error);</i></div><div class=""><i class="">}</i></div><div class=""><br class=""></div><div class="">Hope it helps!</div><div class=""><br class=""></div><div class="">Michael</div><div class=""><a href="http://www.ridgerun.com" class="">www.ridgerun.com</a></div><div class=""><br class=""></div><div class=""><div><br class=""><blockquote type="cite" class=""><div class="">On May 17, 2019, at 10:49 AM, R C <<a href="mailto:cjvijf@gmail.com" class="">cjvijf@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div class="">Hello all,<br class=""><br class=""><br class="">I am using the examples from the GStreamer basic tutorial to see if I can build a C program that streams some IP cameras.<br class=""><br class="">The reason why I want to use C code is that it seems to run 'faster', with less lag/latency. 
I have a working example that streams the camera to a "gstreamer window" and the timestamps on the stream are only 3-4 secs behind, while running the same stream with the gst-launch command line (into a browser) is about 20 - 30 secs behind.<br class=""><br class="">(although I am streaming into a web page with gst-launch)<br class=""><br class=""><br class="">This is the gst-launch pipeline I am using:<br class=""><br class="">gst-launch-1.0 uridecodebin uri=<a href="rtsp://192.168.x.y:554/user=admin_password=XXXXXXXX_channel=1_stream=0.sdp?real_stream" class="">rtsp://192.168.x.y:554/user=admin_password=XXXXXXXX_channel=1_stream=0.sdp?real_stream</a> name=d ! queue ! theoraenc ! oggmux name=m ! tcpserversink host=192.168.q.r port=8080 d. ! queue ! audioconvert ! audioresample ! flacenc ! m.<br class=""><br class=""><br class="">Being a rookie using gstreamer, I assume that the names, d and m, are used to ID the video and audio stream?<br class=""><br class=""><br class="">I "adapted" a gstreamer example a little, and so far I can stream the video and audio to a gstreamer window, like (some excerpts):<br class=""><br class="">// Create the elements<br class="">data.source = gst_element_factory_make("uridecodebin", "source");<br class="">data.audioconvert = gst_element_factory_make("audioconvert", "audioconvert");<br class="">data.audiosink = gst_element_factory_make("autoaudiosink", "audiosink");<br class="">data.videoconvert = gst_element_factory_make("videoconvert", "videoconvert");<br class="">data.videosink = gst_element_factory_make("autovideosink", "videosink");<br class=""><br class="">I connect the audioconvert to the audio sink, same for video.<br class=""><br class="">and when the stream starts, I connect the source (uridecodebin) to the rest of the pipeline:<br class=""><br class="">GstPad *audiosink_pad = gst_element_get_static_pad(data->audioconvert, "sink");<br class="">GstPad *videosink_pad = gst_element_get_static_pad(data->videoconvert, "sink");<br 
class=""><br class="">gst_pad_link (new_pad, audiosink_pad);<br class=""><br class="">gst_pad_link (new_pad, videosink_pad);<br class=""><br class=""><br class="">where "new_pad" refers to the pads that are created by source/uridecodebin when the stream starts.<br class=""><br class=""><br class="">So I assume that in the C code I don't really have to use the names, since I can directly connect those elements, while in gst-launch one needs to ID what elements go where, right?<br class=""><br class="">The gst-launch command I used seems to work; I don't know if it is the most efficient way to do that, though, but I'm wondering how the elements should be linked in C code.<br class=""><br class=""><br class="">Would it be something like this?<br class=""><br class="">video: uridecodebin -> queue -> theoraenc -> oggmux -> tcpserversink<br class=""><br class="">audio: uridecodebin -> queue -> audioconvert -> audioresample -> flacenc -> tcpserversink<br class=""><br class=""><br class="">In the reference manual I see that the tcpserversink element has one sink pad, so do I need the element for both the audio stream and the video stream?<br class=""><br class="">(or do the two streams need to be combined before I connect them to the tcpserversink element?)<br class=""><br class=""><br class="">thanks,<br class=""><br class=""><br class="">Ron<br class=""><br class=""><br class="">_______________________________________________<br class="">gstreamer-devel mailing list<br class=""><a href="mailto:gstreamer-devel@lists.freedesktop.org" class="">gstreamer-devel@lists.freedesktop.org</a><br class="">https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel</div></div></blockquote></div><br class=""></div></body></html>