building a gstreamer pipeline in C

R C cjvijf at gmail.com
Fri May 17 18:23:39 UTC 2019


I have been playing with that, and it works.

(btw: I want to write some C code because at some point I want it to
run as a daemon and do some trickery with it, like switching cameras,
etc.)


I see some "weird" messages; I don't know what they really mean, but
the video/audio stream seems to work.


This is what I see (I called the little program basic-ipcam, à la the
basic GStreamer API example):

# ./basic-ipcam
DtsGetHWFeatures: Create File Failed
DtsGetHWFeatures: Create File Failed
Running DIL (3.22.0) Version
DtsDeviceOpen: Opening HW in mode 0
DtsDeviceOpen: Create File Failed
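
The core of the program is roughly this (a sketch built around
gst_parse_launch, as suggested below; the uri and host placeholders are
the ones from my pipeline, shortened here):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GError *error = NULL;
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  /* same pipeline as the gst-launch-1.0 line quoted below */
  pipeline = gst_parse_launch (
      "uridecodebin uri=rtsp://192.168.x.y:554/... name=d ! queue ! "
      "theoraenc ! oggmux name=m ! tcpserversink host=192.168.q.r "
      "port=8080 d. ! queue ! audioconvert ! audioresample ! flacenc ! m.",
      &error);
  if (!pipeline) {
    g_printerr ("Unable to create pipeline: %s\n", error->message);
    g_error_free (error);
    return -1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* block until an error or end-of-stream */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}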
On 5/17/19 11:46 AM, Michael Gruner wrote:
> Hi
>
> There is no reason why a C app would have less lag/latency than a 
> gst-launch-1.0 pipeline (assuming the pipes are exactly the same). 
> After all, gst-launch-1.0 is a C application as well. Network 
> streaming will typically have more latency than local display, and 
> that’s the reason gst-launch-1.0 seems more laggy.
>
> Here’s a tip that may simplify your application development. Take a 
> look at gst_parse_launch: you can pass in the same line as in 
> gst-launch-1.0 and it will handle all the complexities automatically. 
> In fact, that’s what gst-launch-1.0 uses underneath.
>
> https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html#gst-parse-launch
>
> Roughly, it would look something like:
>
> const gchar *description = "uridecodebin "
>     "uri=rtsp://192.168.x.y:554/user=admin_password=XXXXXXXX_channel=1_stream=0.sdp?real_stream "
>     "name=d ! queue ! theoraenc ! oggmux name=m ! tcpserversink "
>     "host=192.168.q.r port=8080 d. ! queue ! audioconvert ! audioresample ! "
>     "flacenc ! m.";
>
> GError *error = NULL;
> GstElement *pipeline = gst_parse_launch (description, &error);
>
> if (!pipeline) {
>   g_printerr ("Unable to create pipeline: %s\n", error->message);
>   g_error_free (error);
> }
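>
> gst_parse_launch may also hand back a pipeline with error set (a
> recoverable parse problem), so it is worth checking error even when
> the pipeline is non-NULL:
>
> if (pipeline && error) {
>   g_printerr ("Recoverable parse problem: %s\n", error->message);
>   g_clear_error (&error);
> }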
>
> Hope it helps!
>
> Michael
> www.ridgerun.com
>
>
>> On May 17, 2019, at 10:49 AM, R C <cjvijf at gmail.com> wrote:
>>
>> Hello all,
>>
>>
>> I am using the examples from the GStreamer basic tutorials to see if
>> I can build a C program that streams some IP cameras.
>>
>> The reason why I want to use C code is that it seems to run 'faster',
>> with less lag/latency.  I have a working example that streams the
>> camera to a "gstreamer window" and the timestamps on the stream are
>> only 3-4 secs behind, while running the same stream with the
>> gst-launch command line (into a browser) is about 20-30 secs behind
>> (although, to be fair, with gst-launch I am streaming into a web
>> page).
>>
>> This is the gst-launch pipeline I am using:
>>
>> gst-launch-1.0 uridecodebin 
>> uri=rtsp://192.168.x.y:554/user=admin_password=XXXXXXXX_channel=1_stream=0.sdp?real_stream 
>> name=d ! queue ! theoraenc ! oggmux name=m ! tcpserversink 
>> host=192.168.q.r port=8080 d. ! queue ! audioconvert ! audioresample 
>> ! flacenc ! m.
>>
>>
>> Being a rookie at using GStreamer, I assume that the names, d and m,
>> are used to ID the video and audio streams?
>>
>>
>> I "adapted" a GStreamer example a little, and so far I can stream the
>> video and audio to a gstreamer window, like (some excerpts):
>>
>> // Create the elements
>> data.source = gst_element_factory_make("uridecodebin", "source");
>> data.audioconvert = gst_element_factory_make("audioconvert", "audioconvert");
>> data.audiosink = gst_element_factory_make("autoaudiosink", "audiosink");
>> data.videoconvert = gst_element_factory_make("videoconvert", "videoconvert");
>> data.videosink = gst_element_factory_make("autovideosink", "videosink");
>>
>> I connect the audioconvert to the audio sink, same for video.
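>>
>> Roughly, that part is (a sketch; data.pipeline is assumed to be the
>> GstPipeline the elements get added to):
>>
>> gst_bin_add_many (GST_BIN (data.pipeline), data.source,
>>     data.audioconvert, data.audiosink,
>>     data.videoconvert, data.videosink, NULL);
>> gst_element_link (data.audioconvert, data.audiosink);
>> gst_element_link (data.videoconvert, data.videosink);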
>>
>> and when the stream starts, I connect the source (uridecodebin) to 
>> the rest of the pipeline:
>>
>> GstPad *audiosink_pad = gst_element_get_static_pad(data->audioconvert, "sink");
>> GstPad *videosink_pad = gst_element_get_static_pad(data->videoconvert, "sink");
>>
>> gst_pad_link (new_pad, audiosink_pad);
>>
>> gst_pad_link (new_pad, videosink_pad);
>>
>>
>> where "new_pad" is the pad created by source/uridecodebin when a
>> stream starts (the pad-added handler runs once per stream).
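>>
>> In the pad-added handler I check the new pad's caps to decide which
>> branch to link; roughly (a sketch):
>>
>> GstCaps *caps = gst_pad_get_current_caps (new_pad);
>> const gchar *type =
>>     gst_structure_get_name (gst_caps_get_structure (caps, 0));
>>
>> if (g_str_has_prefix (type, "audio/"))
>>   gst_pad_link (new_pad, audiosink_pad);
>> else if (g_str_has_prefix (type, "video/"))
>>   gst_pad_link (new_pad, videosink_pad);
>>
>> gst_caps_unref (caps);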
>>
>>
>> So I assume that in the C code I don't really have to use the names,
>> since I can directly connect those elements, while in gst-launch one
>> needs the names to ID which elements go where? Right?
>>
>> The gst-launch command I used seems to work (I don't know whether it
>> is the most efficient way to do it, though), but I am wondering how
>> the elements should be linked in C code.
>>
>>
>> Would it be something like this:
>>
>> video:  uridecodebin -> queue -> theoraenc -> oggmux -> tcpserversink
>>
>> audio: uridecodebin -> queue -> audioconvert -> audioresample -> 
>> flacenc -> tcpserversink
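>>
>> Matching the gst-launch line, I guess the C linking would be roughly
>> (a sketch with made-up variable names; gst_element_link requests the
>> muxer pads as needed):
>>
>> /* video branch into the ogg muxer */
>> gst_element_link_many (vqueue, theoraenc, oggmux, NULL);
>> /* audio branch into the same muxer */
>> gst_element_link_many (aqueue, audioconvert, audioresample,
>>     flacenc, oggmux, NULL);
>> /* one muxed stream to the network sink */
>> gst_element_link (oggmux, tcpserversink);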
>>
>>
>> In the reference manual I see that the tcpserversink element has one
>> sink pad, so do I need that element for both the audio stream and the
>> video stream?
>>
>> (Or do the two streams need to be combined before I connect them to
>> the tcpserversink element?)
>>
>>
>> thanks,
>>
>>
>> Ron

