building a gstreamer pipeline in C

R C cjvijf at gmail.com
Fri May 17 16:49:50 UTC 2019


Hello all,


I am using the examples from the GStreamer basic tutorials to see if I 
can build a C program that streams some IP cameras.

The reason I want to use C code is that it seems to run 'faster', with 
less lag/latency.  I have a working example that streams the camera to a 
"gstreamer window" and the timestamps on the stream are only 3-4 secs 
behind, while running the same stream with the gst-launch command line 
(into a browser) is about 20-30 secs behind.

(although I am streaming into a web page with gst-launch.)


This is the gst-launch pipeline I am using:

gst-launch-1.0 uridecodebin \
  uri=rtsp://192.168.x.y:554/user=admin_password=XXXXXXXX_channel=1_stream=0.sdp?real_stream \
  name=d ! queue ! theoraenc ! oggmux name=m ! \
  tcpserversink host=192.168.q.r port=8080 \
  d. ! queue ! audioconvert ! audioresample ! flacenc ! m.


Being a rookie at using GStreamer, I assume that the names, d and m, are 
used to identify the video and audio streams?


I "adapted" a GStreamer example a little, and so far I can stream the 
video and audio to a gstreamer window, like this (some excerpts):

// Create the elements
data.source = gst_element_factory_make("uridecodebin", "source");
data.audioconvert = gst_element_factory_make("audioconvert", "audioconvert");
data.audiosink = gst_element_factory_make("autoaudiosink", "audiosink");
data.videoconvert = gst_element_factory_make("videoconvert", "videoconvert");
data.videosink = gst_element_factory_make("autovideosink", "videosink");

I connect the audioconvert to the audio sink, same for video.

and when the stream starts, I connect the source (uridecodebin) to the 
rest of the pipeline:

GstPad *audiosink_pad = gst_element_get_static_pad(data->audioconvert, "sink");
GstPad *videosink_pad = gst_element_get_static_pad(data->videoconvert, "sink");

gst_pad_link (new_pad, audiosink_pad);

gst_pad_link (new_pad, videosink_pad);


where "new_pad" refers to the pads that are created by the 
source/uridecodebin when the stream starts.


So I assume that in the C code I don't really have to use the names, 
since I can connect those elements directly, while in gst-launch one 
needs the names to identify what elements go where?  Right?

The gst-launch command I used seems to work (I don't know if it is the 
most efficient way to do it, though), but I am wondering how the 
elements should be linked in C code.


Would it be something like this?

video:  uridecodebin -> queue -> theoraenc -> oggmux -> tcpserversink

audio: uridecodebin -> queue -> audioconvert -> audioresample -> flacenc 
-> tcpserversink


In the reference manual I see that the tcpserversink element has one 
sink pad, so do I need that element for both the audio stream and the 
video stream?

(Or do the two streams need to be combined before I connect them to the 
tcpserversink element?)


thanks,


Ron



