need some low level GStreamer architecture help

Chuck Crisler ccrisler at mutualink.net
Fri Oct 9 12:33:52 PDT 2015


I have been chasing a difficult problem for many days and am *FINALLY*
closing in, but I don't understand how to proceed and really need some
architectural help. This is based on GStreamer 0.10.30 with the GStreamer
RTSP server 0.10.7.

In the big picture, we have a geographically dispersed network with nodes
that bring video into the network, nodes that can display that video and
other nodes that can route it outside. Traditionally the code used VLC for
all of the video functions, but it is now a hybrid of VLC and GStreamer. Some
systems use MP2T and some use RTP when transmitting to the network. We
support analog, MP2T, RTP and RTSP video sources, many sizes, framerates,
bitrates, etc. Basically, my code has to handle almost anything on input
and output, performing all necessary translations, all on the fly. We also
support iPads, iPhones, Windows, Linux and Android devices for both input
and output.

I developed an initial application that does that well. Then I needed a
special program to serve as an RTSP server. Due to the input side
complexity, I based the input on the generic app and force that output to
RTP to feed into the RTSP server. One problem that I mentioned earlier is
that I don't know the sprop-parameter-sets when the client connects, but I
have solved that one. My current problem deals with the generic translation
app. In this specific case I have RTP input and RTP output (transmuxing,
not transcoding), supplied via a socket to the RTSP server. While this
may seem wasteful and unnecessary, I don't know the input-side MTU, but the
output side is limited, so the transmux operation changes the MTU size. But
I could just as easily have to deal with MP2T on the input.
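
For reference, the output-side cap is applied through the payloader's mtu
property, something like the minimal sketch below (the 1400 is only an
illustrative value, not our real limit):

#include <gst/gst.h>

/* Cap the size of the packets the payloader emits. "mtu" is inherited
 * from the base RTP payloader class; 1400 is just an example value. */
static void
cap_output_mtu (GstElement *rtph264pay)
{
  g_object_set (rtph264pay, "mtu", 1400, NULL);
}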

The pipeline is very simple, udpsrc -> rtph264depay -> rtph264pay ->
udpsink. The problem is that the RTP depayloader runs for a very long
time before anything is passed to the RTP payloader. By 'very long time' I
mean 2-5 seconds, building up 100,000 to 500,000 bytes. The payloader uses
the GstBuffer timestamp to generate the RTP timestamp, so every packet
produced from the (huge) payloader input buffer gets the same timestamp,
which often spans 200-500 video frames. Decoders don't like that.
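
In case it helps to reproduce, this is roughly how I build that pipeline.
The port numbers and the caps values are placeholders for illustration;
the depayloader needs at least media, clock-rate and encoding-name on
udpsrc's caps:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GMainLoop *loop;
  GError *error = NULL;

  gst_init (&argc, &argv);

  /* Ports and caps are placeholders, not our production values. */
  pipeline = gst_parse_launch (
      "udpsrc port=5000 caps=\"application/x-rtp, media=(string)video, "
      "clock-rate=(int)90000, encoding-name=(string)H264\" "
      "! rtph264depay ! rtph264pay ! udpsink host=127.0.0.1 port=5002",
      &error);
  if (pipeline == NULL) {
    g_printerr ("parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  loop = g_main_loop_new (NULL, FALSE);
  g_main_loop_run (loop);
  return 0;
}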

For my current test case, I enabled logging on the rtph264depay,
basertpdepayload, rtph264pay and basertppayload categories, all at level 5.
At 2.04 seconds after the pipeline started, the depayloader pushed the
first newsegment event to the payloader, which then initialized its caps
and received 191,306 bytes in a single buffer.
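
For completeness, this is how I raise those thresholds programmatically
(the category names are what I believe those elements register, and
GST_LEVEL_LOG is level 5 in 0.10):

#include <gst/gst.h>

/* Equivalent to GST_DEBUG=rtph264depay:5,basertpdepayload:5,
 * rtph264pay:5,basertppayload:5 on the command line. */
static void
enable_rtp_logging (void)
{
  gst_debug_set_threshold_for_name ("rtph264depay", GST_LEVEL_LOG);
  gst_debug_set_threshold_for_name ("basertpdepayload", GST_LEVEL_LOG);
  gst_debug_set_threshold_for_name ("rtph264pay", GST_LEVEL_LOG);
  gst_debug_set_threshold_for_name ("basertppayload", GST_LEVEL_LOG);
}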

My problem is that I don't understand why the depayloader keeps processing
more input packets and pushing them into the adapter instead of pushing
them to the next element (the payloader). From what I have seen, it looks
like the 'decision' to push the buffer to the next element is made outside
of the depayloader (the udpsrc source pad calling the chain function?), but
I am really not sure about that. It would help me enormously if someone
would explain the high-level process of moving buffers through the
depayloader, including the pads, which I haven't even looked at.
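
For what it's worth, here is my current (possibly wrong) mental model of
that path, written out as heavily simplified pseudo-C -- not the actual
0.10 source. Please correct whatever is wrong:

#include <gst/gst.h>
#include <gst/rtp/gstbasertpdepayload.h>

/* udpsrc runs its own streaming task; every received UDP packet becomes
 * one gst_pad_push() on udpsrc's src pad, which synchronously invokes
 * the chain function that GstBaseRTPDepayload installed on the
 * depayloader's sink pad. */
static GstFlowReturn
base_depayload_chain_sketch (GstPad *pad, GstBuffer *rtp_packet)
{
  GstBaseRTPDepayload *depay =
      GST_BASE_RTP_DEPAYLOAD (GST_PAD_PARENT (pad));
  GstBaseRTPDepayloadClass *klass =
      GST_BASE_RTP_DEPAYLOAD_GET_CLASS (depay);
  GstBuffer *out;

  /* The subclass (rtph264depay) sees the packet here. For fragmented
   * NALs it appends the payload to its internal GstAdapter and returns
   * NULL until the fragment that completes the NAL arrives... */
  out = klass->process (depay, rtp_packet);

  /* ...and only when process() hands back a finished buffer does the
   * base class timestamp it and push it on to the payloader. */
  if (out != NULL)
    return gst_base_rtp_depayload_push (depay, out);

  return GST_FLOW_OK;
}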

Thank you,
Chuck Crisler