need some low level GStreamer architecture help
ccrisler at mutualink.net
Fri Oct 9 14:35:52 PDT 2015
OK, I have found the problem and it is something that I (kind of)
introduced by fixing a different bug in the depayloader.
I have been working with many different video sources. One in particular
has an SPS that ends with 0x00. There is (really bad) code in GStreamer that
assumes a 4 byte start code prefix and tries to strip what it thinks are
spurious trailing 0x00 bytes. That code was truncating my SPS and making it
invalid. I only found it because I was adding code to generate the
sprop_parameter_set and noticed that the SPS was sometimes 1 byte short. So
I modified that code to stop doing that, which introduced the new problem I
have been chasing. Specifically, in the rtph264depay source there is a
function gst_rtp_h264_depay_push_nal() that checks the type of the NAL
passed in. Somebody *HARDCODED* the array index, without using a #define
SYMBOL, assuming a 4 byte start code prefix. So when my earlier fix forced
a 3 byte start code prefix, this function started misidentifying the NAL
type, causing all kinds of problems.
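To illustrate the failure mode (this is a plain-C sketch, not the actual
GStreamer source; the function names are hypothetical): the NAL header byte
sits immediately after the start code, so its offset depends on whether the
prefix is 00 00 01 (3 bytes) or 00 00 00 01 (4 bytes). A hardcoded index
only works for one of the two:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Buggy version: blindly assumes a 4 byte start code prefix, like the
 * hardcoded array index described above. */
static uint8_t nal_type_hardcoded (const uint8_t *buf)
{
  return buf[4] & 0x1f;        /* wrong when the prefix is only 3 bytes */
}

/* Fixed version: locate the start code first, then read the NAL header. */
static int nal_type_checked (const uint8_t *buf, size_t len)
{
  size_t off;

  if (len >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 1)
    off = 4;                   /* 4 byte prefix: 00 00 00 01 */
  else if (len >= 3 && buf[0] == 0 && buf[1] == 0 && buf[2] == 1)
    off = 3;                   /* 3 byte prefix: 00 00 01 */
  else
    return -1;                 /* no start code found */

  if (off >= len)
    return -1;                 /* start code but no NAL header byte */
  return buf[off] & 0x1f;      /* low 5 bits of the header = NAL type */
}
```

With a 3 byte prefix in front of an SPS (NAL type 7), the hardcoded
version reads one byte too far and reports a bogus type, which matches the
misbehavior described above.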
On Fri, Oct 9, 2015 at 3:33 PM, Chuck Crisler <ccrisler at mutualink.net> wrote:
> I have been chasing a difficult problem for many days and am *FINALLY*
> closing in, but I don't understand how to proceed and really need some
> architectural help. This is based on GStreamer 0.10.30 with the GStreamer
> RTSP server 0.10.7.
> In the big picture, we have a geographically dispersed network with nodes
> that bring video into the network, nodes that can display that video and
> other nodes that can route it outside. Traditionally the code used VLC for
> all of the video functions, but it is now a hybrid of VLC and GStreamer. Some
> systems use MP2T and some use RTP when transmitting to the network. We
> support analog, MP2T, RTP and RTSP video sources, many sizes, framerates,
> bitrates, etc. Basically, my code has to handle almost anything on input
> and output, performing all necessary translations, all on the fly. We also
> support iPad, iPhone, Windows, Linux and Android devices for both input
> and output.
> I developed an initial application that does that well. Then I needed a
> special program to serve as an RTSP server. Due to the input side
> complexity, I based the input on the generic app and force that output to
> RTP to feed into the RTSP server. One problem that I mentioned earlier is
> that I don't know the sprop_parameter_set when the client connects, but I
> have solved that one. My current problem deals with the generic translation
> app. In this specific case I have RTP input and RTP output (transmuxing,
> not transcoding), supplied via a socket to the RTSP server. While this
> may seem wasteful and unnecessary, I don't know the input side MTU but the
> output side is limited, so the transmux operation changes the MTU size. But
> I could just as easily have to deal with MP2T on the input.
> The pipeline is very simple, udpsrc -> rtph264depay -> rtph264pay ->
> udpsink. The problem is that the rtp depayloader is running for a very long
> time before anything is passed to the rtp payloader. By 'very long time' I
> mean 2-5 seconds, building up 100,000 to 500,000 bytes. The payloader uses
> the GstBuffer timestamp to generate the RTP timestamp, so every packet in
> the (huge) payloader input buffer gets the same timestamp, which often
> spans 200 - 500 video frames. Decoders don't like that.
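The timestamp collision follows directly from how a payloader maps buffer
time to RTP time. A minimal sketch, assuming a standard 90 kHz video clock
(the function and parameter names are illustrative, not the actual
rtph264pay internals):

```c
#include <assert.h>
#include <stdint.h>

#define GST_SECOND 1000000000ULL   /* nanoseconds per second */

/* Hypothetical sketch: derive the 90 kHz RTP timestamp from a GstBuffer
 * timestamp (in nanoseconds) plus a random per-stream base offset. */
static uint32_t rtp_timestamp (uint64_t buffer_ts_ns, uint32_t ts_base)
{
  return ts_base + (uint32_t) (buffer_ts_ns * 90000 / GST_SECOND);
}
```

Because the mapping depends only on the buffer timestamp, every RTP packet
fragmented out of one huge input buffer (e.g. the 191,306 byte buffer
mentioned below) inherits the same buffer timestamp and therefore the same
RTP timestamp, even though it spans hundreds of frames.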
> For my current test case, I enabled logging on the h264depayloader,
> basertpdepayloader, h264payloader and basertppayloader, all to level 5. At
> 2.04 seconds into the pipeline running, the h264depayloader pushed the
> first newsegment event to the payloader, which initialized the caps and
> received 191,306 bytes in a single buffer.
> My problem is that I don't understand why the depayloader keeps processing
> more input packets and pushing them into the adapter instead of pushing
> them to the next element (the payloader). From what I have seen, it looks
> like the 'decision' to push the buffer to the next element is due to
> reasons outside of the depayloader (the udpsrc source pad calling the chain
> function?), but I am not sure about that. It would help me enormously
> if someone could explain the high-level process of moving buffers
> through the depayloader, including the pads, which I haven't even
> looked at.
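For what it's worth, my current mental model of the push path, reduced to a
plain-C sketch (this is NOT the real GStreamer API, just an illustration of
the accumulate-then-flush behavior I think I'm seeing): the upstream
element calls the depayloader's chain function once per received packet,
and the depayloader queues payloads in an adapter, only pushing downstream
when it decides an access unit is complete (e.g. the RTP marker bit is
set). Until that condition fires, nothing reaches the payloader.

```c
#include <assert.h>

#define MAX_ADAPTER 16

typedef struct {
  int adapter[MAX_ADAPTER];   /* queued packet payload sizes */
  int adapter_len;
  int pushed_bytes;           /* total bytes handed downstream */
  int pushes;                 /* number of downstream pushes */
} depay_t;

/* Called by the upstream element for every incoming packet
 * (analogous to a pad chain function). */
static void depay_chain (depay_t *d, int payload_size, int marker)
{
  d->adapter[d->adapter_len++] = payload_size;

  if (marker) {               /* complete frame: flush the adapter */
    int i, total = 0;
    for (i = 0; i < d->adapter_len; i++)
      total += d->adapter[i];
    d->adapter_len = 0;
    d->pushed_bytes += total; /* i.e. push one buffer to the payloader */
    d->pushes++;
  }
}
```

If the flush condition never (or rarely) fires, the adapter grows for
seconds and then emits one enormous buffer, which would explain the
191 KB buffer and the 2-5 second delay described above.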
> Thank you,
> Chuck Crisler
More information about the gstreamer-devel mailing list