<div dir="auto"><div>Thanks for your input. Unfortunately, a queue after udpsrc didn't improve things. Even if my pipeline is just</div><div dir="auto">udpsrc ! rtpjitterbuffer ! fakesink</div><div dir="auto">there are still packet drops in the first second.</div><div dir="auto">I just can't seem to find out why the udpsrc stops handling packets while the pipeline negotiates. It's not a problem for streaming, but for benchmarks it's quite annoying.<div dir="auto"><br></div><div dir="auto">Does anyone have a hint on how I can profile the pipeline to see where it spends its time, and specifically what the udpsrc thread is waiting for at the beginning?</div><div dir="auto"><br></div><div dir="auto">@Nicolas</div><div dir="auto">Having the udpsrc push a buffer list instead of single buffers sounds interesting. That seems like an improvement to me.</div><div dir="auto"><br></div><div dir="auto">Best regards</div><div dir="auto"><br></div><div dir="auto">Christoph</div><br><br><div class="gmail_quote" dir="auto"><div dir="ltr" class="gmail_attr">Nicolas Dufresne <<a href="mailto:nicolas@ndufresne.ca">nicolas@ndufresne.ca</a>> wrote on Sat., 5 Oct. 2019, 20:33:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Saturday, 5 October 2019 at 18:14 +0200, Christoph Eifert wrote:<br>
> I've managed to narrow it down now. The reason for the packet drops is actually the startup time of the client pipeline, so packets are only dropped at the beginning. The decodebin and fakesink in particular apparently take their time negotiating, and the udpsrc does not handle packets while that happens. Even though the udpsrc runs in its own thread and the client application is started before the server, it still seems to stop handling packets for a while after receiving the first one, while the pipeline organizes itself (with decodebin creating a src pad and connecting to the sink).<br>
> Is there any way to force the udpsrc to keep handling packets and buffering them in the queue while that happens?<br>
<br>
Add a queue right after udpsrc, but you already have an rtpjitterbuffer,<br>
which in theory should have the same behaviour.<br>
<br>
One of the main issues with udpsrc is that it retrieves a single packet<br>
and pushes it. This isn't great; ideally it should try to empty the<br>
socket and push a buffer list instead. That would increase performance<br>
and reduce packet loss on the receive buffer. I have plans along those<br>
lines; feel free to give it a look if you have time.<br>
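The socket-draining idea can be sketched outside GStreamer with plain sockets; this is only an illustration of the batching approach, not the actual udpsrc API (function name and limits here are made up for the example):

```python
import socket

def drain_socket(sock, max_packets=64, bufsize=65536):
    # Pull every datagram already queued in the kernel receive buffer
    # in one pass, instead of returning after a single recvfrom() --
    # the same idea as udpsrc pushing a buffer list rather than one
    # buffer per socket read.
    sock.setblocking(False)
    batch = []
    while len(batch) < max_packets:
        try:
            data, _addr = sock.recvfrom(bufsize)
        except BlockingIOError:
            break  # kernel queue is empty
        batch.append(data)
    return batch
```

Fewer wakeups per packet means the kernel buffer is emptied faster, which is exactly what reduces drops when the downstream is momentarily busy.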
<br>
Nicolas<br>
<br>
> <br>
> Best regards<br>
> <br>
> Christoph<br>
> <br>
> On Sun, Sep 29, 2019 at 10:58 PM Christoph Eifert <<a href="mailto:eifert.christoph@gmail.com" target="_blank" rel="noreferrer">eifert.christoph@gmail.com</a>> wrote:<br>
> > Hi,<br>
> > I'm facing a hopefully simple problem: it seems as if the udpsrc element is far too slow at handling packets.<br>
> > I have a small test application that sends and receives an RTP stream. The problem is that I'm losing a lot of packets on the receiver side, even when just using localhost.<br>
> > The pipeline basically looks like this:<br>
> > gst-launch-1.0 filesrc location=file.mp4 ! qtdemux ! video/x-h264 ! rtph264pay ! udpsink host=127.0.0.1 port=5000<br>
> > and<br>
> > gst-launch-1.0 -v udpsrc port=5000 caps='application/x-rtp, media=video, etc.' ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! autovideosink<br>
> > <br>
> > The "bytes-served" property of udpsink confirms that all bytes of the source file have been sent.<br>
> > On the receiving side, the rtpjitterbuffer's "stats" property tells me that about 1,000 out of 8,500 packets have been lost with my 10 MB test video.<br>
> > <br>
> > If I increase the kernel receive buffer size with net.core.rmem_max and net.core.rmem_default, it works. But I need it to work without changing kernel values.<br>
> > The bitrate is just 8 Mbit/s, which means there are fewer than 1,000 packets per second. A 3 GHz CPU should be able to handle far more than 1,000 packets per second, especially on localhost, so it should work just fine with the default kernel values.<br>
> > (The same happens over a local network between two Ubuntu machines, both with the basic gst-launch pipeline and with my application.)<br>
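As a quick sanity check on that packet rate, assuming a typical RTP payload of around 1400 bytes per packet (an assumed value, not measured from the stream):

```python
# At 8 Mbit/s with ~1400-byte payloads, the packet rate stays well
# under 1,000 packets per second, as claimed above.
bitrate_bps = 8_000_000       # stream bitrate: 8 Mbit/s
payload_bytes = 1400          # assumed RTP payload per packet
packets_per_second = bitrate_bps / 8 / payload_bytes
print(round(packets_per_second))  # ~714 packets/s
```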
> > <br>
> > So where could the bottleneck / my faulty reasoning be?<br>
> > <br>
> > Any hints appreciated.<br>
> > <br>
> > Best regards<br>
> > <br>
> > Christoph<br>
> <br>
> _______________________________________________<br>
> gstreamer-devel mailing list<br>
> <a href="mailto:gstreamer-devel@lists.freedesktop.org" target="_blank" rel="noreferrer">gstreamer-devel@lists.freedesktop.org</a><br>
> <a href="https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel" rel="noreferrer noreferrer" target="_blank">https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel</a><br>
<br>
</blockquote></div></div></div>