<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
After more investigation, the culprit seems to be the video encoder.
When the first input frame arrives, it has a timestamp of several
seconds, let's say x. The encoder then sends a segment event whose
segment starts at 1000 hours minus x seconds. However, the first
encoded output frame has a timestamp of 1000 hours. Because of this,
appsink blocks itself for x seconds. Where is the logic in this?<br>
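<br>
As far as I can tell, a sync-enabled sink waits until the buffer's
running time (PTS minus segment.start) has elapsed on the pipeline
clock, which would explain the x-second stall, but not why the segment
is offset in the first place. A probe along the following lines makes
the mismatch visible (a rough sketch, nothing more; attach it to
whichever sink pad you want to watch, e.g. the appsink's):<br>
<pre>
/* Rough sketch: log segment start and buffer PTS on a sink pad to see
 * the offset described above. */
#include &lt;gst/gst.h&gt;

static GstPadProbeReturn
timing_probe (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  if (GST_PAD_PROBE_INFO_TYPE (info) &amp; GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM) {
    GstEvent *event = GST_PAD_PROBE_INFO_EVENT (info);

    if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT) {
      const GstSegment *segment;

      gst_event_parse_segment (event, &amp;segment);
      g_print ("segment start: %" GST_TIME_FORMAT "\n",
          GST_TIME_ARGS (segment-&gt;start));
    }
  } else if (GST_PAD_PROBE_INFO_TYPE (info) &amp; GST_PAD_PROBE_TYPE_BUFFER) {
    GstBuffer *buffer = GST_PAD_PROBE_INFO_BUFFER (info);

    g_print ("buffer pts: %" GST_TIME_FORMAT "\n",
        GST_TIME_ARGS (GST_BUFFER_PTS (buffer)));
  }

  return GST_PAD_PROBE_OK;
}

/* Usage (sinkpad obtained elsewhere):
 *   gst_pad_add_probe (sinkpad,
 *       GST_PAD_PROBE_TYPE_BUFFER | GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
 *       timing_probe, NULL, NULL);
 */
</pre>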
<br>
<div class="moz-cite-prefix">25.08.2023 14:37, Andrey Sotnikov
пишет:<br>
</div>
<blockquote type="cite"
cite="mid:d90a1794-6ca1-4c54-bc78-3f9b04a85160@gmail.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
I found the cause of the delay, and it is very bizarre. The delay is
caused by appsink0, which was added to the media pipeline by
RTSPClient; I guess this AppSink is the one sending the RTP packets.
When the first data buffer arrives at appsink0, it has a timestamp
equal to 1000 hours. GstBaseSink::segment, at the same moment, has a
start time that is a seemingly random few seconds earlier than 1000
hours. appsink0 blocks its thread for that number of seconds. This
blocks gst_queue_lock, which in turn blocks gst_base_src_loop when
the latter calls gst_pad_peer_query.<br>
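<br>
In case it helps anyone look at this, I can dump the server-side media
pipeline to a .dot file and inspect which pads are blocked (a sketch;
GST_DEBUG_DUMP_DOT_DIR must be set in the environment, and "media" is
the GstRTSPMedia handed to the "media-configure" callback, or wherever
you have it at hand):<br>
<pre>
/* Sketch: dump the media pipeline so the blocked pads/elements can be
 * inspected with graphviz. Requires GST_DEBUG_DUMP_DOT_DIR to be set. */
#include &lt;gst/gst.h&gt;
#include &lt;gst/rtsp-server/rtsp-server.h&gt;

static void
dump_media_pipeline (GstRTSPMedia * media)
{
  GstElement *pipeline = gst_rtsp_media_get_element (media);

  GST_DEBUG_BIN_TO_DOT_FILE_WITH_TS (GST_BIN (pipeline),
      GST_DEBUG_GRAPH_SHOW_ALL, "rtsp-media");
  gst_object_unref (pipeline);
}
</pre>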
<br>
Can somebody explain the logic of what is going on and why?<br>
<br>
<div class="moz-cite-prefix">23.08.2023 23:10, Andrey Sotnikov
пишет:<br>
</div>
<blockquote type="cite"
cite="mid:760571c3-87bd-4785-a52d-a3bc54c2f0fb@gmail.com">
<meta http-equiv="content-type"
content="text/html; charset=UTF-8">
Hi, dear GStreamer community,<br>
<br>
I am tired of parsing GStreamer's source code to understand how
everything works and how to solve my problem.<br>
<br>
My company manufactures cameras. I am trying to create an
application that streams the data from these cameras over RTSP.
Here is the launch string for GstRTSPMediaFactory: "( appsrc
name=ourcamera ! queue ! x265enc speed-preset=5 tune=4
option-string=colormatrix=gbr:lossless=true ! rtph265pay
name=pay0 pt=96 )". When I receive a frame from my camera, I
push it to appsrc. The problem is that, at startup, appsrc
buffers everything I push while the connection is still being
established. When the real data transfer starts, this backlog is
not discarded, which leads to a latency of dozens of seconds. I
would love to start pushing only once RTSPClient is ready to
consume data, but how?<br>
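<br>
What I have in mind is something like the sketch below: hook the
factory's "media-configure" signal, fetch the appsrc by the name from
the launch string, and only push from the "need-data" callback. I am
not sure this is the intended pattern, though (camera_grab_frame() is
just a placeholder for our camera SDK):<br>
<pre>
/* Sketch of what I have in mind: push into appsrc only when it asks
 * for data. camera_grab_frame() is a placeholder for our SDK call. */
#include &lt;gst/gst.h&gt;
#include &lt;gst/app/gstappsrc.h&gt;
#include &lt;gst/rtsp-server/rtsp-server.h&gt;

extern GstBuffer *camera_grab_frame (void);     /* placeholder */

static void
on_need_data (GstElement * appsrc, guint unused_size, gpointer user_data)
{
  /* gst_app_src_push_buffer() takes ownership of the buffer */
  gst_app_src_push_buffer (GST_APP_SRC (appsrc), camera_grab_frame ());
}

static void
on_media_configure (GstRTSPMediaFactory * factory, GstRTSPMedia * media,
    gpointer user_data)
{
  GstElement *pipeline = gst_rtsp_media_get_element (media);
  GstElement *appsrc =
      gst_bin_get_by_name_recurse_up (GST_BIN (pipeline), "ourcamera");

  g_object_set (appsrc, "is-live", TRUE, "format", GST_FORMAT_TIME, NULL);
  g_signal_connect (appsrc, "need-data", G_CALLBACK (on_need_data), NULL);

  gst_object_unref (appsrc);
  gst_object_unref (pipeline);
}

/* During setup:
 *   g_signal_connect (factory, "media-configure",
 *       G_CALLBACK (on_media_configure), NULL);
 */
</pre>
If there is a more canonical way to delay pushing until the client's
pipeline is actually consuming data, I would be happy to use it
instead.<br>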
<br>
I was trying to figure out what was going on, and here are my
discoveries. When the pipeline created for an RTSPClient is in
the PLAYING state, gst_base_src_loop checks whether a reconfigure
is required. For some reason it decides that it is, and calls
gst_base_src_negotiate_unlocked. The latter hangs in
gst_base_src_prepare_allocation, which in turn hangs in
gst_pad_query called on queue:sink with an ALLOCATION query. It
hangs only because, for some reason, gst_queue_loop is not being
called. While gst_queue_loop is postponed, my application keeps
pushing data, so by the time gst_queue_loop finally runs, appsrc
has accumulated a backlog of up to a hundred frames. So what is
gst_queue_loop waiting for?<br>
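<br>
As a stopgap, I am thinking about capping what appsrc is allowed to
queue, so frames pushed before gst_queue_loop starts cannot pile up.
Roughly like this, with appsrc fetched the same way as above (the
"leaky-type" property needs GStreamer 1.20 or newer, and the 4 MiB
limit is an arbitrary number, not a recommendation):<br>
<pre>
#include &lt;gst/gst.h&gt;

/* Stopgap sketch: cap appsrc's internal queue so early pushes cannot
 * accumulate into a multi-second backlog. */
static void
cap_appsrc_queue (GstElement * appsrc)
{
  g_object_set (appsrc, "max-bytes", (guint64) (4 * 1024 * 1024), NULL);
  /* drop old data instead of letting the backlog grow (GStreamer 1.20+) */
  gst_util_set_object_arg (G_OBJECT (appsrc), "leaky-type", "downstream");
}
</pre>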
<br>
I tried removing the queue altogether, but not only does the
latency problem remain, the pipeline also reports that it is not
configured properly and asks me to add a queue.
</blockquote>
<br>
</blockquote>
<br>
</body>
</html>