<div dir="ltr">I have this nearly working now (see below for the remaining puzzle) and wanted to send my thanks to Sebastian, Tim, and Pedro for helping to steer me in the right direction. I really appreciate your guidance.<div>
<br></div><div>Here's my setup. It's similar to Pedro's.</div><div><br></div><div>videotestsrc is-live=true ! capsfilter ! videorate ! videoconvert ! x264enc ! queue leaky=downstream max-size-bytes=500MB max-size-buffers=0 max-size-time=0 ! bin, where bin is disposed and recreated for each request and contains mp4mux ! filesink.</div>
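<div>For reference, here's roughly what that pipeline looks like as a gst-launch-1.0 line (a sketch only — in the real setup the mp4mux ! filesink bin is created and torn down per request, and the caps and file name here are illustrative; note that the queue's max-size-bytes property takes a plain byte count):</div>

```shell
# Sketch, not the actual server code: the static equivalent of the pipeline
# described above. 524288000 bytes = 500 MB.
gst-launch-1.0 -e videotestsrc is-live=true \
  ! capsfilter caps=video/x-raw,framerate=30/1 \
  ! videorate ! videoconvert ! x264enc \
  ! queue leaky=downstream max-size-bytes=524288000 \
      max-size-buffers=0 max-size-time=0 \
  ! mp4mux ! filesink location=segment.mp4
```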
<div><br></div><div>I've got this rigged up inside a server that listens on a socket for incoming requests which identify a desired time-based segment of the video stream. In the quiescent state, there's a blocking probe on the leaky queue's source pad, so data flows all the way into that queue and then, once the queue is full, old data is dropped.</div>
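<div>In code, installing that quiescent-state blocking probe looks something like the following sketch (untested; the element variable name is illustrative — the key pieces are GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM and holding data in the callback):</div>

```c
/* Sketch, assuming GStreamer 1.x: block the leaky queue's source pad so
 * the queue fills up and, being leaky=downstream, drops its oldest data. */
#include <gst/gst.h>

static GstPadProbeReturn
block_cb (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  /* Returning OK from a blocking probe holds the data here; upstream
   * keeps feeding the queue, which leaks old buffers once full. */
  return GST_PAD_PROBE_OK;
}

static gulong
install_blocking_probe (GstElement *leaky_queue)  /* illustrative name */
{
  GstPad *srcpad = gst_element_get_static_pad (leaky_queue, "src");
  gulong id = gst_pad_add_probe (srcpad,
      GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, block_cb, NULL, NULL);

  gst_object_unref (srcpad);
  return id;  /* keep this to remove the probe later */
}
```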
<div><br></div><div>When a request comes in, I install a new (non-blocking) probe and remove the existing blocking probe. Data starts to flow through the leaky queue, and the new probe's callback inspects the PTS on each video frame, waiting first for a keyframe that's within the requested window (at which point it stops dropping frames and instead starts passing them to the mux and sink), and then for a video frame that is beyond the requested window, at which point it sends an EOS through the bin to finalize the file; when the EOS appears on the application bus, the app removes the non-blocking probe, re-instates the blocking probe, NULLs the bin, removes the bin, and then sends the result back to the client through the socket and awaits the next request.</div>
<div><br></div><div>All of this works like a charm, EXCEPT for the following observed behavior: when I reinstate the blocking probe on the queue's source pad, if there is any data left in the queue, data stops flowing into the queue. Indeed, the whole pipeline goes eerily quiet. If, however, the request ends up draining the queue completely, then when I reinstate the blocking probe, data continues to flow into and build up inside the queue.</div>
<div><br></div><div>I'm about to dig into the queue code to see if I can understand why that might be happening, but I thought I would ping the experts first to see if this rings a bell.</div><div><br></div><div>-Todd</div>
<div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Wed, Jan 1, 2014 at 1:39 PM, Todd Agulnick <span dir="ltr"><<a href="mailto:todd@agulnick.com" target="_blank">todd@agulnick.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote"><div class="im">On Wed, Jan 1, 2014 at 4:23 AM, Sebastian Dröge <span dir="ltr"><<a href="mailto:sebastian@centricular.com" target="_blank">sebastian@centricular.com</a>></span> wrote:<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div>
<br>
</div>You should be able to drop the message from the sync bus handler of the<br>
bin too, to prevent it going up in the pipeline hierarchy.<br></blockquote><div><br></div></div><div>Just to follow up with a conversation that took place on IRC just now:</div><div><br></div><div>You can't do this because the GstBin already has a sync bus handler, and there can be only one. We talked about possible modifications to GstBin to support the desired behavior (bug filed here: <a href="https://bugzilla.gnome.org/show_bug.cgi?id=721310" target="_blank">https://bugzilla.gnome.org/show_bug.cgi?id=721310</a>), but for now as a work-around we're going to catch the EOS just upstream of the filesink to see if that works.</div>
<div> <br></div></div></div></div>
</blockquote></div><br></div>