[gst-devel] gst-python remuxer.py example race condition?
Peter Schwenke
peters at codian.com
Wed Apr 9 07:14:58 CEST 2008
Hi,
On my continued quest to obtain a video segment from a file, I found the
example remuxer.py in
gst0.10-python-0.10.11/examples
It basically does what I want.
However, I've found it very susceptible to timing. I have tried it on 3
different machines:
- A quad core 2.66GHz machine running Ubuntu Gutsy (gstreamer 0.10.14)
- An Ubuntu Hardy virtual machine running on the above quad core
- A Core Duo laptop running Ubuntu Gutsy
The same result occurs on each of the above machines: I have found that I
need to run it with extreme debugging on, that is, with GST_DEBUG (or at
least the ogg categories) at level 5, i.e.

GST_DEBUG=5,ogg*:5 /usr/share/gst-python/0.10/examples/remuxer.py

otherwise I get
(remuxer.py:8162): GStreamer-CRITICAL **: gst_util_uint64_scale_int:
assertion `denom > 0' failed
(remuxer.py:8162): GStreamer-CRITICAL **: gst_util_uint64_scale_int:
assertion `denom > 0' failed
error <gst.Message GstMessageError, gerror=(GstGError)(NULL),
debug=(string)"gstoggdemux.c\(3096\):\ gst_ogg_demux_loop\ \(\):\
/__main__+remuxer0/bin2/oggdemux1:\012stream\ stopped\,\ reason\ error"
from oggdemux1 at 0x894ca60>
By changing the log levels I can see that it occurs in the middle of this:
:gst_mini_object_ref: 0x84021e8 ref 2->3
0:00:02.623757442 13320 0x849ff30 LOG GST_SCHEDULING
gstpad.c:3349:handle_pad_block:<theoraparse0:src> signal block taken
0:00:02.623770853 13320 0x849ff30 LOG GST_SCHEDULING
gstpad.c:3426:handle_pad_block:<theoraparse0:src> pad was flushing
0:00:02.623784542 13320 0x849ff30 LOG GST_REFCOUNTING
gstminiobject.c:351:gst_mini_object_unref: 0x84021e8 unref 3->2
0:00:02.623798511 13320 0x849ff30 DEBUG GST_PADS
gstpad.c:3701:gst_pad_push:<theoraparse0:src> pad block stopped by flush
0:00:02.623812201 13320 0x849ff30 LOG GST_REFCOUNTING
gstminiobject.c:306:gst_mini_object_ref: 0x8402148 ref 2->3
0:00:02.623826170 13320 0x849ff30 LOG GST_SCHEDULING
gstpad.c:3349:handle_pad_block:<theoraparse0:src> signal block taken
0:00:02.623839860 13320 0x849ff30 LOG GST_SCHEDULING
gstpad.c:3426:handle_pad_block:<theoraparse0:src> pad was flushing
0:00:02.623853270 13320 0x849ff30 LOG GST_REFCOUNTING
gstminiobject.c:351:gst_mini_object_unref: 0x8402148 unref 3->2
0:00:02.623867239 13320 0x849ff30 DEBUG GST_PADS
gstpad.c:3701:gst_pad_push:<theoraparse0:src> pad block stopped by flush
0:00:02.623880091 13320 0x849ff30 DEBUG theoraparse
theoraparse.c:590:theora_parse_drain_queue: draining queue of length 1
0:00:02.623894339 13320 0x849ff30 LOG GST_REFCOUNTING
gstcaps.c:373:gst_caps_ref: 0x82898c0 4->5
0:00:02.623908029 13320 0x849ff30 LOG GST_REFCOUNTING
gstcaps.c:396:gst_caps_unref: 0x84018e0 16->15
0:00:02.623921439 13320 0x849ff30 DEBUG theoraparse
theoraparse.c:507:theora_parse_push_buffer:<theoraparse0> pushing buffer
with granulepos 0|0
0:00:02.623936246 13320 0x849ff30 LOG GST_SCHEDULING
gstpad.c:3349:handle_pad_block:<theoraparse0:src> signal block taken
0:00:02.623949657 13320 0x849ff30 LOG GST_SCHEDULING
gstpad.c:3426:handle_pad_block:<theoraparse0:src> pad was flushing
0:00:02.623962787 13320 0x849ff30 LOG GST_REFCOUNTING
gstminiobject.c:351:gst_mini_object_unref: 0x8402378 unref 1->0
0:00:02.623976198 13320 0x849ff30 LOG GST_BUFFER
gstbuffer.c:186:gst_buffer_finalize: finalize 0x8402378
0:00:02.623989887 13320 0x849ff30 LOG GST_REFCOUNTING
gstcaps.c:396:gst_caps_unref: 0x82898c0 5->4
0:00:02.624005533 13320 0x849ff30 DEBUG GST_PADS
gstpad.c:3701:gst_pad_push:<theoraparse0:src> pad block stopped by flush
0:00:02.624020061 13320 0x849ff30 LOG GST_REFCOUNTING
gstobject.c:352:gst_object_unref:<theoraparse0> 0x83fc550 unref 3->2
0:00:02.624034588 13320 0x849ff30 LOG GST_SCHEDULING gstpad.c:3527:
I feel this might be due to the blocking/unblocking code in
set_connection_blocked_async_marshalled() in remuxer.py; there is a comment
there about it being "racy".
The failure is the assertion in gst_util_uint64_scale_int()
(gstreamer0.10-0.10.18/gst/gstutils.c), which is called from
theora_parse_push_buffer()
(gst-plugins-base0.10-0.10.18/ext/theora/theoraparse.c),
which in turn is called from theora_parse_drain_queue().
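The zero denominator is presumably the theora framerate, which I am guessing
theoraparse has not picked up (or has lost again after the flush) by the time
it drains its queue. As a quick check I could dump whatever framerate is
negotiated on the parser src pads at that point. Something like this untested
sketch, where dump_framerates() is my own helper and not part of remuxer.py:

def dump_framerates(pads):
    # Hypothetical diagnostic: print the framerate (if any) negotiated on
    # each parser src pad; a missing framerate here would be consistent
    # with a zero denominator in gst_util_uint64_scale_int().
    for pad in pads:
        caps = pad.get_negotiated_caps()
        if caps is None:
            print pad, 'caps not negotiated yet'
            continue
        structure = caps[0]
        if structure.has_field('framerate'):
            print pad, 'framerate', structure['framerate']
        else:
            print pad, 'no framerate field in', structure.get_name()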
The code in remuxer.py looks like this:
def set_connection_blocked_async_marshalled(pads, proc, *args, **kwargs):
    def clear_list(l):
        while l:
            l.pop()

    to_block = list(pads)
    to_relink = [(x, x.get_peer()) for x in pads]

    def on_pad_blocked_sync(pad, is_blocked):
        if pad not in to_block:
            # can happen after the seek and before unblocking -- racy,
            # but no prob, bob.
            return
        to_block.remove(pad)
        if not to_block:
            # marshal to main thread
            gobject.idle_add(on_pads_blocked)

    def on_pads_blocked():
        for src, sink in to_relink:
            src.link(sink)
        proc(*args, **kwargs)
        for src, sink in to_relink:
            src.set_blocked_async(False, lambda *x: None)
        clear_list(to_relink)

    for src, sink in to_relink:
        src.unlink(sink)
        src.set_blocked_async(True, on_pad_blocked_sync)
This is set up from the no-more-pads handler, which is called once the
demuxer pads have been added:
    def _do_seek(self):
        flags = gst.SEEK_FLAG_FLUSH
        # HACK: self.seek should work, should try that at some point
        return self.demux.seek(1.0, gst.FORMAT_TIME, flags,
                               gst.SEEK_TYPE_SET, self.start_time,
                               gst.SEEK_TYPE_SET, self.stop_time)

    def _no_more_pads(self, element):
        pads = [x.get_pad('src') for x in self.parsers]
        set_connection_blocked_async_marshalled(pads,
                                                self._do_seek)
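To narrow down the timing I could also instrument _do_seek itself, logging
whether the flushing seek succeeds and what the parser pads have negotiated
at that moment. Another untested sketch (it reuses my hypothetical
dump_framerates() helper from above; everything else is the existing
remuxer.py code):

    def _do_seek(self):
        flags = gst.SEEK_FLAG_FLUSH
        # HACK: self.seek should work, should try that at some point
        res = self.demux.seek(1.0, gst.FORMAT_TIME, flags,
                              gst.SEEK_TYPE_SET, self.start_time,
                              gst.SEEK_TYPE_SET, self.stop_time)
        # Extra tracing (my addition): did the flushing seek succeed, and
        # what do the parser src pads look like right after it fires?
        print 'seek returned', res, 'segment', self.start_time, '->', self.stop_time
        dump_framerates([x.get_pad('src') for x in self.parsers])
        return res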
Am I on the right track, and does anyone have any ideas on how to fix it?
Regards
...Peter