a/v freeze while flushing audio chain of pipeline in running state (clock and renderer sync related)
Nitin Mahajan
rise.era at gmail.com
Thu Jun 8 11:02:45 UTC 2017
It's blocked on:
ret = gst_clock_id_wait (sink->priv->cached_clock_id, jitter);
/* A blocking wait is performed on the clock. We save the ClockID
* so we can unlock the entry at any time. While we are blocking, we
* release the PREROLL_LOCK so that other threads can interrupt the
* entry. */
...
res = cclass->wait (clock, entry, jitter);
...
What could cause the GstSystemClock to wait forever inside:
/* now wait on the entry, it either times out or the fd is written.
 * The status of the entry is only BUSY around the poll. */
pollret = gst_poll_wait (sysclock->priv->timer, diff);
This part is not clear, especially the clock and synchronisation management.
Note also that the clock ID reports its name as "GstPulseSinkClock", which is
not clear to us either.
Is the GStreamer system clock linked with pulsesink?
We also noticed earlier that there is one thread:
#0 0xb7fdd428 in __kernel_vsyscall ()
#1 0xb7ab501b in poll () at ../sysdeps/unix/syscall-template.S:81
#2 0xac6e5bbd in poll (__timeout=1500, __nfds=2, __fds=0xa6c02d68) at
/usr/include/i386-linux-gnu/bits/poll2.h:46
#3 poll_func (ufds=0xa6c02d68, nfds=2, timeout=1500, userdata=0xac70b508)
at pulse/thread-mainloop.c:69
#4 0xac6d44ee in pa_mainloop_poll (m=m@entry=0xac70b460) at
pulse/mainloop.c:850
#5 0xac6d4d1d in pa_mainloop_iterate (m=m@entry=0xac70b460,
block=block@entry=1, retval=retval@entry=0x0) at pulse/mainloop.c:932
#6 0xac6d4df4 in pa_mainloop_run (m=0xac70b460, retval=retval@entry=0x0)
at pulse/mainloop.c:950
#7 0xac6e5b5c in thread (userdata=0xac705660) at pulse/thread-mainloop.c:88
#8 0xac68e4dd in internal_thread_func (userdata=0xac70b598) at
pulsecore/thread-posix.c:83
#9 0xb7b8df72 in start_thread (arg=0xa753eb40) at pthread_create.c:312
#10 0xb7ac3f8e in clone () at ../sysdeps/unix/sysv/linux/i386/clone.S:129
Are they related? Is there any clue that would help establish whether, if the
audio chain is empty of buffers, the video sink will wait forever?
Any link or suggestion to build this understanding and analysis would be very
helpful.
Thanks,
--Nitin
On Thu, Jun 8, 2017 at 2:18 PM, Nitin Mahajan <rise.era at gmail.com> wrote:
> I missed mentioning that the logs are captured with the gst-validate-1.0
> application.
> The scenario is a track switch every 5 seconds.
> The adaptive demux plugin outputs video and audio elementary streams (ES).
> The sequence of operations done on adaptive demux plugin are as follows:
> On audio src pad of demux:
> Flush buffered audio packets:
> - push gst_event_new_flush_start()
> - push gst_event_new_flush_stop(FALSE)
> - push new segment event
> - push buffers with timestamps corresponding to the current presentation
> time
> It works 2-3 times but then freezes.
> --Nitin
>
> On Thu, Jun 8, 2017 at 2:13 PM, Nitin Mahajan <rise.era at gmail.com> wrote:
>
>> Many thanks. Attached is the basesink debug output captured when the a/v
>> freeze occurred (freeze_basesink_traces.txt.zip).
>> I have also captured some info on the blocking thread below.
>> While analysing the various threads, it seems that the audio queue is empty
>> after flushing the audio chain (upon track change) and the video queue is
>> full. The thread that should push the next video packet is blocked on the
>> multiqueue, waiting for space to enqueue, but the video renderer itself is
>> blocked. Because of this, the complete video pipeline is full, buffers are
>> not being consumed, and the thread responsible for pushing audio packets
>> from the demux into the pipeline is blocked.
>> Could the reason be that the audio queue is empty post-flush while the
>> video queue is full, so basesink waits forever, triggering a deadlock?
>> Also, may I request a reference implementation where an on-the-fly flush
>> start/stop is done on one of the decode chains (like the audio decode chain
>> in this case)?
>>
>> (gdb) up
>> #4 0xb7d8d76d in gst_system_clock_id_wait_jitter_unlocked
>> (clock=0xad1672e0, entry=0xac732140, jitter=0xa09fe650, restart=1) at
>> gstsystemclock.c:684
>> 684 pollret = gst_poll_wait (sysclock->priv->timer, diff);
>> (gdb) p diff
>> $1 = 16433000
>> (gdb) up
>> #5 0xb7d3c1aa in gst_clock_id_wait (id=0xac732140,
>> jitter=jitter@entry=0xa09fe650) at gstclock.c:539
>> 539 res = cclass->wait (clock, entry, jitter);
>> (gdb) p jitter
>> $2 = (GstClockTimeDiff *) 0xa09fe650
>> (gdb) p *jitter
>> $3 = -16433000
>> (gdb) p *clock
>> $4 = {object = {object = {g_type_instance = {g_class = 0xac708098},
>> ref_count = 46, qdata = 0x0}, lock = {p = 0xac705a28, i = {2893044264, 0}},
>> name = 0xac705a68 "GstPulseSinkClock", parent = 0x0,
>> flags = 752, control_bindings = 0x0, control_rate = 100000000,
>> last_sync = 18446744073709551615, _gst_reserved = 0x0}, priv = 0xad167278,
>> _gst_reserved = {0x0, 0x0, 0x0, 0x0}}
>> (gdb) p *entry
>> $6 = {refcount = 1, clock = 0xad1672e0, type = GST_CLOCK_ENTRY_SINGLE,
>> time = 25083000000, interval = 18446744073709551615, status =
>> GST_CLOCK_BUSY, func = 0x0, user_data = 0x0, destroy_data = 0x0,
>> unscheduled = 0, woken_up = 0, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}
>> (gdb)
>> $7 = {refcount = 1, clock = 0xad1672e0, type = GST_CLOCK_ENTRY_SINGLE,
>> time = 25083000000, interval = 18446744073709551615, status =
>> GST_CLOCK_BUSY, func = 0x0, user_data = 0x0, destroy_data = 0x0,
>> unscheduled = 0, woken_up = 0, _gst_reserved = {0x0, 0x0, 0x0, 0x0}}
>> (gdb) up
>> #6 0xb77f1930 in gst_base_sink_wait_clock (sink=sink@entry=0xad30c6d8,
>> time=<optimized out>, jitter=jitter@entry=0xa09fe650) at
>> gstbasesink.c:2104
>> 2104 ret = gst_clock_id_wait (sink->priv->cached_clock_id, jitter);
>> (gdb) p jitter
>> $8 = (GstClockTimeDiff *) 0xa09fe650
>> (gdb) p *jitter
>> $9 = -16433000
>> (gdb) up
>> #7 0xb77f3317 in gst_base_sink_do_sync (basesink=basesink@entry=0xad30c6d8,
>> obj=obj@entry=0xa0018220, late=late@entry=0xa09fe7d8,
>> step_end=step_end@entry=0xa09fe7dc) at gstbasesink.c:2491
>> 2491 status = gst_base_sink_wait_clock (basesink, stime, &jitter);
>> (gdb) p stime
>> $10 = <optimized out>
>> (gdb) p jitter
>> $11 = -16433000
>> (gdb) p *basesink
>> $12 = {element = {object = {object = {g_type_instance = {g_class =
>> 0xad30b220}, ref_count = 4, qdata = 0xac7763b8}, lock = {p = 0xad30c8b8, i
>> = {2905655480, 0}}, name = 0xad32f5f0 "irdetovideosink0",
>> parent = 0xac72b048, flags = 32, control_bindings = 0x0,
>> control_rate = 100000000, last_sync = 18446744073709551615, _gst_reserved =
>> 0x0}, state_lock = {p = 0xad30c8d8, i = {0, 0}}, state_cond = {
>> p = 0xad30c8f8, i = {0, 0}}, state_cookie = 3, target_state =
>> GST_STATE_PLAYING, current_state = GST_STATE_PLAYING, next_state =
>> GST_STATE_VOID_PENDING, pending_state = GST_STATE_VOID_PENDING,
>> last_return = GST_STATE_CHANGE_SUCCESS, bus = 0xac706298, clock =
>> 0xad1672e0, base_time = 0, start_time = 0, numpads = 1, pads = 0xadd5c5c0,
>> numsrcpads = 0, srcpads = 0x0, numsinkpads = 1,
>> sinkpads = 0xadd5c5d0, pads_cookie = 1, _gst_reserved = {0x0, 0x0,
>> 0x0, 0x0}}, sinkpad = 0xad1664f8, pad_mode = GST_PAD_MODE_PUSH, offset = 0,
>> can_activate_pull = 0, can_activate_push = 1,
>> preroll_lock = {p = 0xad30ca78, i = {2905655928, 0}}, preroll_cond = {p
>> = 0xad30ca98, i = {0, 0}}, eos = 0, need_preroll = 0, have_preroll = 0,
>> playing_async = 0, have_newsegment = 1, segment = {
>> flags = GST_SEGMENT_FLAG_NONE, rate = 1, applied_rate = 1, format =
>> GST_FORMAT_TIME, base = 0, offset = 0, start = 0, stop =
>> 18446744073709551615, time = 0, position = 25124666592,
>> duration = 18446744073709551615, _gst_reserved = {0x0, 0x0, 0x0,
>> 0x0}}, clock_id = 0xac732140, sync = 1, flushing = 0, running = 0,
>> max_lateness = 20000000, priv = 0xad30c500, _gst_reserved = {
>> 0x0 <repeats 20 times>}}
>> (gdb)
>>
>> --Nitin
>>
>>
>> On Thu, Jun 8, 2017 at 12:43 PM, Arun Raghavan <arun at arunraghavan.net>
>> wrote:
>>
>>> On Thu, 1 Jun 2017, at 10:40 PM, Nitin Mahajan wrote:
>>> > Please note that flush-stop is passed with FALSE on purpose, so that
>>> > the clock timing is not reset, as thereafter a buffer with a valid
>>> > timestamp for a different language is pushed. What could trigger the
>>> > queue thread to block / wait inside the video renderer basesink because
>>> > of the pulse audio sink? The test is running on x86 with GStreamer
>>> > 1.4.5. A suggestion/pointer will be very helpful, as there is not much
>>> > clue in the traces to establish the design relationship between the
>>> > audio and video sinks and the blocking wait.
>>>
>>> It might be a good idea to look at the basesink debug output -- what is
>>> the timestamp that you're pushing out, and what is the timestamp against
>>> which the sink is waiting?
>>>
>>> -- Arun
>>> _______________________________________________
>>> gstreamer-devel mailing list
>>> gstreamer-devel at lists.freedesktop.org
>>> https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
>>>
>>
>>
>