<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Hi Nicolas,<br>
<br>
Thanks for the reply.<br>
<br>
When I get many "discontinuity in audio timestamps" messages, the
audio lags behind the video and stays out of sync for that duration.
So, if I set the resync threshold to 40ms, will my sync problem be
solved, or at least reduced?<br>
<br>
Please let me know if any other configuration is required in the
appsrc pipeline to solve this problem.<br>
<pre class="moz-signature" cols="72">Regards,
Ritesh Prajapati,
System Level Solutions (India) Pvt.Ltd.
</pre>
On Thursday 07 July 2016 06:45 PM, Nicolas Dufresne wrote:<br>
</div>
<blockquote cite="mid:1467897327.7116.5.camel@gmail.com" type="cite">
<pre wrap="">See reply inline.
Le jeudi 07 juillet 2016 à 14:12 +0530, Ritesh Prajapati a écrit :
</pre>
<blockquote type="cite">
<pre wrap="">Hi All,
I am working on one of our product in which android 4.4 Kitkat is
running. We have one application called it as SPICE client which runs
on our product and is used to capture Audio+Video data streamed over
network from SPICE server which is installed into Ubuntu 16.04
Server.
We are using SPICE client gtk based topology in which Gstreamer
Framework 1.0 is used.
We are facing one audio sync issue like when Streaming process
of Audio+Video are started from SPICE server side at that time we are
getting video data perfectly but not getting audio data in sync
compared to video frames and are dropped for some initial durations.
so, it seems like Audio data is not synced or is lagged compare to
video data at that time.
Please find following code snippet of Gstreamer Pipeline we have
configured and used in our SPICE client android code.
</pre>
<blockquote type="cite">
<pre wrap="">#ifdef WITH_GST1AUDIO
g_strdup_printf("audio/x-
raw,format=\"S16LE\",channels=%d,rate=%d,"
"layout=interleaved", channels,
frequency);
#else
g_strdup_printf("audio/x-raw-
int,channels=%d,rate=%d,signed=(boolean)true,"
"width=16,depth=16,endianness=1234",
channels, frequency);
#endif
gchar *pipeline = g_strdup
(g_getenv("SPICE_GST_AUDIOSINK"));
if (pipeline == NULL)
pipeline = g_strdup_printf("appsrc is-live=1 do-
timestamp=1 format=time min-latency=0 caps=\"%s\" name=\"appsrc\" !
"
"audioconvert !
audioresample ! autoaudiosink name=\"audiosink\"", audio_caps);
</pre>
</blockquote>
<pre wrap="">
Also, we are getting below warning message sometimes from Android
Gstreamer Studio.
gstaudiobasesink.c:1807:gst_audio_base_sink_get_alignment:<audiosink-
actual-sink-opensles> Unexpected discontinuity in audio timestamps of
-0:00:00.076145124, resyncing
</pre>
</blockquote>
<pre wrap="">
As audio may arrive in small burst, what will happen is that the
timestamp distance between buffer becomes too small. The timestamp +
duration of the current buffer may endup after the next timestamp. In
this case the audio sink will resync. To prevent that, you should
timestamp the buffer yourself, so you pick an initial timestamp, and
add the duration. You then monitor the calculated timestamp and the
time now, if it drift over a certain threshold (generally 40ms), you
resync (and set the discontinuity flag).
In an ideal world, the streaming protocol should provide you with
timing information that let you correlate in time the video and the
audio frames. This should serve in creating timestamps and ensuring
perfect A/V sync. I don't know Spice too much, but it might not be part
of the protocol.
Nicolas</pre>
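[Editor's note] The timestamp-and-resync scheme Nicolas describes could be sketched roughly as below. This is a hypothetical illustration in plain C using nanosecond arithmetic only; the names <code>TimestampState</code>, <code>stamp_buffer</code>, and <code>DRIFT_THRESHOLD</code> are invented for this sketch. In a real appsrc-based element you would read the running time from the pipeline clock (e.g. <code>gst_clock_get_time()</code> minus the base time), set <code>GST_BUFFER_PTS</code>, and mark resyncs with <code>GST_BUFFER_FLAG_DISCONT</code>.

```c
#include <stdint.h>

/* Sketch only: keep a running timestamp, advance it by each buffer's
 * duration, and resync (flagging a discontinuity) when it drifts more
 * than 40ms from the clock. All names here are illustrative. */

#define MSEC ((uint64_t)1000000)        /* nanoseconds per millisecond */
#define DRIFT_THRESHOLD (40 * MSEC)     /* resync threshold: 40 ms */

typedef struct {
    uint64_t next_pts;   /* running timestamp for the next buffer */
    int      started;    /* have we picked an initial timestamp yet? */
} TimestampState;

/* Stamp one buffer: writes its timestamp to *pts_out and returns 1 if
 * the buffer should carry a discontinuity flag, 0 otherwise. */
static int stamp_buffer(TimestampState *st, uint64_t clock_now,
                        uint64_t duration, uint64_t *pts_out)
{
    int discont = 0;

    if (!st->started) {
        st->next_pts = clock_now;       /* pick an initial timestamp */
        st->started = 1;
        discont = 1;
    } else {
        uint64_t drift = st->next_pts > clock_now
                       ? st->next_pts - clock_now
                       : clock_now - st->next_pts;
        if (drift > DRIFT_THRESHOLD) {
            st->next_pts = clock_now;   /* resync to the clock */
            discont = 1;
        }
    }
    *pts_out = st->next_pts;
    st->next_pts += duration;           /* advance by buffer duration */
    return discont;
}
```

With this approach the audio sink sees smoothly increasing, contiguous timestamps instead of the bursty arrival times produced by <code>do-timestamp=1</code>, so it only resyncs when the source genuinely drifted past the threshold.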
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
gstreamer-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:gstreamer-devel@lists.freedesktop.org">gstreamer-devel@lists.freedesktop.org</a>
<a class="moz-txt-link-freetext" href="https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel">https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel</a>
</pre>
</blockquote>
<br>
</body>
</html>