How to sync audio and video when playing back samples generated by appsink?
Shaf
shaf.nttf at gmail.com
Fri May 31 02:49:33 UTC 2019
I am also facing a similar issue. One hint is to use the clock time on
the samples buffered from the sink pads and push them to the appsrc.
I use two different methods to push the buffers for audio and video in the
"need-data" signal of appsrc:
*For Video*
buffer = gst_sample_get_buffer(sample);
GST_BUFFER_PTS(buffer) = ctx->timestamp;
/* one frame lasts GST_SECOND * fps_d / fps_n nanoseconds */
GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale_int(GST_SECOND,
    fps_d, fps_n);
ctx->timestamp += GST_BUFFER_DURATION(buffer);
g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
*Note*: /I use the framerate (taken from the video caps) to compute the
per-frame duration and timestamp the buffers/
*For Audio :*
buffer = gst_sample_get_buffer(sample);
if (buffer) {
  GstSegment *seg = gst_sample_get_segment(sample);
  GstClockTime pts, dts;

  /* Convert the PTS/DTS to running time so they start from 0 */
  pts = GST_BUFFER_PTS(buffer);
  if (GST_CLOCK_TIME_IS_VALID(pts))
    pts = gst_segment_to_running_time(seg, GST_FORMAT_TIME, pts);
  dts = GST_BUFFER_DTS(buffer);
  if (GST_CLOCK_TIME_IS_VALID(dts))
    dts = gst_segment_to_running_time(seg, GST_FORMAT_TIME, dts);

  /* Make a writable copy so we can adjust the timestamps */
  buffer = gst_buffer_copy(buffer);
  GST_BUFFER_PTS(buffer) = pts;
  GST_BUFFER_DTS(buffer) = dts;
  g_signal_emit_by_name(appsrc, "push-buffer", buffer, &ret);
  /* the "push-buffer" action signal does not take ownership, so drop our copy */
  gst_buffer_unref(buffer);
}
I am not sure how to make this handling consistent for audio and video. My
guess is that constructing buffers on the same clock-time base can keep the
audio and video in sync, but it all depends on the appsink data.
If you have succeeded in syncing audio and video, please let me know.