Talking clock - timestamp problem?

David Holroyd dave at badgers-in-foil.co.uk
Mon Aug 18 14:11:41 PDT 2014


Hi!

I am trying to build a pipeline that works something like a 'talking clock',
continually announcing the current date and time as test input to another
system.

The pipeline I've built in Ruby (bindings via gir_ffi) looks like:

   appsrc ! festival ! wavparse ! audioconvert ! audioresample
     ! audio/x-raw,rate=48000,channels=2 ! audioconvert ! rtpL24pay
     ! udpsink

Buffers of text to be uttered are pushed into the appsrc, and I found that
to get the second and subsequent utterances through the pipeline, I needed
to set the state of wavparse to READY and back to PLAYING each time.
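
For concreteness, each utterance currently goes through roughly the following
(sketched here in Python/PyGObject rather than the actual Ruby/gir_ffi code;
the element names, host/port and the utter() helper are purely illustrative):

   import datetime

   import gi
   gi.require_version('Gst', '1.0')
   from gi.repository import Gst

   Gst.init(None)

   # Same shape as the pipeline above; the names on appsrc/wavparse are only
   # there so they can be looked up, and host/port are placeholders.
   pipeline = Gst.parse_launch(
       'appsrc name=src ! festival ! wavparse name=parse ! audioconvert'
       ' ! audioresample ! audio/x-raw,rate=48000,channels=2'
       ' ! audioconvert ! rtpL24pay ! udpsink host=127.0.0.1 port=5004')
   src = pipeline.get_by_name('src')
   parse = pipeline.get_by_name('parse')
   pipeline.set_state(Gst.State.PLAYING)

   def utter(text):
       # Bounce wavparse through READY; without this, only the first
       # utterance makes it through the pipeline.
       parse.set_state(Gst.State.READY)
       parse.set_state(Gst.State.PLAYING)
       # Push one utterance of text into the appsrc.
       buf = Gst.Buffer.new_wrapped(text.encode('utf-8'))
       src.emit('push-buffer', buf)

   utter('The time is ' + datetime.datetime.now().strftime('%H %M %S'))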

While I can hook up the equivalent gst-launch pipeline and listen to the
results, there are issues:

  - The RTP data for later utterances seems to be sent as fast as possible,
    rather than at the data rate of the media.

  - Looking at GST_DEBUG=5 output, I think the timestamps in the later stages
    of the pipeline begin again from 0 at the start of each utterance, rather
    than continuing on from where we left off, or starting from the system
    time 'now'.  I have tried adding a 'pts' to the buffers pushed into the
    appsrc (sketched after this list), but that didn't seem to help.

  - I would like the gaps between utterances to be filled with silence (at
    the moment there is just no RTP data sent).  I think I might be able to
    use the audiorate element to do this (see the pipeline sketch after this
    list), but at the moment it does not have the expected effect, maybe
    because of the issues noted above.
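
The pts experiment mentioned in the second point was roughly the following
(again only a sketch, reusing src and parse from the snippet earlier; the
next_pts counter and the one-second advance are just my guess at how the
timestamps ought to progress):

   # Reuses 'src' and 'parse' from the sketch above.  Each new text
   # buffer gets a running pts before being pushed.
   next_pts = 0

   def utter_with_pts(text):
       global next_pts
       parse.set_state(Gst.State.READY)
       parse.set_state(Gst.State.PLAYING)
       buf = Gst.Buffer.new_wrapped(text.encode('utf-8'))
       buf.pts = next_pts
       buf.duration = Gst.CLOCK_TIME_NONE   # duration left unknown
       src.emit('push-buffer', buf)
       next_pts += Gst.SECOND   # made-up guess at per-utterance spacing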
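
The audiorate attempt from the last point is just a matter of dropping the
element in ahead of the payloader, along these lines (whether that is the
right place for it is probably part of my question):

   appsrc ! festival ! wavparse ! audioconvert ! audioresample
     ! audio/x-raw,rate=48000,channels=2 ! audiorate ! audioconvert
     ! rtpL24pay ! udpsink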

If the timestamps coming out of wavparse are the problem, is there a way to
alter the pipeline to fix this?


Many thanks!
dave


