[gst-devel] typefind and modifying a running stream (python bindings)
Daniel Lenski
dlenski at gmail.com
Mon Sep 10 17:34:28 CEST 2007
Hi all,
Sorry if this is the wrong place to ask questions about code written with
the Python bindings... there doesn't seem to be a python-gst mailing list,
so I figured I'd give this a shot.
I'm new to gstreamer and trying to write some code that will slurp a stream
off the web into a file, and transcode it to MP3 if not already in that
format. The application is to automatically convert it to a form that most
portable audio players can understand.
I have some working code, but I'm not exactly sure *why* it works ;-)
Basically, I set up a gnomevfssrc ! typefind pipeline and connect to the
typefind element's have-type signal. Then I start the pipeline playing. When
the have-type signal is received, I pause the pipeline, and then:
(a) if the detected stream is MP3, I make an identity element as the
transcoder: transcoder = identity
(b) if it's not MP3, I make a real transcoder: transcoder = decodebin !
audioconvert ! lame (a rough sketch of this bin follows below)
Next, I link this transcoder to the end of the pipeline and link that to a
filesink, so I end up with:
gnomevfssrc ! typefind ! transcoder ! filesink
Finally, I unpause the pipeline, and the MP3 output stream magically goes
into the output file. Hooray!
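For reference, the transcode_to_mp3 bin mentioned in (b) is just a gst.Bin
wrapping decodebin ! audioconvert ! lame; something like the sketch below
(simplified, so details may differ from my actual code). Since decodebin only
creates its source pad once it has recognized the stream, it gets linked to
audioconvert from the new-decoded-pad callback, and the bin exposes its two
ends as ghost pads:

class transcode_to_mp3(gst.Bin):
    # auto-convert any audio stream to MP3
    def __init__(self):
        gst.Bin.__init__(self)
        self.decode = gst.element_factory_make('decodebin')
        self.convert = gst.element_factory_make('audioconvert')
        self.encode = gst.element_factory_make('lame')
        self.add(self.decode, self.convert, self.encode)
        self.convert.link(self.encode)
        # decodebin's source pad doesn't exist until the stream type is
        # known, so defer linking it to audioconvert until then
        self.decode.connect('new-decoded-pad', self.on_new_decoded_pad)
        # expose decodebin's sink and lame's src as the bin's own pads
        self.add_pad(gst.GhostPad('sink', self.decode.get_pad('sink')))
        self.add_pad(gst.GhostPad('src', self.encode.get_pad('src')))

    def on_new_decoded_pad(self, dbin, pad, islast):
        pad.link(self.convert.get_pad('sink'))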
Here are a few things I'm trying to understand:
* Are there any subtleties to modifying a running pipeline? When I pause the
pipeline, append elements to it, and then unpause it, should I be worried about
data getting lost, buffer overflows, or anything like that? It doesn't seem to
be a problem, but maybe I've just gotten lucky.
* If I add() the transcoder and sink elements to the pipeline BEFORE I'm
ready to actually link their pads, then the pipeline runs, and typefind
works... but I get no output. Why?
* Typefind sometimes identifies MP3 files as application/id3 rather than
audio/mpeg. Is there a more correct and reliable way to identify an MP3
stream?
* How can I determine the duration of the stream that the pipeline has
processed? pipeline.get_clock() seems to track wall-clock time rather than the
amount of audio processed. (My guess at an alternative is sketched just after
this list.)
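Regarding that last question, my guess is that something like the following is
what I should be using instead of get_clock(), though I haven't verified the
query API:

def processed_seconds(pipeline):
    # ask the pipeline how much stream time has gone through it, rather
    # than how much wall-clock time has elapsed; not sure this is right
    position, format = pipeline.query_position(gst.FORMAT_TIME)
    return position / float(gst.SECOND)   # nanoseconds -> seconds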
Any help will be appreciated! Thanks a lot!
Dan Lenski
PS- The relevant parts of my code so far:
import pygst
pygst.require('0.10')
import gst

class transcode_to_mp3(gst.Bin):
    # a bin containing decodebin, audioconvert, and lame
    # to auto-convert any audio stream to MP3
    ...

class StreamRecorder:
    def __init__(self, uri, filename):
        # stream reader
        self.stream = gst.element_factory_make('gnomevfssrc')
        self.stream.set_property('location', uri)

        # automatically identify stream type
        self.typefind = gst.element_factory_make('typefind')
        self.typefind.set_property('minimum', 100)
        self.typefind.connect('have-type', self.autoconfigure)

        # convert arbitrary audio stream to MP3, or do nothing to an MP3
        self.transcode = transcode_to_mp3()
        self.identity = gst.element_factory_make('identity')

        # write to file output
        self.sink = gst.element_factory_make('filesink')
        self.sink.set_property('location', filename)

        # partially assemble pipeline
        self.pipeline = gst.Pipeline()
        self.pipeline.add(self.stream, self.typefind)
        self.stream.link(self.typefind)

    def autoconfigure(self, tf, prob, caps):
        # have-type fired: pause, pick the right stage, finish the pipeline
        self.pipeline.set_state(gst.STATE_PAUSED)
        ismp3 = (caps[0].get_name() == 'audio/mpeg' and caps[0]['layer'] == 3)
        if ismp3: stage = self.identity
        else:     stage = self.transcode
        self.pipeline.add(stage, self.sink)
        gst.element_link_many(self.typefind, stage, self.sink)
        self.pipeline.set_state(gst.STATE_PLAYING)

    def start(self):
        self.pipeline.set_state(gst.STATE_PLAYING)

    def stop(self):
        self.pipeline.set_state(gst.STATE_NULL)
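In case it matters, I drive it with something like this (the URI and filename
here are just placeholders, and for now I simply Ctrl-C out of the main loop):

import gobject
gobject.threads_init()   # GStreamer delivers signals from its own threads

rec = StreamRecorder('http://example.com/some_stream', 'output.mp3')
rec.start()

# keep the process alive while the pipeline runs
loop = gobject.MainLoop()
try:
    loop.run()
finally:
    rec.stop()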