[gst-devel] typefind and modifying a running stream (python bindings)
dlenski at gmail.com
Mon Sep 10 17:34:28 CEST 2007
Sorry if this is the wrong place to ask questions about code written with
the Python bindings... there doesn't seem to be a python-gst mailing list,
so I figured I'd give this a shot.
I'm new to gstreamer and trying to write some code that will slurp a stream
off the web into a file, and transcode it to MP3 if not already in that
format. The goal is to automatically convert streams to a format that most
portable audio players can understand.
I have some working code, but I'm not exactly sure "why" it works ;-)
Basically, what I do is I set up a gnomevfssrc ! typefind pipeline, and
connect the have-type signal of the typefind element. Then I start the
pipeline playing. When the have-type signal is received, I pause the
pipeline, and then:
(a) if the detected stream is MP3, I make an identity element as the
transcoder: transcoder = identity
(b) if it's not MP3, I make a real transcoder: transcoder = decodebin !
audioconvert ! lame
Next, I link this transcoder to the end of the pipeline and link that to a
filesink, so I end up with:
gnomevfssrc ! typefind ! transcoder ! filesink
Finally, I unpause the pipeline, and the MP3 output stream magically goes
into the output file. Hooray!
Here are a few things I'm trying to understand:
* Are there any subtleties to modifying a running pipeline? When I pause
the pipeline, append to it, and then unpause it, should I be worried about
data getting lost, or buffer overflows, or anything like that? It doesn't
seem to be a problem so far, but maybe I've just gotten lucky.
* If I add() the transcoder and sink elements to the pipeline BEFORE I'm
ready to actually link their pads, then the pipeline runs, and typefind
works... but I get no output. Why?
* typefind sometimes identifies MP3 files as application/id3 rather than
audio/mpeg. Is there a more correct and reliable way to identify an MP3
stream?
* How can I identify the duration of the stream processed by the pipeline?
pipeline.get_clock() seems to keep track of the REAL time, rather than the
amount of audio processed.
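On the application/id3 point: I gather that's what typefind reports when the
file starts with an ID3v2 tag. As a stopgap I've considered sniffing the first
bytes of the stream myself, outside GStreamer; a rough sketch (the helper name
and return values are my own):

```python
def sniff_audio(data):
    """Classify the first bytes of an audio stream.

    Returns 'id3' for an ID3v2-tagged file (almost always an MP3),
    'mp3' for a bare MPEG audio frame, or None if unrecognized.
    """
    # An ID3v2 tag starts with the literal bytes "ID3"
    if data[:3] == b'ID3':
        return 'id3'
    # An MPEG audio frame starts with an 11-bit sync pattern:
    # 0xFF, then the top three bits of the next byte all set
    if len(data) >= 2 and data[0] == 0xFF and (data[1] & 0xE0) == 0xE0:
        return 'mp3'
    return None
```

(For the ID3 case I'd still have to skip past the tag to confirm an MP3 frame
follows, but in practice ID3-tagged files are MP3s.)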
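On the duration point, the best fallback I've come up with for
constant-bitrate MP3s is estimating from byte count and bitrate; a rough
sketch (names are mine, and it ignores ID3 tag overhead):

```python
def cbr_duration_seconds(size_bytes, bitrate_bps):
    """Estimate the duration of a constant-bitrate MP3 stream.

    bitrate_bps is in bits per second (e.g. 128000 for 128 kbps).
    Any ID3 tag bytes are counted as audio, so treat the result
    as approximate.
    """
    return size_bytes * 8.0 / bitrate_bps
```

For example, a 960000-byte file at 128 kbps comes out to 60 seconds. Of
course this is no help for VBR streams, which is why I'd still like a way to
ask the pipeline itself.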
Any help will be appreciated! Thanks a lot!
PS- The relevant parts of my code so far:
import gst

# transcode_to_mp3() (definition omitted) returns a bin containing
# decodebin, audioconvert, and lame, to auto-convert any audio stream
# to MP3

def __init__(self, uri, filename):
    # stream reader
    self.stream = gst.element_factory_make('gnomevfssrc')
    self.stream.set_property('location', uri)
    # automatically identify stream type
    self.typefind = gst.element_factory_make('typefind')
    self.typefind.connect('have-type', self.autoconfigure)
    # convert arbitrary audio stream to MP3...
    self.mp3ify = transcode_to_mp3()
    # ...or do nothing to a stream that is already MP3
    self.identity = gst.element_factory_make('identity')
    # write to file output
    self.sink = gst.element_factory_make('filesink')
    self.sink.set_property('location', filename)
    # partially assemble pipeline; the transcoder and sink are added
    # and linked in autoconfigure() once the stream type is known
    self.pipeline = gst.Pipeline()
    self.pipeline.add(self.stream, self.typefind)
    self.stream.link(self.typefind)

def autoconfigure(self, tf, prob, caps):
    ismp3 = (caps[0].get_name() == 'audio/mpeg' and caps[0]['layer'] == 3)
    stage = self.identity if ismp3 else self.mp3ify
    # pause, append the chosen stage plus the sink, then resume
    self.pipeline.set_state(gst.STATE_PAUSED)
    self.pipeline.add(stage, self.sink)
    gst.element_link_many(self.typefind, stage, self.sink)
    self.pipeline.set_state(gst.STATE_PLAYING)