How to create one pipeline for both audio-video stream and only video stream.

alexey burov burov_alexey at mail.ru
Fri Feb 3 14:40:18 UTC 2017


keepingitneil, thanks a lot for your reply and your advice. Your advice
helped me.

As far as I understand, the GStreamer RTSP server first creates the
GstRTSPStream client instances from the pipeline description, and only
then connects to the source camera (for the pipeline described above).
Once the server is receiving the stream from the camera, it is no
longer possible to add or change an already created GstRTSPStream
instance. That's why I create an audio stream with silence first: if
the camera does provide audio, I change the audio part of the pipeline
in the 'pad-added' callback of the rtspsrc element, replacing the
silence element with the audio stream from the camera.

Maybe a better solution exists ...


Part of my code (it works); perhaps it will be useful to someone:


import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst


def rtspsrc_on_pad_added2(element, new_pad):
    # Called for every pad rtspsrc exposes once the camera answers.
    caps_string = new_pad.query_caps(None).to_string()
    pipeline = element.get_parent()
    if 'media=(string)video' in caps_string:
        videobin = pipeline.get_by_name('videobin')
        new_pad.link(videobin.get_static_pad('sink'))
    elif 'media=(string)audio' in caps_string:
        # The camera does provide audio: swap the silence source out
        # before linking the camera's audio stream in.
        audiobin = pipeline.get_by_name('audiobin')
        audiobin.change_audio_source()
        new_pad.link(audiobin.get_static_pad('sink'))
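A note on the caps matching above: query_caps() on an rtspsrc pad
returns an RTP caps string along the lines of
'application/x-rtp, media=(string)audio, clock-rate=(int)8000, ...',
so a plain substring test is enough to tell the branches apart. The
same check factored into a small helper (my own sketch, not part of
the original code):

```python
def media_kind(caps_str):
    """Classify an rtspsrc pad by the media field of its RTP caps string."""
    if 'media=(string)video' in caps_str:
        return 'video'
    if 'media=(string)audio' in caps_str:
        return 'audio'
    return None

print(media_kind('application/x-rtp, media=(string)audio, '
                 'payload=(int)0, clock-rate=(int)8000'))  # -> audio
```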


class AudioBin(Gst.Bin):

    def __init__(self):
        Gst.Bin.__init__(self, name='audiobin')

        # Sink for the (possible) camera audio; left unlinked until
        # change_audio_source() is called.
        incoming_queue = Gst.ElementFactory.make('queue', 'incoming_queue')

        # Fake source, used while no camera audio exists.
        silence = Gst.ElementFactory.make('audiotestsrc', 'silence')
        silence.set_property('wave', 4)  # 4 = silence

        # Encoder and payloader.
        mulawenc = Gst.ElementFactory.make('mulawenc', 'mulawenc')
        rtppcmupay = Gst.ElementFactory.make('rtppcmupay', 'pay1')
        rtppcmupay.set_property('pt', 97)

        self.add(incoming_queue)
        self.add(silence)
        self.add(mulawenc)
        self.add(rtppcmupay)

        silence.link(mulawenc)
        mulawenc.link(rtppcmupay)

        self.add_pad(Gst.GhostPad.new(
            'sink', incoming_queue.get_static_pad('sink')))

    def change_audio_source(self):
        # Drop the silence source ...
        silence = self.get_by_name('silence')
        silence.set_state(Gst.State.NULL)
        self.remove(silence)

        # ... and decode the camera audio into the existing encoder chain.
        incoming_queue = self.get_by_name('incoming_queue')
        mulawenc = self.get_by_name('mulawenc')
        audio_decoder = Gst.ElementFactory.make('decodebin', 'audio_decoder')

        self.add(audio_decoder)

        # decodebin creates its source pad only once the stream is typed,
        # so it is linked to the encoder from its 'pad-added' callback.
        audio_decoder.connect('pad-added', decodebin_on_pad_added, mulawenc)

        incoming_queue.link(audio_decoder)

        # Bring the new element up to the bin's current state.
        success, current_state, _pending = self.get_state(Gst.CLOCK_TIME_NONE)
        if success != Gst.StateChangeReturn.SUCCESS:
            raise RuntimeError('could not query AudioBin state')
        audio_decoder.set_state(current_state)
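The decodebin_on_pad_added callback isn't shown above. Since decodebin
here only ever sees the camera's audio stream, a minimal version (my
own sketch, assuming exactly that; depending on the camera's audio
format an audioconvert/audioresample between decodebin and mulawenc
may also be needed) just links the new pad to the encoder's sink:

```python
def decodebin_on_pad_added(decodebin, new_pad, mulawenc):
    # decodebin exposes its source pad once the stream is typed;
    # link it straight into the mu-law encoder's sink pad.
    sink_pad = mulawenc.get_static_pad('sink')
    if not sink_pad.is_linked():
        new_pad.link(sink_pad)
```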




--
View this message in context: http://gstreamer-devel.966125.n4.nabble.com/How-to-create-one-pipeline-for-both-audio-video-stream-and-only-video-stream-tp4681575p4681706.html
Sent from the GStreamer-devel mailing list archive at Nabble.com.

