I am using GStreamer 0.10.30 with gst-plugins-good 0.10.25, implementing RTSP input support in a C++ application. The initial source that I am working with has both video and audio. Currently the video works.

I initially create the rtspsrc element and add it to the pipeline. In the pad-added callback, I create a tee, an input-selector, an mpegtsmux, and a udpsink. I link the 'T' to the rtspsrc, the input-selector to the mpegtsmux, and the mux to the udpsink. I am converting RTP to MP2T for distribution, and I am trying to put both the audio and the video into the MP2T stream.
<br>The first rtspsrc pad created is for the video stream. I create a queue, the rtp jitter buffer and the h264depayloader, link the queue to the 'T', jitter buffer to the queue, depayloader to the jitterbuffer, input-selector to the jitterbuffer. Video flows and displays on a remote system.<br>
<br>The second callback is for the audio stream. I create a second queue and a mp4gdepayloader. I have created a second jitter buffer but that didn't make any difference. I link the queue to the 'T' but fail to link the mp4gdepay to the second queue. When I enabled logging for GST_PADS:4, the caps from the 'T' are the video stream caps. I think that these are passed down to the second queue, causing the mp4gdepay link step to fail due to incompatible caps. My intent was to link the 'T' to the second queue to the mp4gdepay to the input-selector (select-all set to TRUE) to the mpegtsmux to the udpsink, blending the video and audio into the mpeg output.<br>
<br>1. Is this approach reasonable? I don't have much experience with MPEG streams. If not, how does one go about putting both audio and video into a MPEGTS stream?<br>2. How have I screwed up the 'T' with the caps? I tried linking the audio branch with filtered caps but that didn't make any difference.<br>
Thank you for all suggestions!

Chuck Crisler