Problems trying to capture RTSP stream

Gary Thomas gary at mlbassoc.com
Wed Feb 6 06:46:32 PST 2013


On 2013-01-31 08:29, Gary Thomas wrote:
> I'm trying to both play (on my desktop) and record
> an RTSP stream.  My source RTSP stream looks like this:
>    rtsp-server "(v4l2src device=/dev/video1 queue-size=16  \
>                    ! video/x-raw-yuv,format=(fourcc)UYVY,width=720,height=480,framerate=30/1 \
>                    ! videoscale \
>                    ! video/x-raw-yuv,format=(fourcc)UYVY,width=320,height=240 \
>                    ! ffmpegcolorspace \
>                    ! video/x-raw-yuv,format=(fourcc)NV12 \
>                    ! queue ! ducatih264enc ! rtph264pay name=pay0 pt=96  \
>                  alsasrc device=hw:0,1 ! queue ! audioconvert ! ffenc_mp2 ! rtpmpapay name=pay1 pt=97
>    )"
> n.b. the 'ducatih264enc' element is a hardware H.264 encoder.
>
> This command works well for playing the audio+video stream:
>    gst-launch rtspsrc location=rtsp://192.168.1.136:8554/test name=src \
>      src. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! xvimagesink \
>      src. ! queue ! rtpmpadepay ! mpegaudioparse ! flump3dec ! audioconvert ! alsasink
>
> However, when I try to record the video as well using this pipeline:
>    gst-launch -e rtspsrc location=rtsp://192.168.1.136:8554/test name=src \
>      src. ! tee name=v-t \
>        v-t. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! xvimagesink \
>        v-t. ! queue ! rtph264depay ! h264parse ! mp4mux ! filesink location=/tmp/video0.mp4 \
>      src. ! queue ! rtpmpadepay ! mpegaudioparse ! flump3dec ! audioconvert ! alsasink
>
> it fails with this message (a bit reformatted for clarity):
>    <mp4mux0> pad video_00 refused renegotiation
>          to video/x-h264, stream-format=(string)avc, alignment=(string)au, width=(int)320, height=(int)240,
>            parsed=(boolean)true, codec_data=(buffer)01640028ffe1000927640028acca8141f901000528de01ae2c
>          from video/x-h264, stream-format=(string)avc, alignment=(string)au, width=(int)320, height=(int)240,
>            parsed=(boolean)true, codec_data=(buffer)01640028ffe1000a27640028acca8141f90001000628de01ae2c0
>
> I can run this command on my server machine (where the camera is) and record
> just fine:
>    gst-launch -e mp4mux name=mux ! filesink location=/dcim/Camera/2013-01-31.14:35:25.mp4 \
>      v4l2src device=/dev/video1 queue-size=16  \
>          ! video/x-raw-yuv,format='(fourcc)UYVY',width=720,height=480,framerate=30/1 \
>          ! ffmpegcolorspace \
>          ! video/x-raw-yuv,format='(fourcc)NV12' \
>          ! queue ! ducatih264enc ! h264parse ! mux.  \
>      alsasrc device=hw:0,1 ! queue ! audioconvert ! ffenc_alac ! mux.
>
> Since that pipeline works, I thought maybe the problem was that the mux wasn't
> being fed both audio and video streams, so I changed my original pipeline to:
>    gst-launch -e rtspsrc location=rtsp://192.168.1.136:8554/test name=src \
>      mp4mux name=av-mux ! filesink location=/tmp/video0.mp4 \
>      src. ! tee name=v-t \
>        v-t. ! queue ! rtph264depay ! h264parse ! ffdec_h264 ! xvimagesink \
>        v-t. ! queue ! rtph264depay ! h264parse ! av-mux. \
>      src. ! tee name=a-t \
>        a-t. ! queue ! rtpmpadepay ! mpegaudioparse ! flump3dec ! audioconvert ! alsasink \
>        a-t. ! queue ! audioconvert ! ffenc_alac ! av-mux.
>
> However, this only yields an "Internal dataflow error" with no supporting information.
> I turned on the debugging - the log is at http://www.mlbassoc.com/misc/gst.log
>
> Any ideas what I'm doing wrong and how to make this work?  Maybe I have the
> tee(s) in the wrong place, or have made some similar structural error?

Does anyone have an idea what my problem here is?  This is my last hurdle to getting
my video streaming application totally working (and I'd like to get it done!)

>
> Once I get this going, I want to be able to selectively choose how this
> pipeline gets built.  I'll be creating the pipeline in Python, and I need
> to figure out how to add the video parts only if the source stream has
> video (it can have audio, video, or both), maybe recording locally, maybe
> not, etc.  Any pointers on how to do that would also be appreciated.

This part I've figured out by creating the various pieces in separate
'bins' and stitching them together only when needed; a rough sketch of
the approach is below.
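
Roughly, the skeleton is something like this (a minimal, untested sketch
using the gst-python 0.10 bindings to match the 0.10 elements above; the
make_video_bin/make_audio_bin helper names are just illustrative):

    import gobject
    import pygst
    pygst.require("0.10")
    import gst

    def make_video_bin():
        # ghost_unlinked_pads=True gives the bin a ghosted "sink" pad
        return gst.parse_bin_from_description(
            "rtph264depay ! h264parse ! ffdec_h264 ! xvimagesink", True)

    def make_audio_bin():
        return gst.parse_bin_from_description(
            "rtpmpadepay ! mpegaudioparse ! flump3dec "
            "! audioconvert ! alsasink", True)

    gobject.threads_init()
    pipeline = gst.Pipeline("player")
    src = gst.element_factory_make("rtspsrc", "src")
    src.set_property("location", "rtsp://192.168.1.136:8554/test")
    pipeline.add(src)

    def on_pad_added(rtspsrc, pad):
        # rtspsrc pads carry application/x-rtp caps with a "media" field,
        # so only attach the branch that matches the stream type
        media = pad.get_caps()[0]["media"]
        if media == "video":
            branch = make_video_bin()
        elif media == "audio":
            branch = make_audio_bin()
        else:
            return
        pipeline.add(branch)
        branch.sync_state_with_parent()
        pad.link(branch.get_pad("sink"))

    src.connect("pad-added", on_pad_added)
    pipeline.set_state(gst.STATE_PLAYING)
    gobject.MainLoop().run()

The recording branches would be built and attached the same way, only
when recording is actually wanted.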

Thanks for the help

-- 
------------------------------------------------------------
Gary Thomas                 |  Consulting for the
MLB Associates              |    Embedded world
------------------------------------------------------------

