Add audio and record RTP stream.
Andrew Borntrager
andrew.borntrager1 at gmail.com
Thu Jul 7 15:51:48 UTC 2016
Thank you for the rapid response. I am trying to use the built-in
microphone in the webcam. I obtained the device property using "pactl
list short sources" (thanks for the great tip) and inserted it as follows:
gst-launch-1.0 -v vl2src device=/dev/video0 ! queue ! -e pulserc
device="alsa_input.usb-046d_HD_Pro_Webcam_C920_118F5B1F-02-C920.analog-stereo"
! queue ! video/x-h264, width=1280, height=720, framerate=15/1 ! queue !
h264parse ! queue ! rtph264pay pt=127 config-interval=4 ! udpsink
host=***********.ddns.net port=5000
Does anyone know where I went wrong?
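For comparison, one possible corrected version of that command (a sketch, not tested here): the source element is spelled v4l2src, not "vl2src"; the `-e` flag belongs with gst-launch-1.0 itself, not mid-pipeline; and pulsesrc is a *source*, so it cannot be linked into the middle of the video branch — it has to start its own branch. Encoding the audio with Opus and sending it to a second port follows Nicolas's suggestion below; port 5001 and the Opus payload type 96 are assumptions, chosen here for illustration.

```shell
# Sketch of a corrected sender: video and audio as two separate branches.
gst-launch-1.0 -e \
  v4l2src device=/dev/video0 \
    ! queue ! video/x-h264,width=1280,height=720,framerate=15/1 \
    ! queue ! h264parse \
    ! queue ! rtph264pay pt=127 config-interval=4 \
    ! udpsink host=***********.ddns.net port=5000 \
  pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_118F5B1F-02-C920.analog-stereo" \
    ! queue ! audioconvert ! audioresample \
    ! opusenc ! rtpopuspay pt=96 \
    ! udpsink host=***********.ddns.net port=5001   # 5001 is an arbitrary choice
```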
On Thu, Jul 7, 2016 at 9:22 AM, Nicolas Dufresne
<nicolas.dufresne at gmail.com> wrote:
> Le jeudi 07 juillet 2016 à 08:31 -0400, Andrew Borntrager a écrit :
> > Hi all! I have a working pipeline running on a Raspberry Pi:
> >
> > gst-launch-1.0 -v v4l2src device=/dev/video0 ! queue ! video/x-h264,
> > width=1280, height=720, framerate=15/1 ! queue ! h264parse ! queue !
> > rtph264pay pt=127 config-interval=4 ! udpsink
> > host=***********.ddns.net port=5000
> >
> > I have a windows laptop with this:
> >
> > gst-launch-1.0 udpsrc caps="application/x-rtp, media=(string)video,
> > clock-rate=(int) 90000, encoding-name=(string)H264,
> > sampling=(string)YCbCr-4:4:4, depth=(string)8, width=(string)320,
> > height=(string)240, payload=(int)96, clock-base=(uint)4068866987,
> > seqnum-base=(uint)24582" port=5000 ! rtph264depay ! decodebin ! queue !
> > autovideosink
>
> You should use rtpjitterbuffer, with its latency property set, right
> after each udpsrc. It will remove the burst effect and ensure sync
> between audio and video.
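Applied to the receiver pipeline quoted above, that suggestion might look something like this (a sketch: the caps are reduced to what rtph264depay actually needs, payload is set to 127 to match the sender's pt=127 rather than the 96 in the original caps, and the 200 ms latency is only a starting point). A tee branch also records the incoming H.264 to a file, which covers the recording question; the filename is an arbitrary example.

```shell
# Sketch of a receiver with a jitter buffer and a recording branch.
gst-launch-1.0 -e \
  udpsrc port=5000 \
    caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=127" \
    ! rtpjitterbuffer latency=200 \
    ! rtph264depay ! h264parse \
    ! tee name=t \
  t. ! queue ! decodebin ! autovideosink \
  t. ! queue ! matroskamux ! filesink location=capture.mkv
```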
>
> >
> > This works very well. However, I would like to add audio (I'm using a
> > Logitech C920 webcam). I would also like to record the incoming stream
> > directly to the laptop (possibly with the "tee" element?). Any latency
> > optimizations would be greatly appreciated, even if they mean
> > momentary degradation of video. I'm kind of a newbie, so something I
> > can copy/paste and insert into my pipeline is also greatly appreciated!
>
> For the audio part, it's pretty much the same method. It depends on
> your Raspberry Pi setup. There are two possible audio sources: alsasrc,
> for which you need to set the device property based on the output of
> "arecord -L", or pulsesrc, which also has a device property; its list
> of sources can be obtained using "pactl list short sources". Most
> people use Opus to encode audio; the clock rate is always 48000.
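On the receiving side, an Opus RTP stream can be picked up with caps built from that advice — media=audio, clock-rate=48000, encoding-name=OPUS. A minimal sketch, assuming the audio was sent to port 5001 with payload type 96 (both arbitrary choices that just have to match the sender):

```shell
# Sketch of an Opus audio receiver; latency value is illustrative.
gst-launch-1.0 \
  udpsrc port=5001 \
    caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=OPUS,payload=96" \
    ! rtpjitterbuffer latency=200 \
    ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink
```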
>
> cheers,
> Nicolas
> _______________________________________________
> gstreamer-devel mailing list
> gstreamer-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
>
>