<div dir="ltr">Thank you for the rapid response. I am trying to use the built-in microphone in the webcam. I obtained the device property using pactl list short sources (thanks for the great tip) and inserted it as follows:<div><br><div> gst-launch-1.0 -v vl2src device=/dev/video0 ! queue ! -e pulserc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_118F5B1F-02-C920.analog-stereo" ! queue ! video/x-h264, width=1280, height=720, framerate=15/1 ! queue ! h264parse ! queue ! rtph264pay pt=127 config-interval=4 ! udpsink host=***********.<a href="http://ddns.net">ddns.net</a> port=5000</div><div><br></div><div>Does anyone know where I went wrong?</div><div><br></div><div><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Jul 7, 2016 at 9:22 AM, Nicolas Dufresne <span dir="ltr"><<a href="mailto:nicolas.dufresne@gmail.com" target="_blank">nicolas.dufresne@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On Thursday, 7 July 2016 at 08:31 -0400, Andrew Borntrager wrote:<br>
> Hi all! I have a working pipeline running on a Raspberry Pi:<br>
><br>
> gst-launch-1.0 -v v4l2src device=/dev/video0 ! queue ! video/x-h264,<br>
> width=1280, height=720, framerate=15/1 ! queue ! h264parse ! queue !<br>
> rtph264pay pt=127 config-interval=4 ! udpsink<br>
> host=***********.<a href="http://ddns.net" rel="noreferrer" target="_blank">ddns.net</a> port=5000<br>
><br>
> I have a windows laptop with this:<br>
><br>
> gst-launch-1.0 udpsrc caps="application/x-rtp, media=(string)video,<br>
> clock-rate=(int) 90000, encoding-name=(string)H264,<br>
> sampling=(string)YCbCr-4:4:4, depth=(string)8, width=(string)320,<br>
> height=(string)240, payload=(int)96, clock-base=(uint)4068866987,<br>
> seqnum-base=(uint)24582" port=5000 ! rtph264depay ! decodebin ! queue !<br>
> autovideosink<br>
<br>
</span>You should use rtpjitterbuffer and set its latency property, right<br>
after each udpsrc. It will remove the burst effect and ensure sync<br>
between audio and video.<br>
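Putting that together with the tee idea from the quoted question, a receiver sketch might look like the following (untested; the 200 ms latency, port numbers, and capture.mp4 filename are placeholder choices, and the caps are trimmed to the essentials):<br>

```shell
# Hedged sketch, not a tested pipeline: rtpjitterbuffer sits directly after
# udpsrc, then tee splits into a live display branch and an MP4 recording
# branch. -e makes Ctrl-C send EOS so the MP4 file is finalized cleanly.
gst-launch-1.0 -e udpsrc port=5000 \
    caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" \
  ! rtpjitterbuffer latency=200 \
  ! rtph264depay ! h264parse ! tee name=t \
  t. ! queue ! decodebin ! autovideosink \
  t. ! queue ! mp4mux ! filesink location=capture.mp4
```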
<span class=""><br>
><br>
> This works very well. However, I would like to add audio (I'm using a<br>
> Logitech C920 webcam). Also, I would like to record the incoming stream<br>
> directly to laptop (possibly the "tee" command??). Any latency<br>
> optimizations would be greatly appreciated, even if it means<br>
> momentary degradation of video. I'm kind of a newbie, so copy/paste<br>
> and inserting into my pipeline is also greatly appreciated! <br>
<br>
</span>For the audio part, it's pretty much the same method. It depends on<br>
your Raspberry Pi setup. There are two possible audio sources: alsasrc, for<br>
which you need to set the device property based on the output of<br>
arecord -L, or pulsesrc, which also has a device property; its list<br>
of sources can be obtained using "pactl list short sources". Most<br>
people use Opus to encode audio; its clock rate is always 48000.<br>
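As a rough sketch (untested; the device string is the one reported by "pactl list short sources" on your system, and port 5001 is an arbitrary choice to keep the audio stream separate from the video on 5000):<br>

```shell
# Hedged sketch: capture the webcam microphone via PulseAudio, encode with
# Opus (which always uses a 48000 RTP clock rate), and send it as RTP on
# its own UDP port alongside the existing video pipeline.
gst-launch-1.0 pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_118F5B1F-02-C920.analog-stereo" \
  ! queue ! audioconvert ! audioresample \
  ! opusenc ! rtpopuspay pt=96 \
  ! udpsink host=***********.ddns.net port=5001
```

On the receiving side, the matching branch would be along the lines of udpsrc ! rtpjitterbuffer ! rtpopusdepay ! opusdec ! autoaudiosink, again with the latency property set on the jitter buffer.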
<br>
cheers,<br>
Nicolas<br>_______________________________________________<br>
gstreamer-devel mailing list<br>
<a href="mailto:gstreamer-devel@lists.freedesktop.org">gstreamer-devel@lists.freedesktop.org</a><br>
<a href="https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel" rel="noreferrer" target="_blank">https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel</a><br>
<br></blockquote></div><br></div>