trying to create a video chat app.
480437 at gmail.com
Tue Jun 23 04:21:31 PDT 2015
I was also thinking that synchronizing audio and video would be a problem.
On Tue, Jun 23, 2015 at 4:07 PM, Peter Maersk-Moller <pmaersk at gmail.com> wrote:
> Hi Nicholas.
> Unless you (1) mux (multiplex) audio and video into a single multiplexed
> stream, (2) use an RTSP server with two RTP streams, where the server tells
> clients how audio and video are synchronized, or (3) use a more complicated
> setup with rtpsession/rtpbin plus RTCP and NTP, you will not have audio and
> video synchronized. Why not use a multiplexer like mpegtsmux: encode and
> mux audio/video and send them in a single pipeline, then in another
> pipeline receive the multiplexed stream, demux it, decode it and play it.
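[Editor's note: a minimal sketch of the muxed approach described above, untested; the host/port are placeholders, and element availability (lamemp3enc, avdec_h264, avdec_mp3) depends on which GStreamer plugin sets are installed:]

```shell
# Sender: encode video and audio, mux into MPEG-TS, send on a single UDP socket
gst-launch-1.0 mpegtsmux name=mux ! udpsink host=192.168.100.3 port=5000 \
  v4l2src ! video/x-raw,width=640,height=480 ! videoconvert \
    ! x264enc tune=zerolatency bitrate=500 ! h264parse ! queue ! mux. \
  pulsesrc ! audioconvert ! audioresample \
    ! lamemp3enc ! mpegaudioparse ! queue ! mux.

# Receiver: demux the transport stream, decode and play each branch;
# A/V sync comes from the shared MPEG-TS timestamps
gst-launch-1.0 udpsrc port=5000 caps="video/mpegts,systemstream=true" \
  ! tsdemux name=demux \
  demux. ! queue ! h264parse ! avdec_h264 ! videoconvert ! autovideosink \
  demux. ! queue ! mpegaudioparse ! avdec_mp3 ! audioconvert ! autoaudiosink
```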
> On Tue, Jun 23, 2015 at 8:28 AM, Lab work <480437 at gmail.com> wrote:
>> Thanks for the reply.
>> Your answer left me a bit confused.
>> >Finally, each stream should be sent to its own socket in RTP, you
>> may multiplex them but this is a lot more work.
>> Do you mean that I should not mux them and do something like:
>> pipeline 1:
>> gst-launch-1.0 v4l2src ! video/x-raw,width=640,height=480 ! timeoverlay \
>>   ! tee name="local" ! queue ! autovideosink local. ! queue \
>>   ! x264enc tune=zerolatency byte-stream=true bitrate=500 threads=1 \
>>   ! h264parse config-interval=1 ! rtph264pay ! udpsink host=192.168.100.3 port=5000 \
>>   udpsrc port=5000 caps="application/x-rtp,payload=96,encoding-name=H264" \
>>   ! queue ! rtph264depay ! h264parse ! decodebin ! autovideosink
>> pipeline 2 :
>> gst-launch-1.0 pulsesrc ! audioconvert ! audioresample ! speexenc ! rtpspeexpay \
>>   ! udpsink host=192.168.100.3 port=4444 \
>>   udpsrc port=4444 caps="application/x-rtp, media=(string)audio, clock-rate=(int)16000, encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" \
>>   ! rtpjitterbuffer ! rtpspeexdepay ! speexdec ! audioconvert ! audioresample ! autoaudiosink
>> And playing similar pipelines on the other PC. I have tried this type of
>> approach: the video part was working, but the audio part was causing a
>> problem. When I run the second pipeline there is no audio at either end,
>> while the two pipelines work fine independently. Can you please explain
>> the reason and help me out of this problem?
>> Thank you.
>> On Mon, Jun 22, 2015 at 7:32 PM, Nicolas Dufresne <
>> nicolas.dufresne at collabora.com> wrote:
>>> Le lundi 22 juin 2015 à 17:00 +0530, Lab work a écrit :
>>> > gst-launch-1.0 oggmux name="muxer" v4l2src ! video/x-raw,
>>> > framerate=30/1, width=640, height=480 ! videoconvert ! x264enc !
>>> > multiqueue ! muxer. videotestsrc ! video/x-raw, framerate=30/1,
>>> > width=640, height=480 ! videoconvert ! x264enc ! multiqueue ! muxer.
>>> > autoaudiosrc ! audioconvert ! speexenc ! queue ! muxer. udpsink
>>> > host=127.0.0.1 port=5000
>>> So you want to stream RTP over UDP. First you need to drop this
>>> unlinked oggmux: OGG is not RTP, and it also does not work with variable
>>> framerate (which most Logitech cameras produce). Second, pick a preset
>>> on x264enc that is suited for live streaming (like tune=zerolatency).
>>> Finally, each stream should be sent to its own socket in RTP; you may
>>> multiplex them, but this is a lot more work.
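[Editor's note: applied to the pipeline quoted above, a sketch of what that advice yields, untested and with placeholder host/ports: oggmux dropped, tune=zerolatency set, and each stream sent to its own UDP socket as RTP:]

```shell
# Video: RTP/H.264 on its own socket
gst-launch-1.0 v4l2src ! video/x-raw,framerate=30/1,width=640,height=480 \
  ! videoconvert ! x264enc tune=zerolatency ! rtph264pay \
  ! udpsink host=127.0.0.1 port=5000

# Audio: RTP/Speex on a separate socket
gst-launch-1.0 autoaudiosrc ! audioconvert ! audioresample ! speexenc \
  ! rtpspeexpay ! udpsink host=127.0.0.1 port=5002
```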
>>> gstreamer-devel mailing list
>>> gstreamer-devel at lists.freedesktop.org