trying to create a video chat app.

Peter Maersk-Moller pmaersk at gmail.com
Tue Jun 23 04:57:13 PDT 2015


So go ahead and create a single sending pipeline, one for machine A and one
for machine B, that captures audio and video, encodes, muxes,
RTP-encapsulates and sends over UDP.

Then create a single receiving pipeline, one for host A and one for host B,
that receives over UDP, RTP-decapsulates, demuxes, decodes (audio and video)
and plays audio and video with sync=true.

That'll give you minimal delay (although how much depends on the codec), and
it will provide synchronization. Note that not all muxers can take all kinds
of formats; running gst-inspect-1.0 on the muxer element name will list what
kinds of formats it accepts (see the example further below). Note that
*sometimes*, running a parser after encoding and before muxing is necessary.
Like this

 ... x264enc tune=zerolatency ! *h264parse* ! queue ! mpegtsmux name=muxer
! ..... faac ! *aacparse* ! queue ! muxer.
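
Putting it all together, the full pair of pipelines could look roughly like
this. This is an untested sketch: it reuses the receiver address
192.168.100.3 from the mails below, and it assumes x264enc, faac, mpegtsmux,
tsdemux and the gst-libav decoders avdec_h264/avdec_aac are available on
both hosts. Run the mirror-image pair on the other host for two-way chat.

Sender (machine A):

 gst-launch-1.0 \
   v4l2src ! video/x-raw,width=640,height=480 ! videoconvert \
     ! x264enc tune=zerolatency ! h264parse ! queue ! mpegtsmux name=muxer \
     ! rtpmp2tpay ! udpsink host=192.168.100.3 port=5000 \
   pulsesrc ! audioconvert ! audioresample ! faac ! aacparse ! queue ! muxer.

Receiver (machine B):

 gst-launch-1.0 \
   udpsrc port=5000 caps="application/x-rtp,media=video,encoding-name=MP2T,clock-rate=90000,payload=33" \
     ! rtpjitterbuffer ! rtpmp2tdepay ! tsdemux name=demux \
   demux. ! queue ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=true \
   demux. ! queue ! aacparse ! avdec_aac ! audioconvert ! audioresample ! autoaudiosink sync=true

And to check which input formats a given muxer accepts, inspect its sink pad
templates:

 gst-inspect-1.0 mpegtsmux

The capabilities listed under the SINK pad template (video/x-h264,
audio/mpeg and so on) are what the muxer will take.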


On Tue, Jun 23, 2015 at 1:21 PM, Lab work <480437 at gmail.com> wrote:

> Hi Peter
>
> I was also thinking that synchronizing audio and video would be a problem.
>
>
> On Tue, Jun 23, 2015 at 4:07 PM, Peter Maersk-Moller <pmaersk at gmail.com>
> wrote:
>
>> Hi Nicolas.
>>
>> Unless you mux (multiplex) audio and video into a single multiplexed
>> stream, use rtspserver with two RTP streams (the server telling clients
>> how audio and video are synchronized), or use a more complicated
>> rtpsession/rtpbin setup with RTCP and NTP, you will not have audio and
>> video synchronized. Why not use a multiplexer like mpegtsmux, have
>> audio/video encoded, muxed and sent in a single pipeline, and then in
>> another pipeline receive the multiplexed stream, demux it, decode it and
>> play it?
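>>
>> For reference, the "more complicated" rtpbin route pairs each RTP stream
>> with its own RTCP sockets; the sending side would look roughly like this
>> (an untested sketch, with host and ports as placeholders):
>>
>>  gst-launch-1.0 rtpbin name=rtpbin \
>>    v4l2src ! videoconvert ! x264enc tune=zerolatency ! rtph264pay ! rtpbin.send_rtp_sink_0 \
>>    rtpbin.send_rtp_src_0 ! udpsink host=192.168.100.3 port=5000 \
>>    rtpbin.send_rtcp_src_0 ! udpsink host=192.168.100.3 port=5001 sync=false async=false \
>>    udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0 \
>>    pulsesrc ! audioconvert ! speexenc ! rtpspeexpay ! rtpbin.send_rtp_sink_1 \
>>    rtpbin.send_rtp_src_1 ! udpsink host=192.168.100.3 port=5002 \
>>    rtpbin.send_rtcp_src_1 ! udpsink host=192.168.100.3 port=5003 sync=false async=false \
>>    udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1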
>>
>> regards
>> Peter
>>
>> On Tue, Jun 23, 2015 at 8:28 AM, Lab work <480437 at gmail.com> wrote:
>>
>>> Hi
>>>
>>> Thanks for the reply.
>>> Your answer left me a bit confused.
>>>
>>> > Finally, each stream should be sent to its own socket in RTP; you may
>>> > multiplex them, but this is a lot more work.
>>>
>>> Do you mean that I should not mux them and do something like:
>>>
>>> pipeline 1:
>>>
>>> gst-launch-1.0 v4l2src ! video/x-raw,width=640,height=480 !
>>> timeoverlay ! tee name="local" ! queue ! autovideosink local. ! queue !
>>> x264enc tune=zerolatency byte-stream=true bitrate=500 threads=1 !
>>> h264parse config-interval=1 ! rtph264pay ! udpsink host=192.168.100.3
>>> port=5000 udpsrc port=5000
>>> caps="application/x-rtp,payload=96,encoding-name=H264" ! queue !
>>> rtph264depay ! h264parse ! decodebin ! autovideosink
>>>
>>> pipeline 2:
>>>
>>> gst-launch-1.0 pulsesrc ! audioconvert ! audioresample ! speexenc !
>>> rtpspeexpay ! udpsink host=192.168.100.3 port=4444 udpsrc port=4444
>>> caps="application/x-rtp, media=(string)audio, clock-rate=(int)16000,
>>> encoding-name=(string)SPEEX, encoding-params=(string)1, payload=(int)110" !
>>> rtpjitterbuffer ! rtpspeexdepay ! speexdec ! audioconvert ! audioresample !
>>> autoaudiosink
>>>
>>>
>>> and running similar pipelines on the other PC? I have tried this type of
>>> approach: the video part was working, but the audio part was creating a
>>> problem. When I run the second pipeline there is no audio at either end,
>>> while each works fine independently. Can you please explain the reason and
>>> help me out of this problem?
>>> Thank you.
>>>
>>> On Mon, Jun 22, 2015 at 7:32 PM, Nicolas Dufresne <
>>> nicolas.dufresne at collabora.com> wrote:
>>>
>>>> On Monday, June 22, 2015 at 17:00 +0530, Lab work wrote:
>>>> > gst-launch-1.0  oggmux name="muxer"  v4l2src ! video/x-raw,
>>>> > framerate=30/1, width=640, height=480 ! videoconvert ! x264enc !
>>>> > multiqueue ! muxer.  videotestsrc ! video/x-raw, framerate=30/1,
>>>> > width=640, height=480 ! videoconvert ! x264enc ! multiqueue ! muxer.
>>>> > autoaudiosrc ! audioconvert ! speexenc ! queue ! muxer.  udpsink
>>>> > host=127.0.0.1 port=5000
>>>>
>>>> So you want to stream in RTP over UDP. First, you need to drop this
>>>> unlinked oggmux: OGG is not RTP, and it also does not work with variable
>>>> framerate (which most Logitech cameras produce). Second, pick a preset
>>>> on x264enc that is suited for live streaming (like tune=zerolatency).
>>>> Finally, each stream should be sent to its own socket in RTP; you may
>>>> multiplex them, but this is a lot more work.
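>>>>
>>>> For example, dropping the muxer and sending each encoded stream to its
>>>> own port could look roughly like this (an untested sketch reusing your
>>>> elements; the host and ports are placeholders):
>>>>
>>>>  gst-launch-1.0 \
>>>>    v4l2src ! video/x-raw,framerate=30/1,width=640,height=480 ! \
>>>>      videoconvert ! x264enc tune=zerolatency ! rtph264pay ! \
>>>>      udpsink host=127.0.0.1 port=5000 \
>>>>    autoaudiosrc ! audioconvert ! speexenc ! rtpspeexpay ! \
>>>>      udpsink host=127.0.0.1 port=5001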
>>>>
>>>> Nicolas