Audio/voice conference

Honckiewicz, Filip BIS Filip.Honckiewicz at fs.utc.com
Wed Mar 26 03:46:47 PDT 2014


Hi,

I would like to ask for help with creating a multi-way voice conference. I have created some GStreamer test pipelines with gst-launch:
1. Two-way voice between two machines. I used two pipelines (over RTP) on each client: one for sending from alsasrc to udpsink, and one for receiving from udpsrc to alsasink.
2. Broadcasting from one client to many other clients (over RTP). This is simply sending from alsasrc to a udpsink with a broadcast address.
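For reference, the two test pipelines described above can be sketched roughly as follows (the host address and port are placeholders, and the exact element chain is my assumption based on the PCMA codec mentioned later; the rtppcmapay/rtppcmadepay and alawenc/alawdec elements are from gst-plugins-good):

```shell
# Sender: capture from ALSA, encode as A-law (PCMA), payload as RTP, send over UDP
gst-launch-1.0 alsasrc ! audioconvert ! audioresample ! alawenc ! rtppcmapay \
    ! udpsink host=192.168.1.20 port=5000

# Receiver: receive RTP/PCMA over UDP, depayload, decode, play out via ALSA
gst-launch-1.0 udpsrc port=5000 \
    caps="application/x-rtp,media=audio,encoding-name=PCMA,clock-rate=8000,payload=8" \
    ! rtppcmadepay ! alawdec ! audioconvert ! alsasink
```

For broadcasting, the sender is the same except that udpsink points at a broadcast or multicast address.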

The next step is to create a multi-way pipeline (or pipelines) in which every client can speak to and hear every other client.

But the main problem is how to receive audio from many clients and, secondly, how to mix the streams together. Should one client act as a server that takes the incoming audio streams, mixes them, and broadcasts the result? I suppose having every client broadcast would be problematic because of echo, etc., so that does not seem like a good idea. I also think that creating a separate pipeline per client (where n could be 20 or 30) would kill the devices (ARM-based), because, as I observed, a single receive pipeline launched with gst-launch takes up to 10% CPU (with the PCMA codec). Would implementing this with the GStreamer libraries lower CPU usage compared with my gst-launch test pipelines?
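A minimal sketch of the server-side mixing idea, assuming two clients sending RTP/PCMA to ports 5000 and 5001 and a multicast address for the mixed output (all addresses, ports, and the choice of the liveadder element from gst-plugins-bad are my assumptions, not something from the original setup):

```shell
# Mixing server: receive two RTP/PCMA streams, mix them, re-encode, send out
gst-launch-1.0 \
    liveadder name=mix ! audioconvert ! alawenc ! rtppcmapay \
        ! udpsink host=239.0.0.1 port=5004 \
    udpsrc port=5000 \
        caps="application/x-rtp,media=audio,encoding-name=PCMA,clock-rate=8000,payload=8" \
        ! rtppcmadepay ! alawdec ! audioconvert ! mix. \
    udpsrc port=5001 \
        caps="application/x-rtp,media=audio,encoding-name=PCMA,clock-rate=8000,payload=8" \
        ! rtppcmadepay ! alawdec ! audioconvert ! mix.
```

One caveat with a single shared mix: each client would hear its own voice echoed back, so in practice each client needs a "mix-minus" (the sum of everyone except itself), which means n separate mixes on the server; this is part of what Farstream/rtpbin-based solutions manage for you.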

Or should I stop thinking about using GStreamer directly and use Farstream instead, as Jan Schmidt suggested in the "GStreamer for massive real-time voice chat" discussion?

Cheers!
Filip
