[gst-devel] [Sofia-sip-devel] creating client with own media

Rob Taylor robtaylor at floopily.org
Tue Aug 8 14:26:31 CEST 2006


CC'ing gstreamer-devel as I expect there will be interested parties there.

Johannes Eickhold wrote:
> On Fri, 2006-08-04 at 20:43 +0400, Sergey Vointsev wrote:
>> Good day!
> 
> Hi Sergey.

<snip>

>> And here comes my question: as I understand it, everything dealing
>> with sound compression or input/output is controlled by the GStreamer
>> library. But who is in charge of RTP?
> 
> On the Nokia 770, the GStreamer elements from gst-plugins-farsight are
> used for RTP. For other platforms, have a look at the README file that
> comes with the sofsip-cli source tarball, or is available directly from
> the darcs repository here:
> http://sofia-sip.org/cgi-bin/darcs.cgi/sofsip-cli/README?c=annotate
> It has sections about the different media implementations available
> in sofsip-cli.
> 
> A media implementation builds a pipeline of GStreamer elements that
> involves two kinds of element: on the one hand, elements that provide
> the audio source and sink and the codec to be used; on the other hand,
> elements that pack chunks of the audio signal into RTP packets and
> send/receive them over the net. Each user agent has to build two such
> pipelines, one for receiving audio and one for sending.
> 
> As I understand it, Farsight is an abstraction API that, when used,
> completely hides the pipeline building and the elements to be used
> behind simple function calls. But this kind of media implementation is
> currently not available.

Farsight *is* the media implementation used on the Nokia 770. It works
very well, as you can see by making a voice call...

Farsight basically gives you ICE connectivity and does all the GStreamer
work for you: it sets up pipelines based on the payloader and codec
elements you have installed, and switches codecs on different payload
types. It is designed to be a place for everyone to contribute their
best-of-breed streaming handling.
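To make the send/receive split concrete, here is a rough gst-launch
sketch of the two pipelines a user agent would build. The element names
(alsasrc, mulawenc, rtppcmupay, etc.), host, and port here are purely
illustrative; the actual pipeline depends on which codec and payloader
elements you have installed:

```shell
# Hypothetical sending pipeline: capture audio, encode to mu-law,
# payload as RTP, push it over UDP. Host and port are placeholders.
gst-launch alsasrc ! audioconvert ! mulawenc ! rtppcmupay \
    ! udpsink host=192.0.2.1 port=5000

# Hypothetical receiving pipeline: pull RTP off the socket, depayload,
# decode, and play. In practice the caps on udpsrc would also need to
# carry the clock-rate and encoding-name for the payload type.
gst-launch udpsrc port=5000 caps="application/x-rtp" \
    ! rtppcmudepay ! mulawdec ! audioconvert ! alsasink
```

Farsight's point is that you should not have to write these by hand: it
assembles the equivalent pipelines for you from the installed elements.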

> A few days back robtaylor, who is one of Farsight's developers,
> explained on IRC that the rtpjitterbuffer has a really bad
> implementation in the sense of bad RTCP and jitter correction.

Eh? RTCP is fine using the rtpbin from gst-plugins-farsight, and this is
unrelated to jitter correction. The part that's missing at the moment
for RTCP is a way to report the data from RTCP up to the application, or
up/down the pipeline to codec elements.

The rtpjitterbuffer is certainly suboptimal at the moment, mainly
because we really need to start investigating running the pipelines in
pull mode from the source to the jitter buffer and in push mode from the
socket to the jitter buffer.

> He also stated that all good
> approaches that involve dynamic buffer adaptation to correct jitter in
> voice traffic over RTP are protected by patents.

I actually said that I've *heard* that *most* dynamic buffer adaptation
algorithms are patented...


> Kai Vehmanen pointed me to this ticket:
> http://projects.collabora.co.uk/trac/farsight/ticket/1
> It talks about the missing "push" strategy in the GStreamer elements
> that are currently involved in any voice-over-RTP pipeline.
> 
> Hope this helps and others can provide further details. I'm very
> interested in this topic, too.
> 

Anyone interested in using GStreamer to do RTP (especially
peer-to-peer), I'd invite to try out Farsight - users and developers can
all be reached on the mailing list [1]
or on IRC in #farsight on Freenode.

Farsight is currently good enough for commercial deployment (as shown by
Nokia), but there's a lot that could be done to improve it (at least in
theory). We're very open to people who'd like to help out :)

I should also add that for those who'd prefer paid support my company,
Collabora Ltd, provides full commercial support and development services
[2].

Thanks,
Rob Taylor

[1] http://sourceforge.net/mail/?group_id=120931
[2] http://collabora.co.uk
