[Nice] a Farsight 2 nice transmitter, a git repo and various related thoughts

Olivier Crête olivier.crete at collabora.co.uk
Tue Apr 29 11:25:39 PDT 2008

On Mon, 2008-04-28 at 12:19 +0300, Kai.Vehmanen at nokia.com wrote:
> Hi,
> On 28 April 2008  Olivier Crête wrote:
> >My unit test creates two pipelines (with two separate agents), 
> >each with one stream and two candidates. Most of the time, 
> >only one agent posts READY; the other seems not to post 
> >state changes... But sometimes it works. And once that agent 
> >has posted READY, the data flow works correctly.
> does the unit test of libnice still work? 


> >If I add more than one stream, I have no way to know which 
> >stream the "gathering-done" signal relates to; it should 
> >probably be a per-stream signal (emitted on a stream object, 
> >more on that later).
> Now this is on purpose. That signal is per-agent, not per-stream, 
> as in the IETF-ICE spec, there is a lot of shared state between 
> streams, and the library is written to adhere to this spec. If you 
> want independent streams, you should create multiple agent instances, 
> with one stream each. So libnice provides you with both options.

Ok, I hadn't read IETF-ICE in exact detail...

> >There were two structures, NiceCandidate and NiceCandidateDesc 
> >to represent candidates, I removed NiceCandidateDesc and use 
> >NiceCandidate everywhere (they were both exposed in the API anyway).
> I kept these for possible jingle-backward-compatibility issues 
> (for the two variants for adding candidates). So definitely ok to 
> remove if current API sufficient for jingle.

Ok, excellent. Actually, NiceCandidate is the more complete one (with
the username/password in the candidate).

> >The various functions that return lists of candidates do 
> >shallow copies, making it hard to use in a multi-threaded 
> >context. They should make real copies. Same for the function 
> >that gets the ufrag/password.
> Well, they were not supposed to be used in multi-threaded
> context. ;)

Well well.. now it is ;)

> >I made the default buffers larger (64k), because they are used 
> >to receive UDP data packets and we don't want them to be 
> >truncated. Ideally, we'd just allocate the memory on demand 
> >(that's what all the other gstreamer sources do anyway).
> Hmm, this is somewhat tricky. Dynamic allocs are bad (as those
> are on the hot-path most probably), but 64k is quite a bit of
> memory as well. But, but, let's keep the 64k for now and 
> optimize later (if needed). We could for instance let the client
> provide the buffer.

If you're using glib/gstreamer, you're going to have tons of dynamic
allocs. If your allocator is not fast enough, you've already lost. In
the current case, the client allocs the buffer, but it would be really
nice if the API provided a way to tell the client how much it should
allocate.

> >Youness also added some mutexes in the hope of making it thread-safe.
> >That said, I'm still hitting some strange problems, probably 
> >some kind of race that will have to be investigated further.
> The state machines are rather complicated and adding multi-threading
> will certainly create some races.. but the unit tests should catch
> at least some of the cases (and if not, we need to extend the cases).

Well, the unit tests are not multi-threaded; we should write some
threaded tests.

> >The API would be much nicer if there was a GObject for each 
> >stream (which already exist internally...) and all 
> >functions/properties/signals would be moved to the streams. 
> >Maybe except the timeout context.
> So if you want this type of interface to streams, you should create
> one NiceAgent per stream.

Ok, I'll change my code to create separate agents if separate properties are needed.

> >I'm also a bit lost on what data exactly is shared between 
> >streams in an agent? Shouldn't the result from one stream be 
> >used to construct other streams to the same destination? Or not?
> The spec [1] specifies quite a lot of dependencies:
>   - the state machine runs over all the streams (state changes once
>     one or all streams reach a certain status)
>   - frozen algorithm
>   - mapping to the offer-answer rounds
> Now personally I'd be happy if there were less dependencies, but 
> the spec is what it is and the library should allow implementing
> a compliant client. OTOH, I'm first to admit that the implementation 
> could be streamlined. The current implementation is a first iteration,
> implemented while the spec was still changing, so some/many things
> could be refactored. And especially as the spec is now frozen. But
> some of the complexities do originate from the spec...
> [1] http://tools.ietf.org/html/draft-ietf-mmusic-ice-19

I agree following the spec as closely as possible is most important.

Olivier Crête
olivier.crete at collabora.co.uk
Collabora Ltd