[Nice] a Farsight 2 nice transmitter, a git repo and various related thoughts

Kai.Vehmanen at nokia.com
Mon Apr 28 02:19:27 PDT 2008


Hi,

On 28 April 2008  Olivier Crête wrote:
>My unit test creates two pipelines (with two separate agents), 
>each with one stream and two candidates. Most of the time, 
>only one agent posts READY; the other one does not seem to post 
>state changes... But sometimes it works.. And once that agent 
>has posted READY, the data flow works correctly.

Does the libnice unit test still work?

>I also added a nice_agent_gather_candidates() method that 
>starts the candidate gathering. So add_stream() just creates 
>the stream structure and returns the stream_id. What happened 
>was that signals with the stream id were emitted before the 
>stream id was returned... Now it's two nicely separated 
>operations (which happen to nicely match the Farsight2 
>transmitter API..).

This sounds good.
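
So callers can now connect their signal handlers between the two calls,
e.g. something like this (just a sketch: the agent is assumed to be
created elsewhere, and the signal name and callback signature follow
the per-agent "gathering-done" form discussed in this thread, so the
details may differ from the actual tree):

    #include <nice/agent.h>

    static void
    gathering_done_cb (NiceAgent *agent, gpointer user_data)
    {
      g_debug ("candidate gathering finished for agent %p", (void *) agent);
    }

    static void
    setup_stream (NiceAgent *agent)
    {
      guint stream_id;

      /* 1. create the stream; only the id is returned, nothing is emitted yet */
      stream_id = nice_agent_add_stream (agent, 1 /* one component */);

      /* 2. connect handlers before any stream-related signal can fire */
      g_signal_connect (agent, "gathering-done",
                        G_CALLBACK (gathering_done_cb), NULL);

      /* 3. only now start gathering candidates for that stream */
      nice_agent_gather_candidates (agent, stream_id);
    }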

>If I add more than one stream, I have no way to know which 
>stream the "gathering-done" signal relates to; it should 
>probably be a per-stream signal (emitted on a stream object, 
>more on that later).

Now this is on purpose. That signal is per-agent, not per-stream, 
because in the IETF ICE spec there is a lot of shared state between 
streams, and the library is written to adhere to that spec. If you 
want independent streams, you should create multiple agent instances 
with one stream each. So libnice provides you with both options.
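
In code that would look roughly like the following (just a sketch;
create_agent() below is a placeholder for whatever constructor and
configuration you use):

    #define N_STREAMS 2

    /* One NiceAgent per independent stream: each agent then emits its
     * own gathering-done and state-change signals, so there is no
     * ambiguity about which stream an emission refers to. */
    static void
    create_independent_streams (NiceAgent *agents[N_STREAMS])
    {
      guint i;

      for (i = 0; i < N_STREAMS; i++)
        {
          agents[i] = create_agent ();          /* placeholder */
          nice_agent_add_stream (agents[i], 1); /* one stream each */
        }
    }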

>There were two structures, NiceCandidate and NiceCandidateDesc, 
>to represent candidates. I removed NiceCandidateDesc and use 
>NiceCandidate everywhere (they were both exposed in the API anyway).

I kept these two for possible Jingle backwards-compatibility issues 
(for the two variants of adding candidates). So it is definitely OK to 
remove one if the current API is sufficient for Jingle.

>The various functions that return lists of candidates do 
>shallow copies, making them hard to use in a multi-threaded 
>context. They should make real copies. Same for the function 
>that gets the ufrag/password.

Well, they were not supposed to be used in multi-threaded
context. ;)
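
If you need to pass the results to another thread in the meantime, a
deep copy on the caller side is easy enough; something like this (a
sketch only, assuming a per-candidate copy helper in the spirit of a
nice_candidate_copy(), which may not exist under that name yet):

    /* Deep-copy a list of NiceCandidate so the caller owns the memory
     * and can safely use it from another thread. */
    static GSList *
    candidate_list_deep_copy (const GSList *candidates)
    {
      GSList *copy = NULL;
      const GSList *i;

      for (i = candidates; i != NULL; i = i->next)
        copy = g_slist_prepend (copy, nice_candidate_copy (i->data));

      return g_slist_reverse (copy);
    }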

>I made the default buffer larger (64k), because it is used 
>to receive UDP data packets and we don't want them to be 
>truncated. Ideally, we'd just allocate the memory on demand 
>(that's what all the other GStreamer sources do anyway).

Hmm, this is somewhat tricky. Dynamic allocations are bad (as they
are most probably on the hot path), but 64k is quite a bit of
memory as well. But let's keep the 64k for now and 
optimize later (if needed). We could, for instance, let the client
provide the buffer.
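
To illustrate the trade-off (generic socket code, not libnice as such):
a UDP read into a too-small buffer silently truncates the datagram, so
the receive path needs a worst-case-sized buffer, and letting the
caller supply it avoids a per-packet allocation:

    #include <sys/types.h>
    #include <sys/socket.h>

    /* Receive one datagram into a buffer supplied by the caller.
     * With a 64k buffer no UDP datagram can be truncated; anything
     * smaller would drop the tail of large packets. */
    static ssize_t
    receive_datagram (int fd, void *buf, size_t buf_len)
    {
      return recvfrom (fd, buf, buf_len, 0, NULL, NULL);
    }

    /* Caller side, one buffer reused for every packet:
     *   static guint8 buf[65536];
     *   receive_datagram (fd, buf, sizeof (buf));
     */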

>Youness also added some mutexes in the hope of making it thread-safe.
>That said, I'm still hitting some strange problems, probably 
>some kind of race that will have to be investigated further.

The state machines are rather complicated and adding multi-threading
will certainly create some races.. but the unit tests should catch
at least some of the cases (and if not, we need to extend the cases).

>The API would be much nicer if there were a GObject for each 
>stream (which already exists internally...) and all 
>functions/properties/signals were moved to the streams. 
>Maybe except the timeout context.

So if you want this type of interface to streams, you should create
one NiceAgent per stream.

>I'm also a bit lost on what data exactly is shared between 
>streams in an agent? Shouldn't the result from one stream be 
>used to construct other streams to the same destination? Or not?

The spec [1] specifies quite a lot of dependencies:
  - the state machine runs over all the streams (state changes once
    one or all streams reach a certain status)
  - frozen algorithm
  - mapping to the offer-answer rounds

Now personally I'd be happy if there were fewer dependencies, but 
the spec is what it is and the library should allow implementing
a compliant client. OTOH, I'm the first to admit that the implementation 
could be streamlined. The current implementation is a first iteration,
implemented while the spec was still changing, so some/many things
could be refactored, especially now that the spec is frozen. But
some of the complexities do originate from the spec...

[1] http://tools.ietf.org/html/draft-ietf-mmusic-ice-19

>Also, if I have a multi-party conference. Should there be one 
>agent per peer? Or some other kind of arrangement?

Yep.
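
I.e. one NiceAgent per remote peer. For an N-party conference you could
keep them in a table, roughly like this (a sketch only; create_agent()
is again a placeholder for the actual constructor/configuration):

    /* Map peer name -> NiceAgent: one agent, with its own streams,
     * per remote peer in the conference. */
    static GHashTable *
    create_peer_agents (const gchar **peer_names, guint n_peers)
    {
      GHashTable *agents;
      guint i;

      agents = g_hash_table_new_full (g_str_hash, g_str_equal,
                                      g_free, g_object_unref);

      for (i = 0; i < n_peers; i++)
        {
          NiceAgent *agent = create_agent ();   /* placeholder */
          nice_agent_add_stream (agent, 1);
          g_hash_table_insert (agents, g_strdup (peer_names[i]), agent);
        }

      return agents;
    }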

br,
-- 
first.surname at nokia.com (Kai Vehmanen)

