[Nice] public API to get selected pair

Bryce Allen ballen at ci.uchicago.edu
Wed Jan 30 09:54:48 PST 2013


Comments inline.

On Wed, 23 Jan 2013 16:13:25 -0500
Youness Alaoui <youness.alaoui at collabora.co.uk> wrote:
> On 01/22/2013 06:24 PM, Bryce Allen wrote:
> > Thanks for the quick reply, comments inline.
> > 
> > On Tue, 22 Jan 2013 17:38:18 -0500
> > Olivier Crête <olivier.crete at collabora.com> wrote:
> >> Hello,
> >>
> >> On Tue, 2013-01-22 at 16:03 -0600, Bryce Allen wrote:
> >>> I notice that there is nice_agent_set_selected_pair, but not
> >>> get_selected_pair.
> >>
> >> You can listen to the "new-selected-pair" signal, it will tell you
> >> which candidates have been selected. But I agree it would be good
> >> to add such an API for completeness.
> > The signal gives you the foundations - it wasn't immediately
> > obvious to me that a foundation would uniquely identify a
> > candidate, but it does appear to be the case looking at the
> > internal code. So I just need to iterate over
> > nice_agent_get_local_candidates and
> > nice_agent_get_remote_candidates, doing strncmp on the foundations
> > until I find the match for each?
> > 
> Yes, as per the ICE specification, the foundation must uniquely
> identify each candidate. You should be able to use
> get_local_candidates and get_remote_candidates to find out which
> candidates have been selected. Make sure you actually call
> get_local_candidates and get_remote_candidates at that point rather
> than reusing previously received values, because both local and
> remote candidates can change dynamically during the connecting phase
> (addition of peer-reflexive candidates).
This was not obvious to me from reading the spec, I guess because
foundations can be the same across different components. I couldn't
find a clear statement that they have to be unique within a single
component, but it does seem to be implied by the rules:
http://tools.ietf.org/html/rfc5245#section-4.1.1.3
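
For reference, a minimal sketch of that approach in C, assuming the
"new-selected-pair" signal signature (stream id, component id, then the
two foundation strings) and eliding error handling; the helper name is
just illustrative:

#include <agent.h>
#include <string.h>

/* Illustrative helper: look up a candidate by its foundation string. */
static NiceCandidate *
find_by_foundation (GSList *cands, const gchar *foundation)
{
  GSList *i;
  for (i = cands; i != NULL; i = i->next) {
    NiceCandidate *c = i->data;
    if (strncmp (c->foundation, foundation,
                 NICE_CANDIDATE_MAX_FOUNDATION) == 0)
      return c;
  }
  return NULL;
}

static void
on_new_selected_pair (NiceAgent *agent, guint stream_id,
                      guint component_id, gchar *lfoundation,
                      gchar *rfoundation, gpointer user_data)
{
  /* Fetch fresh lists here: peer-reflexive candidates may have been
   * added after the initial gathering, as noted above. */
  GSList *locals = nice_agent_get_local_candidates (agent, stream_id,
                                                    component_id);
  GSList *remotes = nice_agent_get_remote_candidates (agent, stream_id,
                                                      component_id);
  NiceCandidate *local = find_by_foundation (locals, lfoundation);
  NiceCandidate *remote = find_by_foundation (remotes, rfoundation);

  /* ... copy local->base_addr and remote->addr somewhere ... */
  (void) local;
  (void) remote;

  /* The returned lists are copies and must be freed by the caller. */
  g_slist_free_full (locals, (GDestroyNotify) nice_candidate_free);
  g_slist_free_full (remotes, (GDestroyNotify) nice_candidate_free);
}

Connect it with g_signal_connect (agent, "new-selected-pair",
G_CALLBACK (on_new_selected_pair), NULL) before starting negotiation.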

> >>> If I can get the selected pair, I can destroy the NiceAgent and
> >>> immediately bind to the local base address and send to the
> >>> remote, before any intermediate firewall sessions time out. UDT
> >>> sends frequent keep-alives, so once the application takes over,
> >>> the bindings will be maintained.
> >>
> >> Are you aware that the libnice reliable mode is not TCP, but a
> >> custom TCP-over-UDP implementation (compatible with the one used
> >> by Google Talk)? It's definitely not compatible with UDT, which is
> >> a completely different protocol, so you need to be using libnice
> >> on both sides (or libjingle) to send and receive. Alternatively,
> >> you could try to put libnice below UDT, but I'm not sure how the
> >> UDT code works.
> >>
> >> Also, if you end up using a TURN relay, you need to send through
> >> libnice, as the packets are encapsulated.
> > Our current plan for relay is very application-specific, so we
> > would not be using TURN anyway. Using libnice reliable is
> > technically possible, just extra work since we already have a UDT
> > implementation. It's much simpler for us to use libnice to
> > negotiate the addresses, then stop the agent and hand control to
> > UDT.
> 
> It might seem simpler, but I expect you'll run into issues in the
> future. As Olivier said, that's not the way it's supposed to work.
> ICE is meant to be interactive: keep-alives and binding requests are
> sent every once in a while, and your implementation would need to be
> able to handle those. If you go through TURN, packets are
> encapsulated, and of course the reliable mode is TCP over UDP. If
> you're not using TURN (I'm not sure what you're using then; I'm
> curious how you can set up a relay with ICE that isn't TURN-based),
> and you're not using the reliable mode, and you make sure that you
> control both peers, that both stop using ICE, and that you take care
> of your own keep-alive mechanism, then yeah, it might work. Just know
> that it has never been tested before :)
Our use case is adding support to the Globus GridFTP
server (http://globus.org/toolkit/docs/latest-stable/gridftp/#gridftp)
for direct transfers between two computers behind NATs and stateful
firewalls. The plan for relay when ICE fails is to leverage existing
GridFTP server infrastructure, which already has mechanisms for
authenticating users and sending data between two network streams
instead of from network to disk. Basically we're not using ICE at that
point, just an application-specific mechanism.

Globus GridFTP already has a UDT mode that is reasonably well tested and
has some nice advantages over TCP. The new NAT traversal mechanism
doesn't necessarily have to interoperate with existing UDT servers, but
leveraging the existing code and testing work is a big bonus.

I've done a fair bit of testing with stopping an ICE agent and
replacing it with a UDT connection. The interval between packets that
maintain the firewall/NAT session is at most Ta plus the UDT setup
time, which seems to be fine in practice with the default Ta of 20 ms.
I think adding traversal to existing application protocols is an
interesting use case for ICE, even if it may not have been part of the
original design considerations. ICE is used to discover the endpoints
and punch holes if needed, and then the existing application protocol
handles maintaining the connection.
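
To make the hand-off concrete, here is a rough sketch under the same
assumptions (C, IPv4 only, error handling mostly elided); the function
name and the `saved_local` parameter, a copy of the selected local
candidate kept from the signal handler above, are hypothetical:

#include <agent.h>
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int
rebind_after_ice (NiceAgent *agent, NiceCandidate *saved_local)
{
  gchar ip[NICE_ADDRESS_STRING_LEN];
  guint port = nice_address_get_port (&saved_local->base_addr);

  /* Use the base address: for a server-reflexive candidate, addr is
   * the NAT mapping while base_addr is the local interface. */
  nice_address_to_string (&saved_local->base_addr, ip);

  /* Drop the agent first so libnice releases its UDP socket. */
  g_object_unref (agent);

  int fd = socket (AF_INET, SOCK_DGRAM, 0);
  struct sockaddr_in sin;
  memset (&sin, 0, sizeof sin);
  sin.sin_family = AF_INET;
  sin.sin_port = htons (port);
  if (inet_pton (AF_INET, ip, &sin.sin_addr) != 1 ||
      bind (fd, (struct sockaddr *) &sin, sizeof sin) < 0) {
    close (fd);
    return -1;
  }
  /* Hand fd to the UDT layer and start its keep-alives promptly,
   * before the NAT/firewall binding expires. */
  return fd;
}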

-Bryce