[libnice] Force an external address (srflx) candidate?

Stuart Marshall stuart at seelye.net
Thu Oct 22 03:17:27 UTC 2020


Lorenzo, are you saying that one can do these things with libnice by doing some of the work externally (e.g. determining the external address and the interfaces) and then passing the results to the right libnice methods? Does this entail calling libnice methods that are normally considered internal and not intended for external use?

Stuart


From: Lorenzo Miniero <lminiero at gmail.com>
Date: Tuesday, October 20, 2020 at 11:24 PM
To: Stuart Marshall <stuart at seelye.net>
Cc: Olivier Crête <olivier.crete at collabora.com>, Fabrice Bellet <fabrice at bellet.info>, Juan Navarro <juan.navarro at gmx.es>, nice <nice at lists.freedesktop.org>, "I'm at gmail.com" <I'm at gmail.com>
Subject: Re: [libnice] Force an external address (srflx) candidate?

Hi all,

I'm not sure any change is needed in libnice, actually, as it can already take care of those scenarios quite nicely.

If the main aim is avoiding STUN on cloud services like AWS, where the instance runs on a private address but is also uniquely associated with a public one, then all you need to do is advertise the public address in the candidate you trickle or put in the SDP. In fact, those cloud providers use what we call a 1-to-1 NAT mapping: the public port used in the NAT is always the same as the private one, and if the port is open in the firewall, packets addressed to the public port are automatically forwarded to the private one. This means that you don't need STUN to open a port and/or find it out: you just need to tell your peer about the public address, and everything will still work (connectivity checks will work just fine).

In Janus we let people configure which public address to use in that case, with the option of keeping the private ones as advertised candidates: which means we either always replace the private IP with the public one, or duplicate the candidates we advertise, so that in one the private address is replaced and in the other it isn't (it's sometimes useful to have the private address advertised too). Some cloud providers expose the public IP of the instance as an environment variable, which makes it easier to configure. You can also use libnice to do a STUN request at startup, which we do, but for other reasons. At any rate, this means that libnice as it is is perfectly capable of handling these use cases. Of course, if this 1-1 NAT behaviour is not happening, and you can expect different ports being used privately and publicly, then a STUN request will always be needed for each agent, or checks will just fail if you try to simply replace the private address with the public one (exactly because of the different public and private ports, and them being closed in the NAT until opened by a previous STUN request).
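A rough sketch of the candidate-rewriting idea (not the actual Janus code): take the host candidates libnice gathered, substitute the configured public address while keeping the same port, and advertise the result. Here send_candidate_to_peer() and public_ip are placeholders for your own signalling and configuration:

    #include <agent.h>   /* libnice, via pkg-config nice */

    /* Assumed application signalling hook, not part of libnice */
    extern void send_candidate_to_peer(const gchar *sdp);

    static void
    advertise_with_public_ip(NiceAgent *agent, guint stream_id, guint component_id,
                             const gchar *public_ip, gboolean keep_private)
    {
      /* The list contains copies, so rewriting them only affects what we advertise */
      GSList *cands = nice_agent_get_local_candidates(agent, stream_id, component_id);
      for (GSList *i = cands; i; i = i->next) {
        NiceCandidate *c = (NiceCandidate *)i->data;
        if (c->type != NICE_CANDIDATE_TYPE_HOST)
          continue;
        if (keep_private) {
          /* Optionally advertise the untouched private candidate as well */
          gchar *sdp = nice_agent_generate_local_candidate_sdp(agent, c);
          send_candidate_to_peer(sdp);
          g_free(sdp);
        }
        /* Same candidate, same port, but with the public address: with a
         * 1:1 NAT the external port is the same as the internal one */
        guint port = nice_address_get_port(&c->addr);
        if (nice_address_set_from_string(&c->addr, public_ip)) {
          nice_address_set_port(&c->addr, port);
          gchar *sdp = nice_agent_generate_local_candidate_sdp(agent, c);
          send_candidate_to_peer(sdp);
          g_free(sdp);
        }
      }
      g_slist_free_full(cands, (GDestroyNotify)nice_candidate_free);
    }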

Only binding to some interfaces, or skipping some (e.g., programmatically disabling IPv6), is also relatively easy, taking advantage of the libnice feature that allows you to manually choose which interfaces to use for gathering. You simply iterate over the available interfaces yourself, prune the ones you don't want, and pass the others to libnice, which will then stick to those. A bit of a manual (and, for interface iteration, system-specific) process, but not complex at all and quite flexible: you can check how we do it in Janus for an example.
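A minimal sketch of that manual step on POSIX systems (again not the actual Janus code): enumerate the interfaces with getifaddrs(), prune loopback and anything that isn't IPv4, and hand the survivors to nice_agent_add_local_address() before gathering:

    #include <sys/types.h>
    #include <ifaddrs.h>
    #include <net/if.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <agent.h>   /* libnice */

    static void
    restrict_gathering_to_ipv4(NiceAgent *agent)
    {
      struct ifaddrs *ifas = NULL, *ifa;
      if (getifaddrs(&ifas) != 0)
        return;
      for (ifa = ifas; ifa != NULL; ifa = ifa->ifa_next) {
        if (ifa->ifa_addr == NULL || !(ifa->ifa_flags & IFF_UP))
          continue;
        if (ifa->ifa_addr->sa_family != AF_INET)
          continue;                               /* e.g. skip IPv6 entirely */
        struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
        char host[INET_ADDRSTRLEN];
        if (inet_ntop(AF_INET, &sin->sin_addr, host, sizeof(host)) == NULL)
          continue;
        if (strcmp(host, "127.0.0.1") == 0)
          continue;                               /* skip loopback */
        NiceAddress addr;
        nice_address_init(&addr);
        if (nice_address_set_from_string(&addr, host))
          nice_agent_add_local_address(agent, &addr);   /* gather only on these */
      }
      freeifaddrs(ifas);
    }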

Hope this helps with the discussion,
Lorenzo




On Wed, 21 Oct 2020, 06:06 Stuart Marshall, <stuart at seelye.net> wrote:
That’s a good point: that the port can be remapped.

I’ve observed that NAT routers commonly use the same external port as the internal host computer. Router manufacturers certainly should randomize the external port, but I wonder what the actual numbers are. If libnice guessed that the router would use the same port, and if it remembered its external address, it might work a good amount of the time. If libnice had some degree of memory, it could even keep track of whether it seems to be behind a router that uses the same port externally as internally.

Another option would be to provide an API so that the host process could ask libnice to do the STUN request ahead of time. For example, as soon as a client app is launched it could ask libnice to allocate a port and make a STUN request, knowing that the user is probably going to make a call soon. This is rather app-dependent, but in many cases the app could correctly anticipate that the port allocation and STUN request will not be wasted.
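As a rough sketch of what I mean, using only the existing API and assuming the same agent (and therefore the same local ports) is later reused for the actual call; the STUN server here is just a placeholder:

    #include <agent.h>   /* libnice */

    static void
    on_gathering_done(NiceAgent *agent, guint stream_id, gpointer user_data)
    {
      /* The srflx candidate is already known here; the later call set-up
       * only needs to reuse this agent and its candidates */
      g_message("pre-gathering done for stream %u", stream_id);
    }

    NiceAgent *
    prewarm_ice_agent(GMainContext *ctx)
    {
      NiceAgent *agent = nice_agent_new(ctx, NICE_COMPATIBILITY_RFC5245);
      g_object_set(agent,
                   "stun-server", "stun.example.org",   /* placeholder */
                   "stun-server-port", 3478,
                   NULL);
      guint stream_id = nice_agent_add_stream(agent, 1);
      g_signal_connect(agent, "candidate-gathering-done",
                       G_CALLBACK(on_gathering_done), NULL);
      /* Port allocation and the STUN request happen now, not at call time */
      nice_agent_gather_candidates(agent, stream_id);
      return agent;
    }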

I like the idea of using UPnP if that can streamline port discovery too.


From: Olivier Crête <olivier.crete at collabora.com>
Date: Tuesday, October 20, 2020 at 12:29 PM
To: Stuart Marshall <stuart at seelye.net>, Fabrice Bellet <fabrice at bellet.info>, Juan Navarro <juan.navarro at gmx.es>, nice at lists.freedesktop.org <nice at lists.freedesktop.org>
Subject: Re: [libnice] Force an external address (srflx) candidate?
Hi,

The thing is that the external address might be the same, but on every new connection the port will be different. And we have no way of knowing what kind of mapping from internal to external port the router will choose. From what I understand, it is even recommended to router manufacturers that they randomize the external port to make it harder for attackers to guess the next port.

So even if we remembered the external address, it wouldn't help much.

What can help is to build libnice with UPnP support. This way, the external address can be retrieved over the LAN, and this is very quick. We could also implement NAT-PMP, which is what Apple routers use, but I don't know if that is common anymore.
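For what it's worth, enabling it is just a couple of properties on the agent (they only do something when libnice was actually built with gupnp-igd; names as in the NiceAgent documentation):

    #include <agent.h>   /* libnice built with gupnp-igd */

    static void
    enable_upnp(NiceAgent *agent)
    {
      g_object_set(agent,
                   "upnp", TRUE,          /* ask the IGD for the external mapping */
                   "upnp-timeout", 200,   /* ms to wait for the router's answer */
                   NULL);
    }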

Olivier

On Tue, 2020-10-20 at 17:42 +0000, Stuart Marshall wrote:
I like the idea of doing just one STUN request to avoid the many semi-duplicate candidates.

Another interesting thing to think about is that in most cases (99.9%-ish) the STUN query is going to return the same result as last time. In most cases the host computer has not moved networks and the external address has not changed.

What if libnice could remember the previous external address and lead with that as a candidate? Libnice could still do one (or more) STUN queries to check whether the external address has changed. But starting with the previously known external address could speed up connection a lot.

The challenge is “how to remember the previous external address”. Libnice lives in somebody’s process and on some random host computer. Libnice might be completely shut down in-between uses, even if the process keeps running.

What if libnice remembered the previous external address somewhere in process space? If the process shuts down, the knowledge is lost. But if the process keeps running (e.g. a server or a long-running browser) then the external address is remembered. Is there a cross-platform place to stash data? Environment variables might work. A persistent background thread might work.
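Purely as a hypothetical illustration of the "process space" idea, nothing libnice-specific: a mutex-protected cache that survives as long as the process does:

    #include <glib.h>

    static GMutex cache_lock;
    static gchar *cached_external_ip = NULL;   /* e.g. "198.51.100.7" */

    void
    remember_external_ip(const gchar *ip)
    {
      g_mutex_lock(&cache_lock);
      g_free(cached_external_ip);
      cached_external_ip = g_strdup(ip);
      g_mutex_unlock(&cache_lock);
    }

    /* Returns a copy the caller frees, or NULL if nothing was cached yet */
    gchar *
    recall_external_ip(void)
    {
      g_mutex_lock(&cache_lock);
      gchar *ip = g_strdup(cached_external_ip);
      g_mutex_unlock(&cache_lock);
      return ip;
    }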

The reason I suggest optimizations like this is customer experience. I see Chrome-to-GStreamer connections establish in less than a second. GStreamer-to-GStreamer often takes much more time, easily five seconds or more.

Stuart

From: nice <nice-bounces at lists.freedesktop.org>
Date: Monday, October 19, 2020 at 11:00 AM
To: Stuart Marshall <stuart at seelye.net>
Cc: Juan Navarro <juan.navarro at gmx.es>, nice at lists.freedesktop.org <nice at lists.freedesktop.org>
Subject: Re: [libnice] Force an external address (srflx) candidate?
Hi Stuart,

On 10/14/20 at 09:22pm, Stuart Marshall wrote:
> In contrast, the ICE candidates emitted by Chrome are stunningly few and precise. I understand that the ICE protocols and the libnice implementation were/are meant to be general case. But they miss obvious efficiencies that can be provided by additional external information. STUN servers facilitate some of that additional information, but introduce a dependency and more latency.
>
> My knowledge of libnice internals is not great, but I kind of wish we could
>
>   1.  Feed some particular IP candidate addresses to it,
>   2.  Tell it to skip a bunch of other candidate generation and testing

I can think of some optimisations that could help limit the number of
candidates, without weakening the versatility of the ICE method overall
(except in *rare* cases where the server running libnice uses source
routing, i.e. chooses the default route based on the source IP address):

We could use a single server-reflexive candidate and a single relay
candidate per stream/component.

Generally, there's no gain in sending a STUN request from each local
interface, because all packets will reach the same STUN server via the
same default route.

The consequence is that we obtain <N> distinct server-reflexive
candidates from <N> distinct source IP addresses. These server-reflexive
candidates are distinct because, even though their IP address is the
same (our public IP address), the port mapping is different for each.
The same applies to TURN relay candidates too (including unnecessary
resource reservations on the TURN servers, by the way).

To avoid that, we could for example:
  1. discard these redundant candidates when we discover them (when
  processing the discovery STUN response; see the sketch below);
  2. or, more radically, just send a single STUN and TURN discovery request.

In case 2, the choice of the local interface used as the base address
for this single STUN/TURN discovery request is normally not relevant,
because the routing table will hopefully make these packets go out by
the same default route again, whichever source interface they come
from.

To summarize, I think that sending a single STUN request from a single
network interface during the gathering phase to obtain our
server-reflexive address is normally a cheap operation (one RTT when the
STUN server is available); what is expensive from libnice's point of
view is dealing with many identical reflexive/relay candidates during
the connecting phase, because it creates many possibilities to be
tested. And the more possibilities we have to test, the more time it
takes to complete.

Best wishes,

--
Olivier Crête
olivier.crete at collabora.com

_______________________________________________
nice mailing list
nice at lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/nice