MBIM Interface in Bridge Mode

Sassy Natan sassyn at gmail.com
Wed Sep 30 09:28:00 UTC 2020


Dear Bjørn,

Thank you so much for your effort and the detailed information on how to
sort out this issue.
I really appreciate this!

Apologies for the late reply.
We had a holiday here in Israel, and with the COVID-19 situation (we're in
the middle of the second wave) I'm really losing my head.
The situation here in Israel is really bad compared to the rest of the
world :-(

Again, thank you so much! Your solution opened my mind to new things and
ideas.

However, I tried out your solution, and it doesn't work on my setup.
I'm using a clean Ubuntu 18.04 with VPP 18.07-release, without any luck.
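
For completeness, that version string is what the VPP CLI reports; I
checked it with something along these lines:

   vppctl show version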

I have recorded an asciinema of the entire process; maybe you can take a
quick look?

 https://asciinema.org/a/05VG1Xhl3kYOydE685wjphoqf
 https://asciinema.org/a/3sOWj8x5YBWnT5mNUgxiIHmUw


Here is also a quick figure of the topology:
[image: image.png]
<https://lists.freedesktop.org/archives/libmbim-devel/attachments/20200930/70490040/attachment-0001.png>

When you said "There are a gazillion other ways to achieve the same", were
you referring to the "two-way default routing", or to the entire process?

As I said, your solution is very creative, but even though I spent a long
time on this, I didn't manage to come up with an alternative.

I also spent a long time debugging your setup.
I ran tcpdump on all interfaces (including printing the Ethernet addresses
with "-e"), but I didn't see anything that would provide useful feedback here.


I would be very thankful if someone could drop a comment here.

Thank you.
Sassy







On Wed, Sep 23, 2020 at 4:03 PM Bjørn Mork <bjorn at mork.no> wrote:

> FWIW, I just had to try out how much it takes to connect a host-owned
> LTE netdev to VPP. That's actually pretty easy.  But you end up forwarding
> between the VPP host interface and the LTE netdev in Linux, which I
> guess is something you eventually might like to avoid.  I assume this is
> pretty easy to implement as a VPP plugin or similar.  It's not related
> to the actual functionality here anyway.
>
> My demo setup is a slight modification of the start of the VPP tutorial:
> https://fd.io/docs/vpp/v2005/gettingstarted/progressivevpp/interface.html
>
>
> What I did was - on the host:
>
> 1) Create veth pair according to tutorial instructions:
>
>    ip link add name vpp1out type veth peer name vpp1host
>
>
> 2) Create a new sub-interface for MBIM session ID 3:
>
>    ip link add wwan0.3 link wwan0 type vlan id 3
>
>
> 3) Create a new network namespace and put both these interfaces there:
>
>    ip netns add vpplte
>    ip link set vpp1host netns vpplte
>    ip link set wwan0.3 netns vpplte
>
> 4) Start a shell in the network namespace and execute the remaining host
>    commands there:
>
>    ip netns exec vpplte /bin/bash
>
>
> 5) Set all links up:
>
>    ip link set lo up
>    ip link set vpp1host up
>    ip link set wwan0.3 up
>
>
> 6) Enable proxy-arp and forwarding
>
>    echo 1 > /proc/sys/net/ipv4/conf/vpp1host/proxy_arp
>    echo 1 > /proc/sys/net/ipv4/conf/vpp1host/forwarding
>    echo 1 > /proc/sys/net/ipv4/conf/wwan0.3/forwarding
>
>
> 7) set up two-way default routing
>
>    ip route add default dev wwan0.3
>    ip route add default dev vpp1host table 3
>    ip rule add pref 1000 iif wwan0.3 lookup 3
>
> 8) Disable arp on the veth endpoint
>
>    ip link set dev vpp1host arp off
>
>
> 9) Connect the session, and note the assigned address
>
>    mbimcli -p -d /dev/cdc-wdm0 --connect=apn=telenor.smart,session-id=3,ip-type=ipv4v6
>
>
> This is the output I got, to be used in the VPP shell below:
>
> [/dev/cdc-wdm0] Successfully connected
>
> [/dev/cdc-wdm0] Connection status:
>               Session ID: '3'
>         Activation state: 'activated'
>         Voice call state: 'none'
>                  IP type: 'ipv4v6'
>             Context type: 'internet'
>            Network error: 'unknown'
>
> [/dev/cdc-wdm0] IPv4 configuration available: 'address, gateway, dns, mtu'
>      IP [0]: '10.169.198.6/30'
>     Gateway: '10.169.198.5'
>     DNS [0]: '193.213.112.4'
>     DNS [1]: '130.67.15.198'
>         MTU: '1500'
>
> [/dev/cdc-wdm0] IPv6 configuration available: 'address, gateway, dns, mtu'
>      IP [0]: '2a02:2121:2c0:e913:392d:3e46:cf98:4ca3/64'
>     Gateway: '2a02:2121:2c0:e913:70a8:2a1c:dc62:2022'
>     DNS [0]: '2001:4600:4:fff::52'
>     DNS [1]: '2001:4600:4:1fff::52'
>         MTU: '1540'
>
>
>
> In the VPP shell:
>
> 10) Create host interface and set link up as instructed by the tutorial
>
>   create host-interface name vpp1out
>   set int state host-vpp1out up
>
>
> 11) Assign the IP address from the mbimcli command above:
>
>
>   set int ip address host-vpp1out 10.169.198.6/30
>
>
> 12) Set a default route (or whatever you want) via some fake gateway
>     address on the other end of that veth pair - the one suggested by
>     the modem is fine:
>
>   ip route add 0/0 via 10.169.198.5
>
>
> 13) ping an address on the other side of the LTE link:
>
>
>     vpp# ping 8.8.8.8
>     116 bytes from 8.8.8.8: icmp_seq=1 ttl=114 time=68.0229 ms
>     116 bytes from 8.8.8.8: icmp_seq=2 ttl=114 time=48.9930 ms
>     116 bytes from 8.8.8.8: icmp_seq=3 ttl=114 time=52.8561 ms
>     116 bytes from 8.8.8.8: icmp_seq=4 ttl=114 time=51.5395 ms
>     116 bytes from 8.8.8.8: icmp_seq=5 ttl=114 time=99.0668 ms
>
>     Statistics: 5 sent, 5 received, 0% packet loss
>
>
>
>
> This was a simple abuse of a dedicated network namespace in the Linux
> host, where it is easy to do two-way default routing without having to
> care about addressing and routes at all.  There are a gazillion other
> ways to achieve the same.
>
> There is no way to know that there is anything special about the
> host-vpp1out interface from the VPP shell.  It looks like any other host
> interface.  But you'll obviously have to do something on the host if you
> are going to run anything fancy over it.  But that's normally not the
> use case for an LTE connection anyway, I guess.
>
>
> vpp# show interface
>               Name               Idx    State  MTU (L3/IP4/IP6/MPLS)     Counter          Count
> host-vpp1out                      1      up          9000/0/0/0     rx packets                   135
>                                                                     rx bytes                   22334
>                                                                     tx packets                    61
>                                                                     tx bytes                    4874
>                                                                     drops                        146
>                                                                     ip4                           15
>                                                                     ip6                           32
> local0                            0     down          0/0/0/0
> vpp# show ip neighbor
>     Time                       IP                    Flags      Ethernet             Interface
>    4095.4097              10.169.198.5                 D       fe:cd:9c:57:a9:da    host-vpp1out
> vpp# show ip fib
> ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto ] epoch:0 flags:none locks:[adjacency:1, recursive-resolution:1, default-route:1, nat-hi:2, ]
> 0.0.0.0/0
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:1 buckets:1 uRPF:14 to:[60:5760]]
>     [0] [@12]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:12 to:[0:0] via:[60:5760]]
>           [0] [@5]: ipv4 via 10.169.198.5 host-vpp1out: mtu:9000 next:3 fecd9c57a9da02fe5a6a76810800
> 0.0.0.0/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:2 buckets:1 uRPF:1 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 10.169.198.4/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:10 buckets:1 uRPF:9 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 10.169.198.5/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:13 buckets:1 uRPF:12 to:[0:0] via:[60:5760]]
>     [0] [@5]: ipv4 via 10.169.198.5 host-vpp1out: mtu:9000 next:3 fecd9c57a9da02fe5a6a76810800
> 10.169.198.4/30
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:9 buckets:1 uRPF:8 to:[0:0]]
>     [0] [@4]: ipv4-glean: host-vpp1out: mtu:9000 next:1 ffffffffffff02fe5a6a76810806
> 10.169.198.6/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:12 buckets:1 uRPF:13 to:[15:1440]]
>     [0] [@2]: dpo-receive: 10.169.198.6 on host-vpp1out
> 10.169.198.7/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:11 buckets:1 uRPF:11 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 224.0.0.0/4
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:4 buckets:1 uRPF:3 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 240.0.0.0/4
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:3 buckets:1 uRPF:2 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
> 255.255.255.255/32
>   unicast-ip4-chain
>   [@0]: dpo-load-balance: [proto:ip4 index:5 buckets:1 uRPF:4 to:[0:0]]
>     [0] [@0]: dpo-drop ip4
>
>
>
> And the Linux host view:
>
> root@miraculix:/tmp# ip link
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 10: vpp1host@if11: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>     link/ether fe:cd:9c:57:a9:da brd ff:ff:ff:ff:ff:ff link-netnsid 0
> 12: wwan0.3@if3: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
>     link/ether 42:0a:0d:ab:b4:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
> root@miraculix:/tmp# ip addr
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 10: vpp1host@if11: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether fe:cd:9c:57:a9:da brd ff:ff:ff:ff:ff:ff link-netnsid 0
>     inet6 fe80::fccd:9cff:fe57:a9da/64 scope link
>        valid_lft forever preferred_lft forever
> 12: wwan0.3@if3: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether 42:0a:0d:ab:b4:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
>     inet6 2a02:2121:2c0:e913:400a:dff:feab:b4f5/64 scope global dynamic mngtmpaddr
>        valid_lft forever preferred_lft forever
>     inet6 fe80::400a:dff:feab:b4f5/64 scope link
>        valid_lft forever preferred_lft forever
> root@miraculix:/tmp# ip route
> default dev wwan0.3 scope link
> root@miraculix:/tmp# ip route show table 3
> default dev vpp1host scope link
> root@miraculix:/tmp# ip rule
> 0:      from all lookup local
> 1000:   from all iif wwan0.3 lookup 3
> 32766:  from all lookup main
> 32767:  from all lookup default
>
>
>
>
>
>
> Bjørn
>


-- 
Regards,

Sassy Natan
972-(0)54-2203702