MTU issues.

Bjørn Mork bjorn at mork.no
Fri Feb 7 09:15:46 UTC 2020


Daniele Palmas <dnlplm at gmail.com> writes:
> On Thu, Feb 6, 2020 at 12:36 Paul Gildea <gildeap at tcd.ie> wrote:
>>
>> +diff -Naur linux-4.14.73/drivers/net/usb/qmi_wwan.c linux-4.14.73-rx_size_fix/drivers/net/usb/qmi_wwan.c
>> +--- linux-4.14.73/drivers/net/usb/qmi_wwan.c 2018-09-29 11:06:07.000000000 +0100
>> ++++ linux-4.14.73-rx_size_fix/drivers/net/usb/qmi_wwan.c 2020-01-31 18:05:07.709008785 +0000
>> +@@ -740,6 +740,14 @@
>> + }
>> + dev->net->netdev_ops = &qmi_wwan_netdev_ops;
>> + dev->net->sysfs_groups[0] = &qmi_wwan_sysfs_attr_group;
>> ++
>> ++ /* LTE networks don't always respect their own MTU on receive side;
>> ++  * e.g. AT&T pushes 1430 MTU but still allows 1500 byte packets from
>> ++  * far-end network. Make receive buffer large enough to accommodate
>> ++  * them, and add four bytes so MTU does not equal MRU on network
>> ++  * with 1500 MTU, otherwise usbnet_change_mtu() will change both.
>> ++  * This is a sufficient max receive buffer, as over 1500 MTU
>> ++  * USB driver issues are not seen.
>> ++  */
>> ++ dev->rx_urb_size = ETH_DATA_LEN + 4;
>> + err:
>> + return status;
>> + }
>>
>
> could it make sense to have rx_urb_size configurable from userspace
> (e.g. sysfs file)?
>
> This is useful also when changing downlink maximum packet size with
> QMI_WDA_SET_DATA_FORMAT and is required for getting high-cat modems
> maximum throughput.

I am not sure I like the idea of yet another magic knob for userspace.
It would be better if we could make this work automatically.

I just had the pleasure of finally trying out the muxing feature.  And I
must say that it is hard enough to configure as it is...  I probably
wouldn't have been able to make it work without the recipes made by you
and others ;-)  But I did notice the issue you are pointing to,
realizing that we hadn't put much thought into this.

The challenge wrt an automated solution is obviously that the driver
doesn't know anything about QMI_WDA_SET_DATA_FORMAT. That design was not
such a good idea after all IMHO.  It would have been better to make the
driver do QMI proxying, allowing it to intercept the messages it has to
be aware of. But I don't know if that is fixable anymore.

But the driver does have some idea about QMI_WDA_SET_DATA_FORMAT, based
on QMI_WWAN_FLAG_MUX.  If muxing then it could/should assume that we can
receive much larger frames than the MTU.  If we change dev->rx_urb_size
into something reasonably large when muxing is enabled, then that might
do the trick?
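To illustrate what I mean, here is a minimal stand-alone sketch of that heuristic. The constants are mocks: ETH_DATA_LEN matches <linux/if_ether.h>, but the QMI_WWAN_FLAG_MUX value and the 16384 "reasonably large" buffer size are just illustrative assumptions, not taken from the driver:

```c
#include <stddef.h>

#define ETH_DATA_LEN 1500            /* as in <linux/if_ether.h> */
#define QMI_WWAN_FLAG_MUX (1 << 0)   /* flag value assumed for illustration */
#define MUX_RX_URB_SIZE 16384        /* assumed "reasonably large" MRU */

/* Pick the receive URB size: with muxing enabled, assume aggregated
 * frames can be much larger than the MTU; otherwise use the MTU plus
 * four bytes so that MTU does not equal MRU. */
static size_t pick_rx_urb_size(unsigned long flags)
{
	if (flags & QMI_WWAN_FLAG_MUX)
		return MUX_RX_URB_SIZE;
	return ETH_DATA_LEN + 4;
}
```

The real change would of course set dev->rx_urb_size at bind time (or when QMI_WWAN_FLAG_MUX is toggled), but the decision logic would look like the above.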

And if we want to make it configurable (maybe we still do?) then I
believe we already have the perfect knob:  Why not make the MTU of the
main netdev reflect the muxing dev->rx_urb_size?  This is almost logical
to me.

I.e. something like this:

wwan0     Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:16384  Metric:1
          RX packets:17816 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14372 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1241217 (1.1 MiB)  TX bytes:829676 (810.2 KiB)

qmimux0   Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

qmimux1   Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)


userspace could then change the mux MRU by simply doing

 ip link set dev wwan0 mtu 32768


Does that make sense?  Note that this wouldn't change the transmitted
size as long as we don't do any aggregation.  It would still be limited
to the largest qmimuxX mtu + ethernet header and qmux header.
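Roughly, the main netdev's .ndo_change_mtu would then behave like this sketch. The struct is a mock with just the two relevant fields, and the MTU bounds are illustrative, not the driver's actual limits:

```c
#include <stddef.h>

/* Mock of the relevant usbnet fields; a stand-alone illustration,
 * not actual driver code. */
struct mock_usbnet {
	int mtu;             /* main netdev MTU */
	size_t rx_urb_size;  /* receive buffer size (MRU) */
};

/* Sketch of the proposed behaviour: setting the MTU on the main wwan
 * netdev simply becomes the mux MRU. Bounds chosen for illustration. */
static int wwan_main_change_mtu(struct mock_usbnet *dev, int new_mtu)
{
	if (new_mtu < 1500 || new_mtu > 65535)
		return -1;   /* stand-in for -EINVAL */
	dev->mtu = new_mtu;
	dev->rx_urb_size = (size_t)new_mtu;
	return 0;
}
```

So "ip link set dev wwan0 mtu 32768" would end up setting rx_urb_size to 32768, while the per-session qmimuxX MTUs stay independent.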




Bjørn


