how to keep eth link down across reboots ?

lejeczek peljasz at yahoo.co.uk
Thu Dec 14 20:09:57 UTC 2023



On 14/12/2023 16:19, lejeczek wrote:
>
>
> On 14/12/2023 12:17, Íñigo Huguet wrote:
>> On Thu, Dec 14, 2023 at 12:01 PM lejeczek 
>> <peljasz at yahoo.co.uk> wrote:
>>> Thanks a lot! That seems more practical.
>>> There is one aspect I did not think of at first - will NM
>>> this way (or any other way) be able to detect physical
>>> link/cable removal (naturally while the device+profile/iface
>>> is up/on)?
>>> BTW, what would be a good way to watch such a link (for such
>>> cable removal) outside of NM (but perhaps with NM's help)?
>>>
>>> many thanks, L.
>>>
>> NM can show you the carrier state of any device, even
>> unmanaged devices, I think:
>> nmcli -g WIRED-PROPERTIES.CARRIER device show eth0
>>
>> If you have a profile on the interface and 
>> autoconnect=yes, it will
>> react to carrier up/down events, activating or 
>> deactivating that
>> profile (this won't happen if you configure 
>> ignore-carrier in the
>> .conf files, but for ethernet, by default carrier events are
>> considered).
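
For reference, that ignore-carrier option is typically set from a drop-in
under /etc/NetworkManager/conf.d/; a minimal sketch, assuming an
illustrative file name and the interface name "gateway" used later in this
thread (see man NetworkManager.conf for the authoritative syntax):

# /etc/NetworkManager/conf.d/90-ignore-carrier.conf  (illustrative name)
[main]
# takes a list of device specs; "gateway" is just the example iface here
ignore-carrier=interface-name:gateway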
>>
>> Also, with some python scripting you could listen to the
>> notify::carrier signal that is sent via D-Bus. Ask here
>> for an example if you intend to go this route.
>>
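
For the notify::carrier suggestion above, a minimal sketch using libnm
through GObject Introspection rather than raw D-Bus - it assumes
python3-gobject and libnm's introspection data are installed, a wired
device, and it borrows the interface name "gateway" from later in this
thread purely as an example:

#!/usr/bin/env python3
# Minimal sketch: print a line whenever the physical link (carrier) of one
# wired interface changes. Assumes python3-gobject and libnm's GIR data.
import gi
gi.require_version("NM", "1.0")
from gi.repository import NM, GLib

IFACE = "gateway"  # illustrative; put the interface you want to watch here

def on_carrier_changed(device, _pspec):
    state = "up" if device.get_carrier() else "down"
    print(f"{device.get_iface()}: carrier {state}")

client = NM.Client.new(None)
device = client.get_device_by_iface(IFACE)
if device is None:
    raise SystemExit(f"no such device: {IFACE}")

# Wired devices emit notify::carrier when the cable is plugged/unplugged.
device.connect("notify::carrier", on_carrier_changed)
on_carrier_changed(device, None)  # report the current state once at start

GLib.MainLoop().run()

Run it and plug/unplug the cable; each carrier change should print one line.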
> Thanks, before I go further down that route I have a
> "finding" I'd like to ask about.
>
> I have an ISP box which connects to more than one PC -
> hence all this "tampering" here.
> All PCs are physically linked up to that ISP box and I was
> hoping this:
>
> [Unit]
> Description=Power off/on gateway / GATEWAY physical link
>
> [Service]
> Type=oneshot
> RemainAfterExit=yes
> ExecStartPre=/sbin/ip link set up dev gateway
> ExecStart=/usr/bin/bash -c "/bin/nmcli d set gateway managed yes && /bin/nmcli c u GATEWAY"
> ExecStop=/usr/bin/bash -c "/bin/nmcli c d GATEWAY && /bin/nmcli d set gateway managed no"
> ExecStopPost=/sbin/ip link set down dev gateway
> SuccessExitStatus=0
> #RestartPreventExitStatus=18
> Restart=on-failure
> TimeoutSec=300
> RestartSec=9s
>
> [Install]
> WantedBy=multi-user.target
>
> will take care of the physical iface port, which... it
> does, so it seems, but...
> With that, when the service is stopped/started between PCs
> so that only one PC has the service up at a given time,
> obviously, then...
> the other PCs (including the one which previously was the
> gateway) cannot NAT out or in through the (new) gateway-PC.
>
> Before I go and play with everything I'm wondering -
> on a PC which was the gateway & downed the iface link...
> is there something that should happen there but does not?
> It looks like a routing issue to me, with "Destination Host
> Unreachable", but flushing the route cache does not help.
> An example on an ex-gateway:
> -> $ ip ro
> default via 10.1.1.254 dev nm-bridge1011 proto static metric 111
> ...
> -> $ ip ro get 212.77.98.9
> 212.77.98.9 dev nm-bridge1011 src 10.1.1.101 uid 0
>     cache
> ... ping it
> From swir.direct (10.1.1.100) icmp_seq=72 Destination Host Unreachable
> ...
> -> $ ip ro add 212.77.98.9 via 10.1.1.254
> ...
> 64 bytes from 212.77.98.9 (212.77.98.9): icmp_seq=2 ttl=59 time=15.6 ms
> but it is sluggish
>
> That is pretty new & weird to me - is that "service" of
> mine and/or NM not doing it all the right way?
>
Right, a many-things-in-one-go approach is often just too
many things - yet another tool had changed the iface profile,
and the more tools, the more of that...
So cancel/ignore the above - it all works as I hoped.
thanks,

