journalctl -fu

I rebooted my laptop today and realised that Docker wasn't running… It was running before the reboot and I hadn't upgraded anything related to Docker (or so I thought).

$ docker ps -a
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
$

Let’s check status and start if needed:

root@athens:/var/log# service docker status
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-08-21 08:34:03 BST; 7min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 12015 (code=exited, status=1/FAILURE)
Aug 21 08:34:03 athens systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Aug 21 08:34:03 athens systemd[1]: Stopped Docker Application Container Engine.
Aug 21 08:34:03 athens systemd[1]: docker.service: Start request repeated too quickly.
Aug 21 08:34:03 athens systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 21 08:34:03 athens systemd[1]: Failed to start Docker Application Container Engine.
Aug 21 08:34:42 athens systemd[1]: docker.service: Start request repeated too quickly.
Aug 21 08:34:42 athens systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 21 08:34:42 athens systemd[1]: Failed to start Docker Application Container Engine.
root@athens:/var/log#
root@athens:/var/log#
root@athens:/var/log# service docker start
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xe" for details.
root@athens:/var/log# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-08-21 08:41:20 BST; 5s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Process: 35305 ExecStart=/usr/sbin/dockerd -H fd:// $DOCKER_OPTS (code=exited, status=1/FAILURE)
Main PID: 35305 (code=exited, status=1/FAILURE)
Aug 21 08:41:19 athens systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 21 08:41:19 athens systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 21 08:41:19 athens systemd[1]: Failed to start Docker Application Container Engine.
Aug 21 08:41:20 athens systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Aug 21 08:41:20 athens systemd[1]: Stopped Docker Application Container Engine.
Aug 21 08:41:20 athens systemd[1]: docker.service: Start request repeated too quickly.
Aug 21 08:41:20 athens systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 21 08:41:20 athens systemd[1]: Failed to start Docker Application Container Engine.
root@athens:/var/log#

OK, so not much info… let's check the recommended details:

root@athens:/var/log# journalctl -xe
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit docker.socket has begun execution.
░░
░░ The job identifier is 4236.
Aug 21 08:41:20 athens systemd[1]: Listening on Docker Socket for the API.
░░ Subject: A start job for unit docker.socket has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit docker.socket has finished successfully.
░░
░░ The job identifier is 4236.
Aug 21 08:41:20 athens systemd[1]: docker.service: Start request repeated too quickly.
Aug 21 08:41:20 athens systemd[1]: docker.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit docker.service has entered the 'failed' state with result 'exit-code'.
Aug 21 08:41:20 athens systemd[1]: Failed to start Docker Application Container Engine.
░░ Subject: A start job for unit docker.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit docker.service has finished with a failure.
░░
░░ The job identifier is 4113 and the job result is failed.
Aug 21 08:41:20 athens systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit docker.socket has entered the 'failed' state with result 'service-start-limit-hit'.
root@athens:/var/log# systemctl status docker.service log
Unit log.service could not be found.
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-08-21 08:41:20 BST; 1min 2s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Process: 35305 ExecStart=/usr/sbin/dockerd -H fd:// $DOCKER_OPTS (code=exited, status=1/FAILURE)
Main PID: 35305 (code=exited, status=1/FAILURE)
Aug 21 08:41:19 athens systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 21 08:41:19 athens systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 21 08:41:19 athens systemd[1]: Failed to start Docker Application Container Engine.
Aug 21 08:41:20 athens systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Aug 21 08:41:20 athens systemd[1]: Stopped Docker Application Container Engine.
Aug 21 08:41:20 athens systemd[1]: docker.service: Start request repeated too quickly.
Aug 21 08:41:20 athens systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 21 08:41:20 athens systemd[1]: Failed to start Docker Application Container Engine.
root@athens:/var/log#

So “journalctl -xe” and “systemctl status docker.service log” gave nothing useful….

So I searched for "docker.socket: Failed with result 'service-start-limit-hit'" as it was the message that looked the most suspicious. I landed here and tried a command I didn't know to get more logs: "journalctl -fu docker"

root@athens:/var/log# journalctl -fu docker
-- Logs begin at Sun 2020-02-02 21:12:23 GMT. --
Aug 21 08:42:41 athens dockerd[35469]: proto: duplicate proto type registered: io.containerd.cgroups.v1.RdmaStat
Aug 21 08:42:41 athens dockerd[35469]: proto: duplicate proto type registered: io.containerd.cgroups.v1.RdmaEntry
Aug 21 08:42:41 athens systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 21 08:42:41 athens systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 21 08:42:41 athens systemd[1]: Failed to start Docker Application Container Engine.
Aug 21 08:42:41 athens systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Aug 21 08:42:41 athens systemd[1]: Stopped Docker Application Container Engine.
Aug 21 08:42:41 athens systemd[1]: docker.service: Start request repeated too quickly.
Aug 21 08:42:41 athens systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 21 08:42:41 athens systemd[1]: Failed to start Docker Application Container Engine.
Aug 21 08:44:32 athens systemd[1]: Starting Docker Application Container Engine…
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.Metrics
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.HugetlbStat
Aug 21 08:44:32 athens dockerd[35538]: unable to configure the Docker daemon with file /etc/docker/daemon.json: invalid character '"' after object key:value pair
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.PidsStat
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.CPUStat
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.CPUUsage
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.Throttle
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.MemoryStat
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.MemoryEntry
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.BlkIOStat
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.BlkIOEntry
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.RdmaStat
Aug 21 08:44:32 athens dockerd[35538]: proto: duplicate proto type registered: io.containerd.cgroups.v1.RdmaEntry
Aug 21 08:44:32 athens systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Aug 21 08:44:32 athens systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 21 08:44:32 athens systemd[1]: Failed to start Docker Application Container Engine.
Aug 21 08:44:32 athens systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Aug 21 08:44:32 athens systemd[1]: Stopped Docker Application Container Engine.

And now, yes, I could see the Docker logs properly… and found the culprit and fixed it. I am pretty sure that the last time I played with "/etc/docker/daemon.json" I restarted Docker and it was fine…
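For reference, the culprit was the JSON syntax error in /etc/docker/daemon.json reported above ("invalid character '"' after object key:value pair"). A quick way to catch that kind of typo before restarting Docker is to parse the file, for example with a small Python check like the one below (or simply with "python3 -m json.tool /etc/docker/daemon.json"):

# validate_daemon_json.py - quick sanity check of the Docker daemon config
import json

with open('/etc/docker/daemon.json') as f:
    try:
        json.load(f)                      # raises if the JSON is malformed
        print('daemon.json is valid JSON')
    except json.JSONDecodeError as err:
        print(f'daemon.json is broken: {err}')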

Anyway, I learned a new command, "journalctl -fu SERVICE", to troubleshoot services.

Protobuf/gNMI

As usual, I am following Anton's blog, and now I want to follow his series about Protobuf/gNMI. All the merit and hard work belongs to the author; I am just copying and pasting. All his code related to this topic is in his GitHub repo.

The first time I heard about protobuf was in the context of telemetry from Arista LANZ (44.3.7).

Now it is my chance to get some knowledge about it. Protobuf is a data encoding format (like JSON) aimed mainly at speed; the major difference is that it is a binary protocol. We are going to use Protobuf to encode YANG/OpenConfig models, and the transport protocol is going to be gNMI.

Index

  • 0- Create python env
  • 1- Install protobuf
  • 2- Create and compile the protobuf file for the OpenConfig module openconfig-interfaces.yang.
  • 3- Create python script to write protobuf message based on the model compiled earlier
  • 4- Create python script to read that protobuf message
  • 5- Use gNMI: Create python script to get interface configuration from cEOS

0- Create Python Env

$ mkdir protobuf
$ cd protobuf
$ python3 -m virtualenv ENV
$ source ENV/bin/activate
$ python -m pip install grpcio
$ python -m pip install grpcio-tools
$ python -m pip install pyang

1- Install protobuf

For debian:

$ sudo aptitude install protobuf-compiler
$ protoc --version
libprotoc 3.12.3

2- Create and compile protobuf file

This is quite a difficult part. You need to install "pyang" for Python and clone the OpenConfig repo. Keep in mind that I have removed the "ro" entries manually below:

$ ls -ltr
total 11
-rw-r--r-- 1 tomas tomas 1240 Aug 19 18:37 README.md
-rw-r--r-- 1 tomas tomas 11358 Aug 19 18:37 LICENSE
drwxr-xr-x 3 tomas tomas 4 Aug 19 18:37 release
drwxr-xr-x 4 tomas tomas 12 Aug 19 18:37 doc
drwxr-xr-x 3 tomas tomas 4 Aug 19 18:37 third_party
$
$ pyang -f tree -p ./release/models/ ./release/models/interfaces/openconfig-interfaces.yang
module: openconfig-interfaces
  +--rw interfaces
     +--rw interface* [name]
        +--rw name                -> ../config/name
        +--rw config
        |  +--rw name?            string
        |  +--rw type             identityref
        |  +--rw mtu?             uint16
        |  +--rw loopback-mode?   boolean
        |  +--rw description?     string
        |  +--rw enabled?         boolean
        +--rw hold-time
        |  +--rw config
        |     +--rw up?     uint32
        |     +--rw down?   uint32
        +--rw subinterfaces
           +--rw subinterface* [index]
              +--rw index          -> ../config/index
              +--rw config
                 +--rw index?         uint32
                 +--rw description?   string
                 +--rw enabled?       boolean

So this is the YANG model that we want to transform into protobuf.

To be honest, if I have to match that output with the content of the file itself, I don't understand it.

As Anton mentions, you need to check the official protobuf guide and protobuf python guide to create the proto file for the interface YANG model. These two links explain the structure of our new protofile.

On one hand, I think I understand the process of converting YANG to Protobuf. But I should try something myself to be sure 🙂

The .proto code doesn't appear properly formatted in my blog, so you can see it in the fig above or on GitHub.

Compile:

$ protoc -I=. --python_out=. openconfig_interfaces.proto
$ ls -ltr | grep openconfig_interfaces
-rw-r--r-- 1 tomas tomas 1247 Aug 20 14:01 openconfig_interfaces.proto
-rw-r--r-- 1 tomas tomas 20935 Aug 20 14:03 openconfig_interfaces_pb2.py

3- Create python script to write protobuf

The script has a dict ("intend") that is used to populate the proto message. Once it is populated with the info, it is written to a file as a byte stream.
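The original script is in Anton's repo. Just to illustrate the idea, a minimal sketch of the write side could look like the code below; the top-level message name and the fields are assumptions based on the generated openconfig_interfaces_pb2 module, so adjust them to whatever your .proto actually defines.

# create_protobuf_sketch.py - illustrative only, not the original script
import sys
import openconfig_interfaces_pb2 as oc_if    # module generated by protoc above

msg = oc_if.openconfig_interfaces()          # hypothetical top-level message name
intf = msg.interfaces.interface.add()        # add an entry to the repeated field
intf.name = 'Ethernet1'
intf.config.name = 'Ethernet1'
intf.config.mtu = 1514
intf.config.description = 'ABC'
intf.config.enabled = True

with open(sys.argv[1], 'wb') as f:           # e.g. oc_if.bin
    f.write(msg.SerializeToString())         # write the message as a byte stream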

$ python create_protobuf.py oc_if.bin
$ file oc_if.bin
oc_if.bin: data

4- Create python script to read protobuf

This is based on the next blog entry of Anton’s series.

The script that reads the protobuf message is here.
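The idea is simple: read the byte stream back and parse it with the same generated classes. A minimal sketch (with the same assumptions about the generated message names as in the write sketch above):

# read_protobuf_sketch.py - illustrative only, not the original script
import sys
from google.protobuf.json_format import MessageToDict
import openconfig_interfaces_pb2 as oc_if

msg = oc_if.openconfig_interfaces()          # hypothetical top-level message name
with open(sys.argv[1], 'rb') as f:           # e.g. oc_if.bin
    msg.ParseFromString(f.read())            # parse the byte stream back

print(MessageToDict(msg, preserving_proto_field_name=True))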

$ python read_protobuf.py oc_if.bin
{'interfaces': {'interface': [{'name': 'Ethernet1', 'config': {'name': 'Ethernet1', 'type': 0, 'mtu': 1514, 'description': 'ABC', 'enabled': True, 'subinterfaces': {'subinterface': [{'index': 0, 'config': {'index': 0, 'description': 'DEF', 'enabled': True}}]}}}, {'name': 'Ethernet2', 'config': {'name': 'Ethernet2', 'type': 0, 'mtu': 1514, 'description': '123', 'enabled': True, 'subinterfaces': {'subinterface': [{'index': 0, 'config': {'index': 0, 'description': '456', 'enabled': True}}]}}}]}}
$

5- Use gNMI with cEOS

This part is based on the third blog of Anton's series.

The challenge here is how he found out what files to use.

$ ls -ltr gnmi/proto/gnmi/
total 62
-rw-r--r-- 1 tomas tomas 21907 Aug 20 15:10 gnmi.proto
-rw-r--r-- 1 tomas tomas 125222 Aug 20 15:10 gnmi.pb.go
-rw-r--r-- 1 tomas tomas 76293 Aug 20 15:10 gnmi_pb2.py
-rw-r--r-- 1 tomas tomas 4864 Aug 20 15:10 gnmi_pb2_grpc.py
$
$ ls -ltr gnmi/proto/gnmi_ext/
total 14
-rw-r--r-- 1 tomas tomas 2690 Aug 20 15:10 gnmi_ext.proto
-rw-r--r-- 1 tomas tomas 19013 Aug 20 15:10 gnmi_ext.pb.go
-rw-r--r-- 1 tomas tomas 10191 Aug 20 15:10 gnmi_ext_pb2.py
-rw-r--r-- 1 tomas tomas 83 Aug 20 15:10 gnmi_ext_pb2_grpc.py
$

I can see that the blog and GitHub don't match and I can't really follow. Based on that, I have created a script to get the interface config from one cEOS switch using the gNMI interface:

$ cat gnmi_get_if_config.py 
#!/usr/bin/env python

# Modules
import grpc
from bin.gnmi_pb2_grpc import *
from bin.gnmi_pb2 import *
import json
import pprint

# Own modules
from bin.PathGenerator import gnmi_path_generator

# Variables
path = {'inventory': 'inventory.json'}
info_to_collect = ['openconfig-interfaces:interfaces']


# User-defined functions
def json_to_dict(path):
    with open(path, 'r') as f:
        return json.loads(f.read())


# Body
if __name__ == '__main__':
    inventory = json_to_dict(path['inventory'])

    for td_entry in inventory['devices']:
        metadata = [('username', td_entry['username']), ('password', td_entry['password'])]

        channel = grpc.insecure_channel(f'{td_entry["ip_address"]}:{td_entry["port"]}', metadata)
        grpc.channel_ready_future(channel).result(timeout=5)

        stub = gNMIStub(channel)

        for itc_entry in info_to_collect:
            print(f'Getting data for {itc_entry} from {td_entry["hostname"]} over gNMI...\n')

            intent_path = gnmi_path_generator(itc_entry)
            print("gnmi_path:\n")
            print(intent_path)
            # type=0 is DataType ALL and encoding=4 is JSON_IETF in the gNMI spec
            gnmi_message_request = GetRequest(path=[intent_path], type=0, encoding=4)
            gnmi_message_response = stub.Get(gnmi_message_request, metadata=metadata)
            # the output of gnmi_response is JSON as a string of bytes
            x = gnmi_message_response.notification[0].update[0].val.json_ietf_val
            # decode the bytes as a string and then parse the JSON
            y = json.loads(x.decode('utf-8'))
            #import ipdb; ipdb.set_trace()
            # pretty-print the JSON
            pprint.pprint(y)
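The inventory file itself is not shown. Based on the keys the script reads (hostname, ip_address, port, username, password), it presumably looks something like this (the IP and credentials below are placeholders; the port matches the gNMI port configured on the switch):

{
  "devices": [
    {
      "hostname": "r01",
      "ip_address": "192.0.2.1",
      "port": "3333",
      "username": "admin",
      "password": "admin"
    }
  ]
}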

This is my cEOS config:

r01#show management api gnmi
Enabled: Yes
Server: running on port 3333, in default VRF
SSL Profile: none
QoS DSCP: none
r01#
r01#
r01#show version
cEOSLab
Hardware version:
Serial number:
Hardware MAC address: 0242.ac8d.adef
System MAC address: 0242.ac8d.adef
Software image version: 4.23.3M
Architecture: i686
Internal build version: 4.23.3M-16431779.4233M
Internal build ID: afb8ec89-73bd-4410-b090-f000f70505bb
cEOS tools version: 1.1
Uptime: 6 weeks, 1 days, 3 hours and 13 minutes
Total memory: 8124244 kB
Free memory: 1923748 kB
r01#
r01#
r01#show ip interface brief
                                                         Address
Interface       IP Address       Status     Protocol     MTU      Owner
Ethernet1       10.0.12.1/30     up         up           1500
Ethernet2       10.0.13.1/30     up         up           1500
Loopback1       10.0.0.1/32      up         up           65535
Loopback2       192.168.0.1/32   up         up           65535
Vlan100         1.1.1.1/24       up         up           1500
r01#

And it seems to work:

$ python gnmi_get_if_config.py
Getting data for openconfig-interfaces:interfaces from r01 over gNMI…
gnmi_path:
origin: "openconfig-interfaces"
elem {
name: "interfaces"
}
{'openconfig-interfaces:interface': [{'config': {'arista-intf-augments:load-interval': 300,
'description': '',
'enabled': True,
'loopback-mode': False,
'mtu': 0,
'name': 'Ethernet2',
'openconfig-vlan:tpid': 'openconfig-vlan-types:TPID_0X8100',
'type': 'iana-if-type:ethernetCsmacd'},

Summary

It has been interesting to play with Protobuf and gNMI, but I have only scratched the surface.

Notes

My test env is here.

Other Info

ripe78 cisco telemetry.

cisco live 2019 intro to gRPC

gRPC and GPB for network engineers here.

SR and TI-LFA

Segment Routing (SR) and Topology Independent Loop Free Alternates (TI-LFA)

Intro

As part of having an MPLS SR lab, I wanted to test FRR (Fast Reroute) solutions. Arista provides support for FRR TI-LFA based on this link. Unfortunately, if you are not a customer, you can't see it 🙁

But there are other links where you can read about TI-LFA. The two from Juniper confuse me when calculating the P/Q groups at pre-convergence time…

https://blogs.juniper.net/en-us/industry-solutions-and-trends/segment-routing-sr-and-topology-independent-loop-free-alternates-ti-lfa

https://storage.googleapis.com/site-media-prod/meetings/NANOG79/2196/20200530_Bonica_The_Evolution_Of_v1.pdf

The documents above explain the evolution from Loop Free Alternates (LFA) to Remote LFA (RLFA) and finally to TI-LFA.

TI-LFA overcomes the limitations of RLFA using SR paths as repair tunnels.

I have also tried to read the IETF draft, and I didn't understand things any better 🙁

And I doubt I am going to improve it here 🙂

Cisco also has good presentations (longer and denser) about SR and TI-LFA.

https://www.ciscolive.com/c/dam/r/ciscolive/us/docs/2016/pdf/BRKRST-3020.pdf

https://www.segment-routing.net/tutorials/2016-09-27-topology-independent-lfa-ti-lfa/

The Juniper docs always mention "pre-convergence" while Cisco uses "post-convergence". I think "post" is clearer.

EOS TI-LFA Limitations

  • Backup paths are not computed for prefix segments that do not have a host mask (/32 for v4 and /128 for v6).
  • When TI-LFA is configured, the number of anycast segments generated by a node cannot exceed 10.
  • Computing TI-LFA backup paths for proxy node segments is not supported.
  • Backup paths are not computed for node segments corresponding to multi-homed prefixes. The multi-homing could be the result of them being anycast node segments, loopback interfaces on different routers advertising SIDs for the same prefix, node segments leaked between levels and thus being seen as originated from multiple L1-L2 routers.
  • Backup paths are only computed for segments that are non-ECMP.
  • Only IS-IS interfaces that are using the point-to-point network type are eligible for protection.
  • The backup paths are only computed with respect to link/node failure constraints. SRLG constraint is not yet supported.
  • Link/node protection only supported in the default VRF owing to the lack of non-default VRF support for IS-IS segment-routing.
  • Backup paths are computed in the same IS-IS level topology as the primary path.
  • Even with IS-IS GR configured, ASU2, SSO, agent restart are not hitless events for IS-IS SR LFIB routes or tunnels being
    protected by backup paths.

LAB

Based on this, I built a lab using vEOS 4.24.1.1F 64-bit on EVE-NG. All links have the default IS-IS cost of 10 (loopbacks are 1) and TI-LFA node protection is enabled globally.

Fig1. SR TI-LFA Lab

The configs are quite simple. This is l1r9; the only difference between devices is the IP addressing. The links in the diagram show the third octet of the link address range.

!
service routing protocols model multi-agent
!
hostname l1r9
!
spanning-tree mode mstp
!
aaa authorization exec default local
!
no aaa root
!
vrf instance MGMT
!
interface Ethernet1
   no switchport
   ip address 10.0.10.2/30
   isis enable CORE
   isis network point-to-point
!
interface Ethernet2
   no switchport
   ip address 10.0.11.2/30
   isis enable CORE
   isis network point-to-point
!
interface Ethernet3
   no switchport
   ip address 10.0.12.1/30
   isis enable CORE
   isis network point-to-point
!
interface Ethernet4
   no switchport
   ip address 10.0.13.1/30
   isis enable CORE
   isis network point-to-point
!
interface Loopback1
   description CORE Loopback
   ip address 10.0.0.9/32
   node-segment ipv4 index 9
   isis enable CORE
   isis metric 1
!
interface Management1
   vrf MGMT
   ip address 192.168.249.18/24
!
ip routing
ip routing vrf MGMT
!
ip route vrf MGMT 0.0.0.0/0 192.168.249.1
!
mpls ip
!
mpls label range isis-sr 800000 65536
!
router isis CORE
   net 49.0000.0001.0010.0000.0000.0009.00
   is-type level-2
   log-adjacency-changes
   timers local-convergence-delay protected-prefixes
   set-overload-bit on-startup wait-for-bgp
   !
   address-family ipv4 unicast
      bfd all-interfaces
      fast-reroute ti-lfa mode node-protection
   !
   segment-routing mpls
      router-id 10.0.0.9
      no shutdown
      adjacency-segment allocation sr-peers backup-eligible
!
management api http-commands
   protocol unix-socket
   no shutdown
   !
   vrf MGMT
      no shutdown
!

Using a script based on nornir/napalm, I gather the output of all of these commands from all the routers (a minimal sketch of such a script is shown after the command list):

"show isis segment-routing prefix-segments" -> shows if protection is enabled for these segments

"show isis segment-routing adjacency-segments" -> shows if protection is enabled for these segments

"show isis interface" -> shows state of protection configured

"show isis ti-lfa path" -> shows the repair path with the list of all the system IDs from the P-node to the Q-node for every destination/constraint tuple. You will see that even though node protection is configured a link protecting LFA is computed too. This is to fallback to link protecting LFAs whenever the node protecting LFA becomes unavailable.

"show isis ti-lfa tunnel" -> The TI-LFA repair tunnels are just internal constructs that are shared by multiple LFIB routes that compute similar repair paths. This command displays TI-LFA repair tunnels with the primary and backup via information.

"show isis segment-routing tunnel" -> command displays all the IS-IS SR tunnels. The field ‘ TI-LFA tunnel index ’ shows the index of the TI-LFA tunnel protecting the SR tunnel. The same TI-LFA tunnel that protects the LFIB route also protects the corresponding IS-IS SR tunnel.

"show tunnel fib" -> displays tunnels programmed in the tunnel FIB also includes the TI-LFA tunnels along with protected IS-IS SR tunnels.

"show mpls lfib route" -> displays the backup information along with the primary vias for all node/adjacency segments that have TI-LFA backup paths computed.

"show ip route" -> When services like LDP pseudowires, BGP LU, L2 EVPN or L3 MPLS VPN use IS-IS SR tunnels as an underlay, they are automatically protected by TI-LFA tunnels that protect the IS-IS SR tunnels. The ‘show ip route’ command displays the hierarchy of the overlay-underlay-TI-LFA tunnels like below.

This is the output of l1r3 in the initial state (no failures):

/////////////////////////////////////////////////////////////////////////
///                               Device: l1r3                         //
/////////////////////////////////////////////////////////////////////////

command = show isis segment-routing prefix-segments


System ID: 0000.0000.0003			Instance: 'CORE'
SR supported Data-plane: MPLS			SR Router ID: 10.0.0.3

Node: 11     Proxy-Node: 0      Prefix: 0       Total Segments: 11

Flag Descriptions: R: Re-advertised, N: Node Segment, P: no-PHP
                   E: Explicit-NULL, V: Value, L: Local
Segment status codes: * - Self originated Prefix, L1 - level 1, L2 - level 2
  Prefix                      SID Type       Flags                   System ID       Level Protection
  ------------------------- ----- ---------- ----------------------- --------------- ----- ----------
  10.0.0.1/32                   1 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0001  L2    node      
  10.0.0.2/32                   2 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0002  L2    node      
* 10.0.0.3/32                   3 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0003  L2    unprotected
  10.0.0.4/32                   4 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0004  L2    node      
  10.0.0.5/32                   5 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0005  L2    node      
  10.0.0.6/32                   6 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0006  L2    node      
  10.0.0.7/32                   7 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0007  L2    node      
  10.0.0.8/32                   8 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0008  L2    node      
  10.0.0.9/32                   9 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0009  L2    node      
  10.0.0.10/32                 10 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0010  L2    node      
  10.0.0.11/32                 11 Node       R:0 N:1 P:0 E:0 V:0 L:0 0000.0000.0011  L2    node      

================================================================================

command = show isis segment-routing adjacency-segments


System ID: l1r3			Instance: CORE
SR supported Data-plane: MPLS			SR Router ID: 10.0.0.3
Adj-SID allocation mode: SR-adjacencies
Adj-SID allocation pool: Base: 100000     Size: 16384
Adjacency Segment Count: 4
Flag Descriptions: F: Ipv6 address family, B: Backup, V: Value
                   L: Local, S: Set

Segment Status codes: L1 - Level-1 adjacency, L2 - Level-2 adjacency, P2P - Point-to-Point adjacency, LAN - Broadcast adjacency

Locally Originated Adjacency Segments
Adj IP Address  Local Intf     SID   SID Source                 Flags     Type     Protection
--------------- ----------- ------- ------------ --------------------- -------- ------------
      10.0.1.1         Et1  100000      Dynamic   F:0 B:1 V:1 L:1 S:0   P2P L2         node
      10.0.2.1         Et2  100001      Dynamic   F:0 B:1 V:1 L:1 S:0   P2P L2         node
      10.0.5.2         Et4  100002      Dynamic   F:0 B:1 V:1 L:1 S:0   P2P L2         node
      10.0.3.2         Et3  100003      Dynamic   F:0 B:1 V:1 L:1 S:0   P2P L2         node


================================================================================

command = show isis interface


IS-IS Instance: CORE VRF: default

  Interface Loopback1:
    Index: 12 SNPA: 0:0:0:0:0:0
    MTU: 65532 Type: loopback
    Area Proxy Boundary is Disabled
    Node segment Index IPv4: 3
    BFD IPv4 is Enabled
    BFD IPv6 is Disabled
    Hello Padding is Enabled
    Level 2:
      Metric: 1 (Passive Interface)
      Authentication mode: None
      TI-LFA protection is disabled for IPv4
      TI-LFA protection is disabled for IPv6
  Interface Ethernet1:
    Index: 13 SNPA: P2P
    MTU: 1497 Type: point-to-point
    Area Proxy Boundary is Disabled
    BFD IPv4 is Enabled
    BFD IPv6 is Disabled
    Hello Padding is Enabled
    Level 2:
      Metric: 10, Number of adjacencies: 1
      Link-ID: 0D
      Authentication mode: None
      TI-LFA node protection is enabled for the following IPv4 segments: node segments, adjacency segments
      TI-LFA protection is disabled for IPv6
  Interface Ethernet2:
    Index: 14 SNPA: P2P
    MTU: 1497 Type: point-to-point
    Area Proxy Boundary is Disabled
    BFD IPv4 is Enabled
    BFD IPv6 is Disabled
    Hello Padding is Enabled
    Level 2:
      Metric: 10, Number of adjacencies: 1
      Link-ID: 0E
      Authentication mode: None
      TI-LFA node protection is enabled for the following IPv4 segments: node segments, adjacency segments
      TI-LFA protection is disabled for IPv6
  Interface Ethernet3:
    Index: 15 SNPA: P2P
    MTU: 1497 Type: point-to-point
    Area Proxy Boundary is Disabled
    BFD IPv4 is Enabled
    BFD IPv6 is Disabled
    Hello Padding is Enabled
    Level 2:
      Metric: 10, Number of adjacencies: 1
      Link-ID: 0F
      Authentication mode: None
      TI-LFA node protection is enabled for the following IPv4 segments: node segments, adjacency segments
      TI-LFA protection is disabled for IPv6
  Interface Ethernet4:
    Index: 16 SNPA: P2P
    MTU: 1497 Type: point-to-point
    Area Proxy Boundary is Disabled
    BFD IPv4 is Enabled
    BFD IPv6 is Disabled
    Hello Padding is Enabled
    Level 2:
      Metric: 10, Number of adjacencies: 1
      Link-ID: 10
      Authentication mode: None
      TI-LFA node protection is enabled for the following IPv4 segments: node segments, adjacency segments
      TI-LFA protection is disabled for IPv6

================================================================================

command = show isis ti-lfa path

TI-LFA paths for IPv4 address family
   Topo-id: Level-2
   Destination       Constraint                     Path           
----------------- --------------------------------- -------------- 
   l1r2              exclude node 0000.0000.0002    Path not found 
                     exclude Ethernet2              l1r6           
   l1r8              exclude Ethernet4              l1r4           
                     exclude node 0000.0000.0007    l1r4           
   l1r9              exclude Ethernet4              l1r4           
                     exclude node 0000.0000.0007    l1r4           
   l1r11             exclude Ethernet4              l1r4           
                     exclude node 0000.0000.0007    l1r4           
   l1r10             exclude Ethernet3              l1r7           
                     exclude node 0000.0000.0004    l1r7           
   l1r1              exclude node 0000.0000.0001    Path not found 
                     exclude Ethernet1              Path not found 
   l1r6              exclude Ethernet4              l1r2           
                     exclude node 0000.0000.0007    l1r2           
   l1r7              exclude node 0000.0000.0007    Path not found 
                     exclude Ethernet4              l1r10          
   l1r4              exclude Ethernet3              l1r9           
                     exclude node 0000.0000.0004    Path not found 
   l1r5              exclude Ethernet2              l1r7           
                     exclude node 0000.0000.0002    l1r7           


================================================================================

command = show isis ti-lfa tunnel

Tunnel Index 2
   via 10.0.5.2, 'Ethernet4'
      label stack 3
   backup via 10.0.3.2, 'Ethernet3'
      label stack 3
Tunnel Index 4
   via 10.0.3.2, 'Ethernet3'
      label stack 3
   backup via 10.0.5.2, 'Ethernet4'
      label stack 3
Tunnel Index 6
   via 10.0.3.2, 'Ethernet3'
      label stack 3
   backup via 10.0.5.2, 'Ethernet4'
      label stack 800009 800004
Tunnel Index 7
   via 10.0.5.2, 'Ethernet4'
      label stack 3
   backup via 10.0.3.2, 'Ethernet3'
      label stack 800010 800007
Tunnel Index 8
   via 10.0.2.1, 'Ethernet2'
      label stack 3
   backup via 10.0.5.2, 'Ethernet4'
      label stack 800006 800002
Tunnel Index 9
   via 10.0.5.2, 'Ethernet4'
      label stack 3
   backup via 10.0.2.1, 'Ethernet2'
      label stack 3
Tunnel Index 10
   via 10.0.2.1, 'Ethernet2'
      label stack 3
   backup via 10.0.5.2, 'Ethernet4'
      label stack 3

================================================================================

command = show isis segment-routing tunnel

 Index    Endpoint         Nexthop      Interface     Labels       TI-LFA       
                                                                   tunnel index 
-------- --------------- ------------ ------------- -------------- ------------ 
 1        10.0.0.1/32      10.0.1.1     Ethernet1     [ 3 ]        -            
 2        10.0.0.2/32      10.0.2.1     Ethernet2     [ 3 ]        8            
 3        10.0.0.7/32      10.0.5.2     Ethernet4     [ 3 ]        7            
 4        10.0.0.4/32      10.0.3.2     Ethernet3     [ 3 ]        6            
 5        10.0.0.9/32      10.0.5.2     Ethernet4     [ 800009 ]   2            
 6        10.0.0.10/32     10.0.3.2     Ethernet3     [ 800010 ]   4            
 7        10.0.0.11/32     10.0.5.2     Ethernet4     [ 800011 ]   2            
 8        10.0.0.8/32      10.0.5.2     Ethernet4     [ 800008 ]   2            
 9        10.0.0.6/32      10.0.5.2     Ethernet4     [ 800006 ]   9            
 10       10.0.0.5/32      10.0.2.1     Ethernet2     [ 800005 ]   10           


================================================================================

command = show tunnel fib


Type 'IS-IS SR', index 1, endpoint 10.0.0.1/32, forwarding None
   via 10.0.1.1, 'Ethernet1' label 3

Type 'IS-IS SR', index 2, endpoint 10.0.0.2/32, forwarding None
   via TI-LFA tunnel index 8 label 3
      via 10.0.2.1, 'Ethernet2' label 3
      backup via 10.0.5.2, 'Ethernet4' label 800006 800002

Type 'IS-IS SR', index 3, endpoint 10.0.0.7/32, forwarding None
   via TI-LFA tunnel index 7 label 3
      via 10.0.5.2, 'Ethernet4' label 3
      backup via 10.0.3.2, 'Ethernet3' label 800010 800007

Type 'IS-IS SR', index 4, endpoint 10.0.0.4/32, forwarding None
   via TI-LFA tunnel index 6 label 3
      via 10.0.3.2, 'Ethernet3' label 3
      backup via 10.0.5.2, 'Ethernet4' label 800009 800004

Type 'IS-IS SR', index 5, endpoint 10.0.0.9/32, forwarding None
   via TI-LFA tunnel index 2 label 800009
      via 10.0.5.2, 'Ethernet4' label 3
      backup via 10.0.3.2, 'Ethernet3' label 3

Type 'IS-IS SR', index 6, endpoint 10.0.0.10/32, forwarding None
   via TI-LFA tunnel index 4 label 800010
      via 10.0.3.2, 'Ethernet3' label 3
      backup via 10.0.5.2, 'Ethernet4' label 3

Type 'IS-IS SR', index 7, endpoint 10.0.0.11/32, forwarding None
   via TI-LFA tunnel index 2 label 800011
      via 10.0.5.2, 'Ethernet4' label 3
      backup via 10.0.3.2, 'Ethernet3' label 3

Type 'IS-IS SR', index 8, endpoint 10.0.0.8/32, forwarding None
   via TI-LFA tunnel index 2 label 800008
      via 10.0.5.2, 'Ethernet4' label 3
      backup via 10.0.3.2, 'Ethernet3' label 3

Type 'IS-IS SR', index 9, endpoint 10.0.0.6/32, forwarding None
   via TI-LFA tunnel index 9 label 800006
      via 10.0.5.2, 'Ethernet4' label 3
      backup via 10.0.2.1, 'Ethernet2' label 3

Type 'IS-IS SR', index 10, endpoint 10.0.0.5/32, forwarding None
   via TI-LFA tunnel index 10 label 800005
      via 10.0.2.1, 'Ethernet2' label 3
      backup via 10.0.5.2, 'Ethernet4' label 3

Type 'TI-LFA', index 2, forwarding None
   via 10.0.5.2, 'Ethernet4' label 3
   backup via 10.0.3.2, 'Ethernet3' label 3

Type 'TI-LFA', index 4, forwarding None
   via 10.0.3.2, 'Ethernet3' label 3
   backup via 10.0.5.2, 'Ethernet4' label 3

Type 'TI-LFA', index 6, forwarding None
   via 10.0.3.2, 'Ethernet3' label 3
   backup via 10.0.5.2, 'Ethernet4' label 800009 800004

Type 'TI-LFA', index 7, forwarding None
   via 10.0.5.2, 'Ethernet4' label 3
   backup via 10.0.3.2, 'Ethernet3' label 800010 800007

Type 'TI-LFA', index 8, forwarding None
   via 10.0.2.1, 'Ethernet2' label 3
   backup via 10.0.5.2, 'Ethernet4' label 800006 800002

Type 'TI-LFA', index 9, forwarding None
   via 10.0.5.2, 'Ethernet4' label 3
   backup via 10.0.2.1, 'Ethernet2' label 3

Type 'TI-LFA', index 10, forwarding None
   via 10.0.2.1, 'Ethernet2' label 3
   backup via 10.0.5.2, 'Ethernet4' label 3

================================================================================

command = show mpls lfib route

MPLS forwarding table (Label [metric] Vias) - 14 routes 
MPLS next-hop resolution allow default route: False
Via Type Codes:
          M - MPLS via, P - Pseudowire via,
          I - IP lookup via, V - VLAN via,
          VA - EVPN VLAN aware via, ES - EVPN ethernet segment via,
          VF - EVPN VLAN flood via, AF - EVPN VLAN aware flood via,
          NG - Nexthop group via
Source Codes:
          G - gRIBI, S - Static MPLS route,
          B2 - BGP L2 EVPN, B3 - BGP L3 VPN,
          R - RSVP, LP - LDP pseudowire,
          L - LDP, M - MLDP,
          IP - IS-IS SR prefix segment, IA - IS-IS SR adjacency segment,
          IL - IS-IS SR segment to LDP, LI - LDP to IS-IS SR segment,
          BL - BGP LU, ST - SR TE policy,
          DE - Debug LFIB

 IA  100000   [1]
                via M, 10.0.1.1, pop
                 payload autoDecide, ttlMode uniform, apply egress-acl
                 interface Ethernet1
 IA  100001   [1]
                via TI-LFA tunnel index 8, pop
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.2.1, Ethernet2, label imp-null(3)
                    backup via 10.0.5.2, Ethernet4, label 800006 800002
 IA  100002   [1]
                via TI-LFA tunnel index 7, pop
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.5.2, Ethernet4, label imp-null(3)
                    backup via 10.0.3.2, Ethernet3, label 800010 800007
 IA  100003   [1]
                via TI-LFA tunnel index 6, pop
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.3.2, Ethernet3, label imp-null(3)
                    backup via 10.0.5.2, Ethernet4, label 800009 800004
 IP  800001   [1], 10.0.0.1/32
                via M, 10.0.1.1, pop
                 payload autoDecide, ttlMode uniform, apply egress-acl
                 interface Ethernet1
 IP  800002   [1], 10.0.0.2/32
                via TI-LFA tunnel index 8, pop
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.2.1, Ethernet2, label imp-null(3)
                    backup via 10.0.5.2, Ethernet4, label 800006 800002
 IP  800004   [1], 10.0.0.4/32
                via TI-LFA tunnel index 6, pop
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.3.2, Ethernet3, label imp-null(3)
                    backup via 10.0.5.2, Ethernet4, label 800009 800004
 IP  800005   [1], 10.0.0.5/32
                via TI-LFA tunnel index 10, swap 800005 
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.2.1, Ethernet2, label imp-null(3)
                    backup via 10.0.5.2, Ethernet4, label imp-null(3)
 IP  800006   [1], 10.0.0.6/32
                via TI-LFA tunnel index 9, swap 800006 
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.5.2, Ethernet4, label imp-null(3)
                    backup via 10.0.2.1, Ethernet2, label imp-null(3)
 IP  800007   [1], 10.0.0.7/32
                via TI-LFA tunnel index 7, pop
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.5.2, Ethernet4, label imp-null(3)
                    backup via 10.0.3.2, Ethernet3, label 800010 800007
 IP  800008   [1], 10.0.0.8/32
                via TI-LFA tunnel index 2, swap 800008 
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.5.2, Ethernet4, label imp-null(3)
                    backup via 10.0.3.2, Ethernet3, label imp-null(3)
 IP  800009   [1], 10.0.0.9/32
                via TI-LFA tunnel index 2, swap 800009 
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.5.2, Ethernet4, label imp-null(3)
                    backup via 10.0.3.2, Ethernet3, label imp-null(3)
 IP  800010   [1], 10.0.0.10/32
                via TI-LFA tunnel index 4, swap 800010 
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.3.2, Ethernet3, label imp-null(3)
                    backup via 10.0.5.2, Ethernet4, label imp-null(3)
 IP  800011   [1], 10.0.0.11/32
                via TI-LFA tunnel index 2, swap 800011 
                 payload autoDecide, ttlMode uniform, apply egress-acl
                    via 10.0.5.2, Ethernet4, label imp-null(3)
                    backup via 10.0.3.2, Ethernet3, label imp-null(3)

================================================================================

command = show ip route


VRF: default
Codes: C - connected, S - static, K - kernel, 
       O - OSPF, IA - OSPF inter area, E1 - OSPF external type 1,
       E2 - OSPF external type 2, N1 - OSPF NSSA external type 1,
       N2 - OSPF NSSA external type2, B - BGP, B I - iBGP, B E - eBGP,
       R - RIP, I L1 - IS-IS level 1, I L2 - IS-IS level 2,
       O3 - OSPFv3, A B - BGP Aggregate, A O - OSPF Summary,
       NG - Nexthop Group Static Route, V - VXLAN Control Service,
       DH - DHCP client installed default route, M - Martian,
       DP - Dynamic Policy Route, L - VRF Leaked,
       RC - Route Cache Route

Gateway of last resort is not set

 I L2     10.0.0.1/32 [115/11] via 10.0.1.1, Ethernet1
 I L2     10.0.0.2/32 [115/11] via 10.0.2.1, Ethernet2
 C        10.0.0.3/32 is directly connected, Loopback1
 I L2     10.0.0.4/32 [115/11] via 10.0.3.2, Ethernet3
 I L2     10.0.0.5/32 [115/21] via 10.0.2.1, Ethernet2
 I L2     10.0.0.6/32 [115/21] via 10.0.5.2, Ethernet4
 I L2     10.0.0.7/32 [115/11] via 10.0.5.2, Ethernet4
 I L2     10.0.0.8/32 [115/31] via 10.0.5.2, Ethernet4
 I L2     10.0.0.9/32 [115/21] via 10.0.5.2, Ethernet4
 I L2     10.0.0.10/32 [115/21] via 10.0.3.2, Ethernet3
 I L2     10.0.0.11/32 [115/31] via 10.0.5.2, Ethernet4
 C        10.0.1.0/30 is directly connected, Ethernet1
 C        10.0.2.0/30 is directly connected, Ethernet2
 C        10.0.3.0/30 is directly connected, Ethernet3
 I L2     10.0.4.0/30 [115/20] via 10.0.2.1, Ethernet2
 C        10.0.5.0/30 is directly connected, Ethernet4
 I L2     10.0.6.0/30 [115/20] via 10.0.3.2, Ethernet3
 I L2     10.0.7.0/30 [115/30] via 10.0.2.1, Ethernet2
                               via 10.0.5.2, Ethernet4
 I L2     10.0.8.0/30 [115/20] via 10.0.5.2, Ethernet4
 I L2     10.0.9.0/30 [115/30] via 10.0.5.2, Ethernet4
 I L2     10.0.10.0/30 [115/20] via 10.0.5.2, Ethernet4
 I L2     10.0.11.0/30 [115/30] via 10.0.5.2, Ethernet4
 I L2     10.0.12.0/30 [115/30] via 10.0.3.2, Ethernet3
                                via 10.0.5.2, Ethernet4
 I L2     10.0.13.0/30 [115/30] via 10.0.5.2, Ethernet4


================================================================================

In l1r3 we can see:

  • show isis segment-routing prefix-segments: all prefix segments are under "node" protection (apart from its own – 10.0.0.3/32).
  • show isis segment-routing adjacency-segments: all adjacency segments are under "node" protection.
  • show isis interface: all IS-IS enabled interfaces (apart from Loopback1) have TI-LFA node protection enabled for IPv4.
  • show isis ti-lfa path: here we can see link and node protection to all possible destinations in our IS-IS domain (all P routers in our BGP-free core). When node protection is not possible, link protection is calculated. The exception is l1r1 because it has only one link into the network, so if that is lost, there is no backup at all.
  • show isis ti-lfa tunnel: this can be confusing. These are the TI-LFA tunnels; the first two lines refer to the path they are protecting, and the last two lines are really the tunnel configuration. Another interesting thing here is the label stack for some backup tunnels (index 6, 7, 8). This is a way to avoid a loop. The index is used in the next command.
  • show isis segment-routing tunnel: here we see the current SR tunnels and the corresponding backup (the index refers to the command above). Label [3] is the implicit-null label. Pay attention to the endpoint "10.0.0.2/32" (as per fig2 below). We can see the primary path is via eth2 and the backup is via tunnel index 8 (via eth4 – l1r7). If you check the path to "10.0.0.2/32 – 800002" from l1r7 (output after fig2), you can see it points back to l1r3 and we would have a loop! For this reason the backup tunnel index 8 in l1r3 has a label stack to avoid this loop (800006 800002). Once l1r7 receives this packet and checks the segment labels, it sends the packet towards 800006 via eth2 (l1r6) and then l1r6 uses 800002 to finally reach l1r2 (via l1r5).
Fig2. l1r3: backup tunnel for l1r2
l1r7# show isis segment-routing tunnel
 Index    Endpoint         Nexthop      Interface     Labels       TI-LFA
                                                                   tunnel index
-------- --------------- ------------ ------------- -------------- ------------
 1        10.0.0.9/32      10.0.10.2    Ethernet3     [ 3 ]        3
 2        10.0.0.6/32      10.0.8.1     Ethernet2     [ 3 ]        1
 3        10.0.0.3/32      10.0.5.1     Ethernet1     [ 3 ]        2
 4        10.0.0.10/32     10.0.10.2    Ethernet3     [ 800010 ]   7
 5        10.0.0.11/32     10.0.10.2    Ethernet3     [ 800011 ]   4
 6        10.0.0.4/32      10.0.5.1     Ethernet1     [ 800004 ]   11
 7        10.0.0.8/32      10.0.8.1     Ethernet2     [ 800008 ]   -
 -                         10.0.10.2    Ethernet3     [ 800008 ]   -
 8        10.0.0.2/32      10.0.5.1     Ethernet1     [ 800002 ]   9
 9        10.0.0.5/32      10.0.8.1     Ethernet2     [ 800005 ]   8
 10       10.0.0.1/32      10.0.5.1     Ethernet1     [ 800001 ]   10
l1r7#
l1r7#show mpls lfib route 800006
...
IP 800006 [1], 10.0.0.6/32
via TI-LFA tunnel index 1, pop
payload autoDecide, ttlMode uniform, apply egress-acl
via 10.0.8.1, Ethernet2, label imp-null(3)
backup via 10.0.10.2, Ethernet3, label 800008 800006
l1r7#
l1r7#show mpls lfib route 800002
...
IP 800002 [1], 10.0.0.2/32
via TI-LFA tunnel index 9, swap 800002
payload autoDecide, ttlMode uniform, apply egress-acl
via 10.0.5.1, Ethernet1, label imp-null(3)
backup via 10.0.8.1, Ethernet2, label imp-null(3)
  • show tunnel fib: you can see all “IS-IS SR” and “TI-LFA” tunnels defined. It is like a merge of “show isis segment-routing tunnel” and “show isis ti-lfa tunnel”.
  • show mpls lfib route: you can see the programmed labels and the TI-LFA backups. I get confused when I see "imp-null" and then pop/swap for the same entry…
  • show ip route: nothing really interesting without L3VPNs.

Testing

OK, to really test TI-LFA you need to generate labelled traffic, and with a high enough packet rate to see whether you get close to the promised 50ms recovery.

So I have had to make some changes:

  • create an L3VPN CUST-A (EVPN) in l1r3 and l1r9, so they are PEs
  • l1r1 and l1r11 are CPEs in VRF CUST-A

All other devices have no changes

We need to test with and without TI-LFA enabled. The test I have done is to ping from l1r1 to l1r11 and drop the link l1r3-l1r7, with TI-LFA enabled/disabled on l1r3.

Fig3 – Testing Scenario

Routing changes with TI-LFA enabled


BEFORE DROPPING LINK
======

l1r3#show ip route vrf CUST-A

 B I      10.0.13.0/30 [200/0] via 10.0.0.9/32, IS-IS SR tunnel index 5, label 116384
                                  via TI-LFA tunnel index 4, label 800009
                                     via 10.0.5.2, Ethernet4, label imp-null(3)
                                     backup via 10.0.3.2, Ethernet3, label imp-null(3)
 C        192.168.0.3/32 is directly connected, Loopback2
 B I      192.168.0.9/32 [200/0] via 10.0.0.9/32, IS-IS SR tunnel index 5, label 116384
                                    via TI-LFA tunnel index 4, label 800009
                                       via 10.0.5.2, Ethernet4, label imp-null(3)
                                       backup via 10.0.3.2, Ethernet3, label imp-null(3)

AFTER DROPPING LINK
======

l1r3#show ip route vrf CUST-A

 B I      10.0.13.0/30 [200/0] via 10.0.0.9/32, IS-IS SR tunnel index 5, label 116384
                                  via TI-LFA tunnel index 11, label 800009
                                     via 10.0.3.2, Ethernet3, label imp-null(3)
                                     backup via 10.0.2.1, Ethernet2, label 800005
 C        192.168.0.3/32 is directly connected, Loopback2
 B I      192.168.0.9/32 [200/0] via 10.0.0.9/32, IS-IS SR tunnel index 5, label 116384
                                    via TI-LFA tunnel index 11, label 800009
                                       via 10.0.3.2, Ethernet3, label imp-null(3)

Ping results

TI-LFA enabled in L1R3  TEST1
=========================

bash-4.2# ping -f 10.0.13.2
PING 10.0.13.2 (10.0.13.2) 56(84) bytes of data.
..................^C                                                                                                      
--- 10.0.13.2 ping statistics ---
1351 packets transmitted, 1333 received, 1% packet loss, time 21035ms
rtt min/avg/max/mdev = 21.081/348.764/1722.587/487.280 ms, pipe 109, ipg/ewma 15.582/67.643 ms
bash-4.2# 


NO TI-LFA enabled in L1R3  TEST1
=========================

bash-4.2# ping -f 10.0.13.2
PING 10.0.13.2 (10.0.13.2) 56(84) bytes of data.
.............................................E...................................................................................^C            
--- 10.0.13.2 ping statistics ---
2274 packets transmitted, 2172 received, +1 errors, 4% packet loss, time 36147ms
rtt min/avg/max/mdev = 20.965/88.300/542.279/86.227 ms, pipe 34, ipg/ewma 15.903/73.403 ms
bash-4.2# 

Summary Testing

With TI-LFA enabled in l1r3, we have lost 18 packets (around 280ms)

Without TI-LFA in l1r3, we have lost 102 packets (around 1621ms =~ 1.6s)
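These downtime figures are just rough estimates derived from the flood-ping statistics above: lost packets multiplied by the reported inter-packet gap (ipg/ewma).

# rough outage estimate from the flood-ping output above
print(18 * 15.582)     # ~280 ms of traffic lost with TI-LFA enabled
print(102 * 15.903)    # ~1622 ms (~1.6 s) lost without TI-LFA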

Keeping in mind that this lab is based on VMs (vEOS) running inside another VM (EVE-NG), that is not a bad result.

It is still far from the 50ms target, but it shows the improvement you get from enabling TI-LFA.

Docker + Kubernetes

For some time, I have wanted to take a look at Kubernetes. There is a lot of talk about microservices and the cloud, and after attending some meetups I wasn't sure what all of this was about, so I signed up for KodeKloud to learn about it.

So far, I have completed the beginners course for Docker and Kubernetes. To be honest, I think the product is very good value for money.

I have been using Docker a bit over the last couple of months, but I still wanted some more background to improve my knowledge.

I was surprised to read that Kubernetes pods rely on Docker images.

Docker Notes

Docker commands

docker run -it xxx (interactive+pseudoterminal)
docker run -d xxx (detach)
docker attach ID (attach)
docker run --name TEST xxx (provide name to container)
docker run -p 80:5000 xxx (maps host port 80 to container port 5000)

docker run -v /opt/datadir:/var/lib/mysql mysql (map a host folder to container folder for data persistence)

docker run -e APP_COLOR=blue xxx (pass env var to the container)
docker inspect "container"  -> check IP, env vars, etc
docker logs "container"

docker build . -t account_name/app_name
docker login
docker push account_name/app_name

docker -H=remote-docker-engine:2375 xxx

cgroups: restrict resources in container
  docker run --cpus=.5  xxx (no more than 50% CPU)
  docker run --memory=100m xxx (no more than 100M memory)

Dockerfile

----
FROM ubuntu
ENTRYPOINT ["sleep"]
CMD ["5"]        --> if you don't pass any value in "docker run ..." it uses 5 by default.
----
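With ENTRYPOINT plus CMD, whatever argument you pass on the command line replaces the default CMD (the image name below is just a placeholder):

docker build -t mysleeper .
docker run mysleeper        --> runs "sleep 5" (default CMD)
docker run mysleeper 10     --> runs "sleep 10" (CMD overridden)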

Docker Compose

$ cat docker-compose.yml
version: "3"
services:
 db:
  image: postgres
  environment:
    - POSTGRES_PASSWORD=mysecretpassword
 wordpress:
  image: wordpress
  links:
    - db
  ports:
    - 8085:80


verify file: $ docker-compose config

Docker Volumes

docker volume create NAME  --> create /var/lib/docker/volumes/NAME

docker run -v NAME:/var/lib/mysql mysql  (docker volume)
or
docker run -v PATH:/var/lib/mysql mysql  (local folder)
or
docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql

Docker Networking

networks: --network=xxx
 bridge (default)
 none   isolation
 host (container uses the host network stack directly, no port mapping needed)

docker network create --driver bridge --subnet x.x.x.x/x NAME
docker network ls
               inspect

Docker Swarm

I didn't use this, I just covered the theory. This is for clustering Docker hosts: managers and workers.

   manager: docker swarm init
   workers: docker swarm join --token xxx
   manager: docker service create --replicas=3 my-web-server

Kubernetes Notes

container + orchestration
 (docker)    (kubernetes)

node: virtual or physical, kube is installed here
cluster: set of nodes
master: node that manages the cluster

kube components:
  api,
  etcd (key-value store),
  scheduler (distribute load),
  kubelet (agent),
  controller (brain: check status),
  container runtime (sw to run containers: docker)

master: api, etcd, controller, scheduler,
   $ kubectl cluster-info
           get nodes -o wide (extra info)
node: kubelet, container

Setup Kubernetes with minikube

Setting up Kubernetes doesn't look like an easy task, so there are tools to do it for you, like microk8s, kubeadm (my laptop needs more RAM, it can't handle 1 master + 2 nodes) and minikube.

minikube needs: VirtualBox (I couldn't make it work with kvm2…) and kubectl

Install kubectl

I assume virtualbox is already installed

$ curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"

$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
$ kubectl version --client

Install minikube

$ grep -E --color 'vmx|svm' /proc/cpuinfo   // verify your CPU supports virtualization
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
>   && chmod +x minikube
$ sudo install minikube /usr/local/bin/

Start/Status minikube

$ minikube start --driver=virtualbox  --> it takes time!!!! 2cpu + 2GB ram !!!!
😄  minikube v1.12.3 on Debian bullseye/sid
✨  Using the virtualbox driver based on user configuration
💿  Downloading VM boot image ...
    > minikube-v1.12.2.iso.sha256: 65 B / 65 B [-------------] 100.00% ? p/s 0s
    > minikube-v1.12.2.iso: 173.73 MiB / 173.73 MiB [] 100.00% 6.97 MiB p/s 25s
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.3 preload ...
    > preloaded-images-k8s-v5-v1.18.3-docker-overlay2-amd64.tar.lz4: 510.91 MiB
🔥  Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.3 on Docker 19.03.12 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
🏄  Done! kubectl is now configured to use "minikube"

$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   5m52s   v1.18.3

$ minikube stop  // stop the virtualbox VM to free up resources once you finish

Basic Test

$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
deployment.apps/hello-minikube created

$ kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
hello-minikube   1/1     1            1           22s

$ kubectl expose deployment hello-minikube --type=NodePort --port=8080
service/hello-minikube exposed

$ minikube service hello-minikube --url
http://192.168.99.100:30020

$ kubectl delete services hello-minikube
$ kubectl delete deployment hello-minikube
$ kubectl get pods

Pods

Based on documentation:

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.
A Pod is a group of one or more containers, with shared storage/network resources, and a specification for how to run the containers. A Pod's contents are always co-located and co-scheduled, and run in a shared context. A Pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled. In non-cloud contexts, applications executed on the same physical or virtual machine are analogous to cloud applications executed on the same logical host.
$ kubectl run nginx --image=nginx
$ kubectl describe pod nginx
$ kubectl get pods -o wide
$ kubectl delete pod nginx

Pods – Yaml

Pod yaml structure:

pod-definition.yml:
---
apiVersion: v1
kind: (type of object: Pod, Service, ReplicaSet, Deployment)
metadata: (only valid k-v)
 name: myapp-pod
 labels: (any kind of k-v)
   app: myapp
   type: front-end
spec:
  containers:
   - name: nginx-container
     image: nginx

Example:

$ cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
    type: frontend
spec:
  containers:
  - name: nginx
    image: nginx

$ kubectl apply -f pod.yaml 
$ kubectl get pods

Replica-Set

Based on documentation:

A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.
> cat replicaset-definition.yml
---
 apiVersion: apps/v1
 kind: ReplicaSet
 metadata:
   name: myapp-replicaset
   labels:
     app: myapp
     type: front-end
 spec:
   template:
     metadata:      -------
       name: nginx         |
       labels:             |
         app: nginx        |
         type: frontend    |-> POD definition
     spec:                 |
       containers:         |
       - name: nginx       |
         image: nginx  -----
   replicas: 3
   selector:       <-- main difference from replication-controller
     matchLabels:
       type: front-end
       
> kubectl create -f replicaset-definition.yml
> kubectl get replicaset
> kubectl get pods

> kubectl delete replicaset myapp-replicaset

How to scale via replica-set

> kubectl replace -f replicaset-definition.yml  (first update the file to replicas: 6)

> kubectl scale --replicas=6 -f replicaset-definition.yml  // no need to modify file

> kubectl scale --replicas=6 replicaset myapp-replicaset   // no need to modify file

> kubectl edit replicaset myapp-replicaset (NAME of the replicaset!!!)

> kubectl describe replicaset myapp-replicaset

> kubectl get rs new-replica-set -o yaml > new-replica-set.yaml ==> returns the rs definition in yaml!

Deployments

Based on documentation:

A Deployment provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.

Example:

cat deployment-definition.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
      - name: nginx-controller
        image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end


> kubectl create -f deployment-definition.yml
> kubectl get deployments
> kubectl get replicaset
> kubectl get pods

> kubectl get all

Update/Rollback

From the documentation:

By default, a Deployment follows a "rolling update" strategy: it replaces pods one at a time (destroy one, create a new one), so the update doesn't cause an outage.

$ kubectl create -f deployment.yml --record
$ kubectl rollout status deployment/myapp-deployment
$ kubectl rollout history deployment/myapp-deployment
$ kubectl rollout undo deployment/myapp-deployment ==> rollback!!!
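
For reference, the pace of the rolling update can be tuned in the Deployment spec itself. This is just a sketch of the standard apps/v1 strategy fields (the values are an example, not taken from the deployment above):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 pod below the desired count during the update
      maxSurge: 1         # at most 1 extra pod above the desired count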

Networking

Not handled natively by Kubernetes; you need another tool like Calico, Weave, etc. More info here. This has not been covered in detail yet. It looks complex (and that is a network engineer talking…)

Services

Based on documentation:

An abstract way to expose an application running on a set of Pods as a network service.
With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
types:
   NodePort: like docker port-mapping (exposes the service on a port of every node, range 30000-32767)
   ClusterIP: a virtual IP reachable only inside the cluster (the default type)
   LoadBalancer: provisions an external load balancer via the cloud provider

Examples:

nodeport
--------
service: like a virtual server
  targetPort - port on the pod: 80
  port       - port on the service: 80
  nodePort   - port on the node: 30080
  
service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: mypapp-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30080  (range: 30000-32767)
  selector:
    app: myapp
    type: front-end

> kubectl create -f service-definition.yml
> kubectl get services
> minikube service mypapp-service


clusterip: 
---------
service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  selector:
    app: myapp
    type: back-end


loadbalancer: GCP, AWS, Azure, etc. only (needs a cloud provider integration)
------------
service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  type: LoadBalancer
  ports:
  - targetPort: 80
    port: 80
    nodePort: 30080
  selector:
    app: myapp


> kubectl create -f service-definition.yml
> kubectl get services

Microservices architecture example

Diagram
=======

voting-app     result-app
 (python)       (nodejs)
   |(1)           ^ (4)
   v              |
in-memoryDB       db
 (redis)       (postgresql)
    ^ (2)         ^ (3)
    |             |
    ------- -------
          | |
         worker
          (.net)

1- deploy containers -> deploy PODs (deployment)
2- enable connectivity -> create service clusterIP for redis
                          create service clusterIP for postgres
3- external access -> create service NodePort for voting
                      create service NodePort for result

Code here. Steps:

$ kubectl create -f voting-app-deployment.yml
$ kubectl create -f voting-app-service.yml

$ kubectl create -f redis-deployment.yml
$ kubectl create -f redis-service.yml

$ kubectl create -f postgres-deployment.yml
$ kubectl create -f postgres-service.yml

$ kubectl create -f worker-deployment.yml

$ kubectl create -f result-app-deployment.yml
$ kubectl create -f result-app-service.yml

Verify:

$ minikube service voting-service --url
http://192.168.99.100:30004
$ minikube service result-service --url
http://192.168.99.100:30005
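
For illustration, the redis pieces referenced above could look roughly like this (a sketch only, with made-up names; the real files are in the linked repo):

redis-deployment.yml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - containerPort: 6379

redis-service.yml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  type: ClusterIP
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis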

Optical in Networking: 101

This is a very good presentation about optical stuff from NANOG 70 (2017). And I noticed there is an updated version from NANOG 77 (2019). I watched the 2017 one (2h) and there is something that always bites me: dB vs dBm.

A bit more info about dB vs dBm: here and here
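
As a quick reminder of the standard definitions: dB expresses a ratio between two power levels, dB = 10*log10(P1/P2), while dBm is an absolute power referenced to 1 mW, dBm = 10*log10(P/1mW). So 0 dBm is 1 mW, +3 dB roughly doubles the power and -3 dB roughly halves it.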

Close to the end, there are some common questions about optics that he answers. I liked the one about whether "looking at the lasers can make you blind" and the point that it is worth cleaning your fibers. A bit about cleaning here.

cEOS Netconf – Ncclient

I am still trying to play with / understand OpenConfig/YANG/Netconf modelling. Initially I tried to use ansible to configure EOS via netconf but I didn't get very far 🙁

I have found an Arista blog to deal with netconf using the python library ncclient.

This is my adapted code. Keep in mind that I think there is a typo/bug in the Arista blog in "def irbrpc(..)": it should return "snetrpc" instead of "irbrpc". This is the error I had:

Traceback (most recent call last):
File "eos-ncc.py", line 171, in
main()
File "eos-ncc.py", line 168, in main
execrpc(hostip, uname, passw, rpc)
File "eos-ncc.py", line 7, in execrpc
rpcreply = conn.dispatch(to_ele(rpc))
File "xxx/lib/python3.7/site-packages/ncclient/xml_.py", line 126, in to_ele
return x if etree.iselement(x) else etree.fromstring(x.encode('UTF-8'), parser=_get_parser(huge_tree))
AttributeError: 'function' object has no attribute 'encode'

After a couple of prints in "ncclient/xml_.py" I could see "x" was a function but I couldn't understand why. Just by chance I noticed the typo in the return.
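
For context, execrpc() is basically a thin wrapper around ncclient's dispatch(); this is a minimal sketch of what it does, based on the calls visible in the traceback (host, credentials and the netconf port are placeholders):

from ncclient import manager
from ncclient.xml_ import to_ele

def execrpc(host, user, passw, rpc):
    # open a netconf-over-ssh session to the switch and send a raw RPC
    with manager.connect(host=host, port=830, username=user, password=passw,
                         hostkey_verify=False, timeout=30) as conn:
        reply = conn.dispatch(to_ele(rpc))
        print(reply)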

As well, I couldn't configure the vxlan interface using XML as per the blog, and I had to remove it and add it via an RPC call with CLI commands ("intfrpcvxlan_cli"). This is the error I had:

Traceback (most recent call last):
File "eos-ncc.py", line 171, in
main()
File "eos-ncc.py", line 168, in main
execrpc(hostip, uname, passw, rpc)
File "eos-ncc.py", line 7, in execrpc
rpcreply = conn.dispatch(to_ele(rpc))
File "xxx/lib/python3.7/site-packages/ncclient/manager.py", line 236, in execute
huge_tree=self._huge_tree).request(*args, **kwds)
File "xxx/lib/python3.7/site-packages/ncclient/operations/retrieve.py", line 239, in request
return self._request(node)
File "xxx/lib/python3.7/site-packages/ncclient/operations/rpc.py", line 348, in _request
raise self._reply.error
ncclient.operations.rpc.RPCError: Request could not be completed because leafref at path "/interfaces/interface[name='Vxlan1']/name" had error "leaf value (Vxlan1) not present in reference path (../config/name)"

So in my script, I make two calls and print the reply:

$ python eos-ncc.py
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:7b1be88e-36b7-4289-a0d2-396a0f21cf5e"><ok></ok></rpc-reply>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:750a50be-3534-442b-bad4-2f8c916afd77"><ok></ok></rpc-reply>

And this is what the logs show:

## first rpc call
2020-08-13T12:52:41.937079+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_ENTERED: User tomas entered configuration session session630614618267084 on NETCONF (172.27.0.1)
2020-08-13T12:52:42.302111+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_COMMIT_SUCCESS: User tomas committed configuration session session630614618267084 successfully on NETCONF (172.27.0.1)
2020-08-13T12:52:42.302928+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_EXITED: User tomas exited configuration session session630614618267084 on NETCONF (172.27.0.1)
2020-08-13T12:52:42.325878+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'HostInject' to start in role 'ActiveSupervisor'
2020-08-13T12:52:42.334151+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'ArpSuppression' to start in role 'ActiveSupervisor'
2020-08-13T12:52:42.369660+00:00 r01 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan100 (VLAN_100), changed state to up
2020-08-13T12:52:42.527568+00:00 r01 ProcMgr-worker: %PROCMGR-6-WORKER_WARMSTART: ProcMgr worker warm start. (PID=553)
2020-08-13T12:52:42.557663+00:00 r01 ProcMgr-worker: %PROCMGR-7-NEW_PROCESSES: New processes configured to run under ProcMgr control: ['ArpSuppression', 'HostInject']
2020-08-13T12:52:42.570208+00:00 r01 ProcMgr-worker: %PROCMGR-7-PROCESSES_ADOPTED: ProcMgr (PID=553) adopted running processes: (SharedSecretProfile, PID=1024) (Lldp, PID=832) (SlabMonitor, PID=555) (Pim, PID=1156) (MplsUtilLsp, PID=902) (Mpls, PID=903) (Isis, PID=1087) (PimBidir, PID=1164) (Igmp, PID=1172) (Acl, PID=920) (StaticRoute, PID=1060) (IgmpSnooping, PID=1030) (IpRib, PID=1064) (Stp, PID=939) (KernelNetworkInfo, PID=940) (Etba, PID=1073) (KernelMfib, PID=1139) (ConnectedRoute, PID=1076) (RouteInput, PID=1080) (EvpnrtrEncap, PID=1082) (McastCommon6, PID=956) (ConfigAgent, PID=702) (Fru, PID=703) (Launcher, PID=704) (Bgp, PID=1089) (McastCommon, PID=834) (SuperServer, PID=836) (OpenConfig, PID=839) (LacpTxAgent, PID=970) (AgentMonitor, PID=845) (Snmp, PID=848) (PortSec, PID=850) (Ira, PID=852) (IgmpHostProxy, PID=1146) (EventMgr, PID=862) (Sysdb, PID=607) (CapiApp, PID=866) (Arp, PID=995) (StpTxRx, PID=871) (KernelFib, PID=1000) (StageMgr, PID=700) (Lag, PID=876) (Qos, PID=1005) (L2Rib, PID=1008) (PimBidirDf, PID=1137) (Tunnel, PID=883) (PimBsr, PID=1150) (Msdp, PID=1142) (BgpCliHelper, PID=1067) (TopoAgent, PID=1017) (Aaa, PID=890) (StpTopology, PID=891) (Ebra, PID=1022) (ReloadCauseAgent, PID=1023)
2020-08-13T12:52:42.586632+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'HostInject' starting with PID=23450 (PPID=553) -- execing '/usr/bin/HostInject'
2020-08-13T12:52:42.604711+00:00 r01 ProcMgr-worker: %PROCMGR-7-WORKER_WARMSTART_DONE: ProcMgr worker warm start done. (PID=553)
2020-08-13T12:52:42.604786+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'ArpSuppression' starting with PID=23452 (PPID=553) -- execing '/usr/bin/ArpSuppression'
2020-08-13T12:52:42.749880+00:00 r01 HostInject: %AGENT-6-INITIALIZED: Agent 'HostInject' initialized; pid=23453
2020-08-13T12:52:43.102567+00:00 r01 ArpSuppression: %AGENT-6-INITIALIZED: Agent 'ArpSuppression' initialized; pid=23452

## second rpc call
2020-08-13T12:52:43.250995+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_ENTERED: User tomas entered configuration session session630615932519210 on NETCONF (172.27.0.1)
2020-08-13T12:52:43.465035+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_COMMIT_SUCCESS: User tomas committed configuration session session630615932519210 successfully on NETCONF (172.27.0.1)
2020-08-13T12:52:43.466480+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_EXITED: User tomas exited configuration session session630615932519210 on NETCONF (172.27.0.1)
2020-08-13T12:52:43.472728+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'VxlanSwFwd' to start in role 'ActiveSupervisor'
2020-08-13T12:52:43.475470+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'Vxlan' to start in role 'ActiveSupervisor'
2020-08-13T12:52:43.674498+00:00 r01 ProcMgr-worker: %PROCMGR-6-WORKER_WARMSTART: ProcMgr worker warm start. (PID=553)
2020-08-13T12:52:43.701854+00:00 r01 ProcMgr-worker: %PROCMGR-7-NEW_PROCESSES: New processes configured to run under ProcMgr control: ['Vxlan', 'VxlanSwFwd']
2020-08-13T12:52:43.714484+00:00 r01 ProcMgr-worker: %PROCMGR-7-PROCESSES_ADOPTED: ProcMgr (PID=553) adopted running processes: (SharedSecretProfile, PID=1024) (Lldp, PID=832) (SlabMonitor, PID=555) (Pim, PID=1156) (MplsUtilLsp, PID=902) (Mpls, PID=903) (Isis, PID=1087) (PimBidir, PID=1164) (Igmp, PID=1172) (Acl, PID=920) (HostInject, PID=23450) (ArpSuppression, PID=23452) (StaticRoute, PID=1060) (IgmpSnooping, PID=1030) (IpRib, PID=1064) (Stp, PID=939) (KernelNetworkInfo, PID=940) (Etba, PID=1073) (KernelMfib, PID=1139) (ConnectedRoute, PID=1076) (RouteInput, PID=1080) (EvpnrtrEncap, PID=1082) (McastCommon6, PID=956) (ConfigAgent, PID=702) (Fru, PID=703) (Launcher, PID=704) (Bgp, PID=1089) (McastCommon, PID=834) (SuperServer, PID=836) (OpenConfig, PID=839) (LacpTxAgent, PID=970) (AgentMonitor, PID=845) (Snmp, PID=848) (PortSec, PID=850) (Ira, PID=852) (IgmpHostProxy, PID=1146) (EventMgr, PID=862) (Sysdb, PID=607) (CapiApp, PID=866) (Arp, PID=995) (StpTxRx, PID=871) (KernelFib, PID=1000) (StageMgr, PID=700) (Lag, PID=876) (Qos, PID=1005) (L2Rib, PID=1008) (PimBidirDf, PID=1137) (Tunnel, PID=883) (PimBsr, PID=1150) (Msdp, PID=1142) (BgpCliHelper, PID=1067) (TopoAgent, PID=1017) (Aaa, PID=890) (StpTopology, PID=891) (Ebra, PID=1022) (ReloadCauseAgent, PID=1023)
2020-08-13T12:52:43.731810+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'Vxlan' starting with PID=23482 (PPID=553) -- execing '/usr/bin/Vxlan'
2020-08-13T12:52:43.746053+00:00 r01 ProcMgr-worker: %PROCMGR-7-WORKER_WARMSTART_DONE: ProcMgr worker warm start done. (PID=553)
2020-08-13T12:52:43.746199+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'VxlanSwFwd' starting with PID=23484 (PPID=553) -- execing '/usr/bin/VxlanSwFwd'
2020-08-13T12:52:43.942447+00:00 r01 VxlanSwFwd: %AGENT-6-INITIALIZED: Agent 'VxlanSwFwd' initialized; pid=23487
2020-08-13T12:52:43.974473+00:00 r01 Vxlan: %AGENT-6-INITIALIZED: Agent 'Vxlan' initialized; pid=23485
2020-08-13T12:52:44.310150+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'Fhrp' to start in role 'AllSupervisors'
2020-08-13T12:52:44.512110+00:00 r01 ProcMgr-worker: %PROCMGR-6-WORKER_WARMSTART: ProcMgr worker warm start. (PID=553)
2020-08-13T12:52:44.538052+00:00 r01 ProcMgr-worker: %PROCMGR-7-NEW_PROCESSES: New processes configured to run under ProcMgr control: ['Fhrp']
2020-08-13T12:52:44.550918+00:00 r01 ProcMgr-worker: %PROCMGR-7-PROCESSES_ADOPTED: ProcMgr (PID=553) adopted running processes: (SharedSecretProfile, PID=1024) (Lldp, PID=832) (SlabMonitor, PID=555) (Pim, PID=1156) (MplsUtilLsp, PID=902) (Mpls, PID=903) (Isis, PID=1087) (PimBidir, PID=1164) (Igmp, PID=1172) (Acl, PID=920) (HostInject, PID=23450) (ArpSuppression, PID=23452) (StaticRoute, PID=1060) (IgmpSnooping, PID=1030) (IpRib, PID=1064) (Stp, PID=939) (KernelNetworkInfo, PID=940) (Vxlan, PID=23482) (Etba, PID=1073) (KernelMfib, PID=1139) (ConnectedRoute, PID=1076) (RouteInput, PID=1080) (EvpnrtrEncap, PID=1082) (McastCommon6, PID=956) (ConfigAgent, PID=702) (Fru, PID=703) (Launcher, PID=704) (Bgp, PID=1089) (McastCommon, PID=834) (SuperServer, PID=836) (OpenConfig, PID=839) (LacpTxAgent, PID=970) (AgentMonitor, PID=845) (Snmp, PID=848) (PortSec, PID=850) (Ira, PID=852) (IgmpHostProxy, PID=1146) (EventMgr, PID=862) (Sysdb, PID=607) (CapiApp, PID=866) (Arp, PID=995) (StpTxRx, PID=871) (KernelFib, PID=1000) (StageMgr, PID=700) (VxlanSwFwd, PID=23484) (Lag, PID=876) (Qos, PID=1005) (L2Rib, PID=1008) (PimBidirDf, PID=1137) (Tunnel, PID=883) (PimBsr, PID=1150) (Msdp, PID=1142) (BgpCliHelper, PID=1067) (TopoAgent, PID=1017) (Aaa, PID=890) (StpTopology, PID=891) (Ebra, PID=1022) (ReloadCauseAgent, PID=1023)
2020-08-13T12:52:44.565230+00:00 r01 ProcMgr-worker: %PROCMGR-7-WORKER_WARMSTART_DONE: ProcMgr worker warm start done. (PID=553)
2020-08-13T12:52:44.565339+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'Fhrp' starting with PID=23491 (PPID=553) -- execing '/usr/bin/Fhrp'
2020-08-13T12:52:44.720343+00:00 r01 Fhrp: %AGENT-6-INITIALIZED: Agent 'Fhrp' initialized; pid=23493

I would still like to be able to get the full config via netconf, just by copy/pasting the RPC in the ssh shell (like Juniper) or maybe using ydk like this. And, keep dreaming, to be able to fully configure the switch via netconf/ansible.
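
In theory, the standard get-config operation should already return whatever the device exposes over netconf; a minimal sketch with ncclient (same placeholder connection parameters and imports as in the sketch above, untested against cEOS):

def get_running_config(host, user, passw):
    # <get-config> against the running datastore
    with manager.connect(host=host, port=830, username=user, password=passw,
                         hostkey_verify=False) as conn:
        reply = conn.get_config(source="running")
        print(reply)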

Streaming vEOS Telemetry

One thing I wanted to get my hands dirty with is telemetry. I found this blog post from Arista and I have tried to use it for vEOS.

As per the blog, I had to install go and docker in my VM (debian) on GCP.

Installed docker via aptitude:

# aptitude install docker.io
# aptitude install docker-compose
# aptitude install telnet

Installed golang by updating .bashrc:

########################
# Go configuration
########################
#
# git clone -b v0.0.4 https://github.com/wfarr/goenv.git $HOME/.goenv
if [ ! -d "$HOME/.goenv" ]; then
    git clone https://github.com/syndbg/goenv.git $HOME/.goenv
fi
#export GOPATH="$HOME/storage/golang/go"
#export GOBIN="$HOME/storage/golang/go/bin"
#export PATH="$GOPATH/bin:$PATH"
if [ -d "$HOME/.goenv"   ]
then
    export GOENV_ROOT="$HOME/.goenv"
    export PATH="$GOENV_ROOT/bin:$PATH"
    if  type "goenv" &> /dev/null; then
        eval "$(goenv init -)"
        # Add the version to my prompt
        __goversion (){
            if  type "goenv" &> /dev/null; then
                goenv_go_version=$(goenv version | sed -e 's/ .*//')
                printf $goenv_go_version
            fi
        }
        export PS1="go:\$(__goversion)|$PS1"
        export PATH="$GOROOT/bin:$PATH"
        export PATH="$PATH:$GOPATH/bin"
    fi
fi

################## End GoLang #####################

Then I started a new bash session to trigger the installation of goenv, and installed a go version:

$ goenv install 1.14.6
$ goenv global 1.14.6

Now we can start the docker containers for influx and grafana:

mkdir telemetry
cd telemetry
mkdir influxdb_data
mkdir grafana_data

cat docker-compose.yaml 
version: "3"

services:
 influxdb:
   container_name: influxdb-tele
   environment:
     INFLUXDB_DB: grpc
     INFLUXDB_ADMIN_USER: "admin"
     INFLUXDB_ADMIN_PASSWORD: "arista"
     INFLUXDB_USER: tac
     INFLUXDB_USER_PASSWORD: arista
     INFLUXDB_RETENTION_ENABLED: "false"
     INFLUXDB_OPENTSDB_0_ENABLED: "true"
     INFLUXDB_OPENTSDB_BIND_ADDRESS: ":4242"
     INFLUXDB_OPENTSDB_DATABASE: "grpc"
   ports:
     - '8086:8086'
     - '4242:4242'
     - '8083:8083'
   networks:
     - monitoring
   volumes:
     - influxdb_data:/var/lib/influxdb
   command:
     - '-config'
     - '/etc/influxdb/influxdb.conf'
   image: influxdb:latest
   restart: always

 grafana:
   container_name: grafana-tele
   environment:
     GF_SECURITY_ADMIN_USER: admin
     GF_SECURITY_ADMIN_PASSWORD: arista
   ports:
     - '3000:3000'
   networks:
     - monitoring
   volumes:
     - grafana_data:/var/lib/grafana
   image: grafana/grafana
   restart: always

networks:
 monitoring:

volumes:
 influxdb_data: {}
 grafana_data: {}

Now we should be able to start the containers:

sudo docker-compose up -d    // start containers
sudo docker-compose down -v  // for stopping containers

sudo docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED       STATUS       PORTS                                                                    NAMES
cad339ead2ee   influxdb:latest   "/entrypoint.sh -con…"   5 hours ago   Up 5 hours   0.0.0.0:4242->4242/tcp, 0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp   influxdb-tele
ef88acc47ee3   grafana/grafana   "/run.sh"                5 hours ago   Up 5 hours   0.0.0.0:3000->3000/tcp                                                   grafana-tele

sudo docker network ls
NETWORK ID     NAME                   DRIVER    SCOPE
fe19e7876636   bridge                 bridge    local
0a8770578f3f   host                   host      local
6e128a7682f1   none                   null      local
3d27d0ed3ab3   telemetry_monitoring   bridge    local

sudo docker network inspect telemetry_monitoring
[
    {
        "Name": "telemetry_monitoring",
        "Id": "3d27d0ed3ab3b0530441206a128d849434a540f8e5a2c109ee368b01052ed418",
        "Created": "2020-08-12T11:22:03.05783331Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "cad339ead2eeb0b479bd6aa024cb2150fb1643a0a4a59e7729bb5ddf088eba19": {
                "Name": "influxdb-tele",
                "EndpointID": "e3c7f853766ed8afe6617c8fac358b3302de41f8aeab53d429ffd1a28b6df668",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "ef88acc47ee30667768c5af9bbd70b95903d3690c4d80b83ba774b298665d15d": {
                "Name": "grafana-tele",
                "EndpointID": "3fe2b424cbb66a93e9e06f4bcc2e7353a0b40b2d56777c8fee8726c96c97229a",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "monitoring",
            "com.docker.compose.project": "telemetry"
        }
    }
]

Now we have to generate the octsdb binary and copy it to the switches as per the instructions:

$ go get github.com/aristanetworks/goarista/cmd/octsdb
$ cd $GOPATH/src/github.com/aristanetworks/goarista/cmd/octsdb

$ GOOS=linux GOARCH=386 go build   // I used this one
$ GOOS=linux GOARCH=amd64 go build // if you use EOS 64 Bits

$ scp octsdb user@SWITCH_IP:/mnt/flash/

An important thing is the configuration file for octsdb. I struggled trying to get a config file that gave me CPU and interface counters: all the examples are based on hardware platforms, but I am using containers/VMs. Using this other blog post, I worked out the path for the interfaces in vEOS.

This is what I see in vEOS:

bash-4.2# curl localhost:6060/rest/Smash/counters/ethIntf/EtbaDut/current
{
    "counter": {
        "Ethernet1": {
            "counterRefreshTime": 0,
            "ethStatistics": {
...

bash-4.2# curl localhost:6060/rest/Kernel/proc/cpu/utilization/cpu/0
{
    "idle": 293338,
    "name": "0",
    "nice": 2965,
    "system": 1157399,
    "user": 353004,
    "util": 100
}

And this is what I see in cEOS. It seems this is not functional:

bash-4.2# curl localhost:6060/rest/Smash/counters/ethIntf
curl: (7) Failed to connect to localhost port 6060: Connection refused
bash-4.2# 
bash-4.2# 
bash-4.2# curl localhost:6060/rest/Kernel/proc/cpu/utilization/cpu/0
curl: (7) Failed to connect to localhost port 6060: Connection refused
bash-4.2# 

This is the file I have used and pushed to the switches:

$ cat veos4.23.json 
{
	"comment": "This is a sample configuration for vEOS 4.23",
	"subscriptions": [
		"/Smash/counters/ethIntf",
		"/Kernel/proc/cpu/utilization"
	],
	"metricPrefix": "eos",
	"metrics": {
		"intfCounter": {
			"path": "/Smash/counters/ethIntf/EtbaDut/current/(counter)/(?P<intf>.+)/statistics/(?P<direction>(?:in|out))(Octets|Errors|Discards)"
		},
		"intfPktCounter": {
			"path": "/Smash/counters/ethIntf/EtbaDut/current/(counter)/(?P<intf>.+)/statistics/(?P<direction>(?:in|out))(?P<type>(?:Ucast|Multicast|Broadcast))(Pkt)"
		},
                "totalCpu": {
                        "path": "/Kernel/proc/(cpu)/(utilization)/(total)/(?P<type>.+)"
                },
                "coreCpu": {
                        "path": "/Kernel/proc/(cpu)/(utilization)/(.+)/(?P<type>.+)"
                }
	}
}
$ scp veos4.23.json r1:/mnt/flash

Now you have to configure the switch to generate and send the data. In my case, I am using the MGMT vrf.

!
daemon TerminAttr
   exec /usr/bin/TerminAttr -disableaaa -grpcaddr MGMT/0.0.0.0:6042
   no shutdown
!
daemon octsdb
   exec /sbin/ip netns exec ns-MGMT /mnt/flash/octsdb -addr 192.168.249.4:6042 -config /mnt/flash/veos4.23.json -tsdb 10.128.0.4:4242
   no shutdown
!

TerminAttr is listening on 0.0.0.0:6042. "octsdb" is using the mgmt IP 192.168.249.4 (which is in the MGMT vrf) to connect to the influxdb container that is running on 10.128.0.4:4242.

Verify that things are running:

# From the switch
show agent octsdb log

# From InfluxDB container
$ sudo docker exec -it influxdb-tele bash
root@cad339ead2ee:/# influx -precision 'rfc3339'
Connected to http://localhost:8086 version 1.8.1
InfluxDB shell version: 1.8.1
> show databases
name: databases
name
----
grpc
_internal
> use grpc
Using database grpc
> show measurements
name: measurements
name
----
eos.corecpu.cpu.utilization._counts
eos.corecpu.cpu.utilization.cpu.0
eos.corecpu.cpu.utilization.total
eos.intfcounter.counter.discards
eos.intfcounter.counter.errors
eos.intfcounter.counter.octets
eos.intfpktcounter.counter.pkt
eos.totalcpu.cpu.utilization.total
> exit
root@cad339ead2ee:/# exit

Now we need to configure grafana, so first we create a data source connection to influxdb. Again, I struggled with the URL. Influx and grafana are two containers running on the same host: initially I was using localhost and it was failing. In the end I had to find out the IP assigned to the influxdb container and use that.
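
For the record, going by the docker network inspect output above, the data source settings ended up along these lines (values specific to this run and to the docker-compose file; the container IP can change across restarts, so pointing at the container name on the compose network is probably a more stable option):

URL:      http://172.18.0.3:8086
Database: grpc
User:     tac
Password: arista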

Now you can create a dashboard with panels.

For octet counters:

For packet types:

For CPU usage:

And this is the dashboard:

Keep in mind that I am not 100% sure my grafana panels are correct for CPU and Counters (PktCounter makes sense)

At some point, I want to try telemetry for LANZ via this telerista plugin.

Multicast + 5G

I have been cleaning up my email box and found some interesting stuff. This is from APNIC regarding a new approach to deploying multicast. Slides are on the nanog page (check Tuesday). At my former employer, we suffered traffic congestion several times after some famous games got new updates, so it is interesting that Akamai is trying to deploy inter-domain multicast on the internet. They have a massive network, and I guess they suffered with those updates, so this is an attempt to "digest" those spikes better. At minute 16 you can see the network changes required. It doesn't look like a quick/easy change, but it would be a great thing to happen.

And reading a nanog conversation about 5G, I realised that this technology promises high bandwidth (and that could remove the need for multicast). But still, shouldn't we have a smarter way to deliver the same content to eyeball networks?

From the nanog thread, there are several links to videos about 5G, like this one from Verizon that gives the vision from a big provider and its providers (not technical). This one is more technical, with 5G terms (I lost touch with telco terminology around early 4G). As well, I see Kubernetes mentioned quite often in 5G deployments. I guess that's something new to learn.

Vim + Golang

I am trying to learn a bit of golang (although sometimes I think I should try to master python and bash first..) with this udemy course.

The same way I have pyenv/virtualenv to create python environments, I want to do the same for golang. For that we have goenv.

Based on the goenv install instructions and an ex-colleague's snippet, this is my goenv snippet in .bashrc:

########################
# Go configuration
########################
#
# git clone -b v0.0.4 https://github.com/wfarr/goenv.git $HOME/.goenv
if [ ! -d "$HOME/.goenv" ]; then
    git clone https://github.com/syndbg/goenv.git $HOME/.goenv
fi

if [ -d "$HOME/.goenv"   ]
then
    export GOENV_ROOT="$HOME/.goenv"
    export PATH="$GOENV_ROOT/bin:$PATH"
    if  type "goenv" &> /dev/null; then
        eval "$(goenv init -)"
        # Add the version to my prompt
        __goversion (){
            if  type "goenv" &> /dev/null; then
                goenv_go_version=$(goenv version | sed -e 's/ .*//')
                printf $goenv_go_version
            fi
        }
        #PS1_GO="go:\$(__goversion) "
        export PS1="go:\$(__goversion)|$PS1"
        export PATH="$GOROOT/bin:$PATH"
        export PATH="$PATH:$GOPATH/bin"
    fi
fi

################## End GoLang #####################

From time to time, remember to go to ~/.goenv and do a "git pull" to get the latest versions of golang.

Ok, now that we can install any golang version, I was thinking about the equivalent of python virtualenv, but it seems it is not really needed in golang. At the moment I am a super beginner, so no rush on this.

And finally, as I try to use VIM for everything so I can keep learning, I want plugins for golang similar to the ones I use for python. So I searched and this one looks quite good: vim-go.

So I updated vundle config in .vimrc:

Plugin 'fatih/vim-go', { 'do': ':GoUpdateBinaries' }

Then install the new plugin from within VIM:

:PluginInstall

or:

:PluginUpdate

There is a good tutorial you can follow to learn the new commands.

I am happy enough with “GoRun”, “GoFmt”, “GoImports”, “GoTest”
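
One small tweak I have seen suggested for vim-go (its g:go_fmt_command option) is to run goimports instead of gofmt every time a file is saved:

let g:go_fmt_command = "goimports"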

Keep practising, Keep learning.

OOB

I was reading this blog and realised that OOB is something that is not talked about very often. Based on what I have seen in my career:

Design

You need to sell the idea that this is a must. Then you need to secure some budget. You don't need much:

1x switch

1x firewall

1x Internet access (if you have your own ASN and IP range, don't use it)

Keep it simple..

Most network kit (firewalls, routers, switches, PDUs, console servers, etc.) has 1x mgmt port and 1x console port, so all the console ports need to go to the console server. I guess most server vendors offer some OOB access (I just know Dell and HP), so all those go to the OOB switch.

If you have a massive network with hundreds of devices/servers, then you will need more OOB switches and console servers. You still need just one firewall and one internet connection. The blog comments on a spine-leaf OOB network; I guess that is the way to go for a massive network/DC.

Access to OOB

You need to be able to access it via your corporate network and from anywhere in the internet.

You need to be sure linux/windows/macs can VPN.

Use very strong passwords and keys.

You need to be sure the OOB firewall is quite tight on access: at the end of the day you only want to allow ssh to the console server and https to the iLOs/iDRACs. Nothing initiated internally should be able to go out to the internet.

Dependencies

Think of the worst-case scenario: your DNS server is down, your authentication is down.

You need to be sure you have local auth enabled on all devices for emergencies.

You need to work out some DNS service. Write the key IPs in the documentation?

Your IP transit has to be reliable. You don't need a massive pipe, but you need to be sure it is up.

Monitoring

You don't want to be in the middle of an outage and realise that your OOB is not functional. You need to be sure the ISP for the OOB is up and that the devices (OOB switch and OOB firewall) are functional at all times.

How to check the serial connections? conserver.com

Documentation

Another point that is frequently lost. You need to be sure people can find info about the OOB: how it is built and how to access it.

Training

At the end of the day, if you have a super OOB network but nobody knows how to connect to it and use it, then it is useless. Schedule routine checkups with the team to be sure everybody can use the OOB. This pays off when you get a call at 3am.

Diagram

Update

Funnily enough, I was watching NLNOG live today and there was a presentation about OOB with two different approaches: in-band out-of-band and pure out-of-band.

From the NTT side, I liked the comment about conserver.com to manage your serial connections. I will try to use it once I have access to a new network.