cEOS Netconf – Ncclient

I am still trying to play with and understand OpenConfig/YANG/NETCONF modelling. Initially I tried to use Ansible to configure EOS via NETCONF, but I didn't get very far 🙁

I found an Arista blog post that deals with NETCONF using the Python library ncclient.

This is my adapted code. Keep in mind that I think there is a typo/bug in the Arista blog in “def irbrpc(..)”: it should return “snetrpc” instead of “irbrpc”. This is the error I had:

Traceback (most recent call last):
  File "eos-ncc.py", line 171, in <module>
    main()
  File "eos-ncc.py", line 168, in main
    execrpc(hostip, uname, passw, rpc)
  File "eos-ncc.py", line 7, in execrpc
    rpcreply = conn.dispatch(to_ele(rpc))
  File "xxx/lib/python3.7/site-packages/ncclient/xml_.py", line 126, in to_ele
    return x if etree.iselement(x) else etree.fromstring(x.encode('UTF-8'), parser=_get_parser(huge_tree))
AttributeError: 'function' object has no attribute 'encode'

After a couple of prints in “ncclient/xml_.py” I could see that “x” was a function, but I couldn't understand why. Just by chance, I noticed the typo in the return.
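To make the failure mode obvious, here is a minimal sketch of that kind of typo (the function body is made up; only the names come from the blog). Returning the function instead of the string is exactly what makes ncclient blow up later with the “no attribute 'encode'” error:

def irbrpc():
    snetrpc = "<rpc>...</rpc>"  # the XML payload the function builds
    return irbrpc               # BUG: returns the function object itself
    # return snetrpc            # fix: return the XML string

# conn.dispatch(to_ele(irbrpc())) then hands a function to to_ele(),
# which fails at x.encode('UTF-8') with the AttributeError above.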

Also, I couldn't configure the VXLAN interface using XML as per the blog, so I had to remove that part and add it via an RPC call with CLI commands (“intfrpcvxlan_cli”). This is the error I had:

Traceback (most recent call last):
  File "eos-ncc.py", line 171, in <module>
    main()
  File "eos-ncc.py", line 168, in main
    execrpc(hostip, uname, passw, rpc)
  File "eos-ncc.py", line 7, in execrpc
    rpcreply = conn.dispatch(to_ele(rpc))
  File "xxx/lib/python3.7/site-packages/ncclient/manager.py", line 236, in execute
    huge_tree=self._huge_tree).request(*args, **kwds)
  File "xxx/lib/python3.7/site-packages/ncclient/operations/retrieve.py", line 239, in request
    return self._request(node)
  File "xxx/lib/python3.7/site-packages/ncclient/operations/rpc.py", line 348, in _request
    raise self._reply.error
ncclient.operations.rpc.RPCError: Request could not be completed because leafref at path "/interfaces/interface[name='Vxlan1']/name" had error "leaf value (Vxlan1) not present in reference path (../config/name)"

So in my script, I make two RPC calls and print the replies:

$ python eos-ncc.py
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:7b1be88e-36b7-4289-a0d2-396a0f21cf5e"><ok></ok></rpc-reply>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="urn:uuid:750a50be-3534-442b-bad4-2f8c916afd77"><ok></ok></rpc-reply>
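For context, the execrpc() helper that appears in the tracebacks looks roughly like this (a minimal sketch reconstructed from the stack frames; the port and connection options are my assumptions):

from ncclient import manager
from ncclient.xml_ import to_ele

def execrpc(hostip, uname, passw, rpc):
    # Open a NETCONF session to the switch and dispatch a raw RPC string
    with manager.connect(host=hostip, port=830, username=uname,
                         password=passw, hostkey_verify=False) as conn:
        rpcreply = conn.dispatch(to_ele(rpc))
        print(rpcreply)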

And this is what the logs show:

## first rpc call
2020-08-13T12:52:41.937079+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_ENTERED: User tomas entered configuration session session630614618267084 on NETCONF (172.27.0.1)
2020-08-13T12:52:42.302111+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_COMMIT_SUCCESS: User tomas committed configuration session session630614618267084 successfully on NETCONF (172.27.0.1)
2020-08-13T12:52:42.302928+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_EXITED: User tomas exited configuration session session630614618267084 on NETCONF (172.27.0.1)
2020-08-13T12:52:42.325878+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'HostInject' to start in role 'ActiveSupervisor'
2020-08-13T12:52:42.334151+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'ArpSuppression' to start in role 'ActiveSupervisor'
2020-08-13T12:52:42.369660+00:00 r01 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan100 (VLAN_100), changed state to up
2020-08-13T12:52:42.527568+00:00 r01 ProcMgr-worker: %PROCMGR-6-WORKER_WARMSTART: ProcMgr worker warm start. (PID=553)
2020-08-13T12:52:42.557663+00:00 r01 ProcMgr-worker: %PROCMGR-7-NEW_PROCESSES: New processes configured to run under ProcMgr control: ['ArpSuppression', 'HostInject']
2020-08-13T12:52:42.570208+00:00 r01 ProcMgr-worker: %PROCMGR-7-PROCESSES_ADOPTED: ProcMgr (PID=553) adopted running processes: (SharedSecretProfile, PID=1024) (Lldp, PID=832) (SlabMonitor, PID=555) (Pim, PID=1156) (MplsUtilLsp, PID=902) (Mpls, PID=903) (Isis, PID=1087) (PimBidir, PID=1164) (Igmp, PID=1172) (Acl, PID=920) (StaticRoute, PID=1060) (IgmpSnooping, PID=1030) (IpRib, PID=1064) (Stp, PID=939) (KernelNetworkInfo, PID=940) (Etba, PID=1073) (KernelMfib, PID=1139) (ConnectedRoute, PID=1076) (RouteInput, PID=1080) (EvpnrtrEncap, PID=1082) (McastCommon6, PID=956) (ConfigAgent, PID=702) (Fru, PID=703) (Launcher, PID=704) (Bgp, PID=1089) (McastCommon, PID=834) (SuperServer, PID=836) (OpenConfig, PID=839) (LacpTxAgent, PID=970) (AgentMonitor, PID=845) (Snmp, PID=848) (PortSec, PID=850) (Ira, PID=852) (IgmpHostProxy, PID=1146) (EventMgr, PID=862) (Sysdb, PID=607) (CapiApp, PID=866) (Arp, PID=995) (StpTxRx, PID=871) (KernelFib, PID=1000) (StageMgr, PID=700) (Lag, PID=876) (Qos, PID=1005) (L2Rib, PID=1008) (PimBidirDf, PID=1137) (Tunnel, PID=883) (PimBsr, PID=1150) (Msdp, PID=1142) (BgpCliHelper, PID=1067) (TopoAgent, PID=1017) (Aaa, PID=890) (StpTopology, PID=891) (Ebra, PID=1022) (ReloadCauseAgent, PID=1023)
2020-08-13T12:52:42.586632+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'HostInject' starting with PID=23450 (PPID=553) -- execing '/usr/bin/HostInject'
2020-08-13T12:52:42.604711+00:00 r01 ProcMgr-worker: %PROCMGR-7-WORKER_WARMSTART_DONE: ProcMgr worker warm start done. (PID=553)
2020-08-13T12:52:42.604786+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'ArpSuppression' starting with PID=23452 (PPID=553) -- execing '/usr/bin/ArpSuppression'
2020-08-13T12:52:42.749880+00:00 r01 HostInject: %AGENT-6-INITIALIZED: Agent 'HostInject' initialized; pid=23453
2020-08-13T12:52:43.102567+00:00 r01 ArpSuppression: %AGENT-6-INITIALIZED: Agent 'ArpSuppression' initialized; pid=23452

## second rpc call
2020-08-13T12:52:43.250995+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_ENTERED: User tomas entered configuration session session630615932519210 on NETCONF (172.27.0.1)
2020-08-13T12:52:43.465035+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_COMMIT_SUCCESS: User tomas committed configuration session session630615932519210 successfully on NETCONF (172.27.0.1)
2020-08-13T12:52:43.466480+00:00 r01 ConfigAgent: %SYS-5-CONFIG_SESSION_EXITED: User tomas exited configuration session session630615932519210 on NETCONF (172.27.0.1)
2020-08-13T12:52:43.472728+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'VxlanSwFwd' to start in role 'ActiveSupervisor'
2020-08-13T12:52:43.475470+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'Vxlan' to start in role 'ActiveSupervisor'
2020-08-13T12:52:43.674498+00:00 r01 ProcMgr-worker: %PROCMGR-6-WORKER_WARMSTART: ProcMgr worker warm start. (PID=553)
2020-08-13T12:52:43.701854+00:00 r01 ProcMgr-worker: %PROCMGR-7-NEW_PROCESSES: New processes configured to run under ProcMgr control: ['Vxlan', 'VxlanSwFwd']
2020-08-13T12:52:43.714484+00:00 r01 ProcMgr-worker: %PROCMGR-7-PROCESSES_ADOPTED: ProcMgr (PID=553) adopted running processes: (SharedSecretProfile, PID=1024) (Lldp, PID=832) (SlabMonitor, PID=555) (Pim, PID=1156) (MplsUtilLsp, PID=902) (Mpls, PID=903) (Isis, PID=1087) (PimBidir, PID=1164) (Igmp, PID=1172) (Acl, PID=920) (HostInject, PID=23450) (ArpSuppression, PID=23452) (StaticRoute, PID=1060) (IgmpSnooping, PID=1030) (IpRib, PID=1064) (Stp, PID=939) (KernelNetworkInfo, PID=940) (Etba, PID=1073) (KernelMfib, PID=1139) (ConnectedRoute, PID=1076) (RouteInput, PID=1080) (EvpnrtrEncap, PID=1082) (McastCommon6, PID=956) (ConfigAgent, PID=702) (Fru, PID=703) (Launcher, PID=704) (Bgp, PID=1089) (McastCommon, PID=834) (SuperServer, PID=836) (OpenConfig, PID=839) (LacpTxAgent, PID=970) (AgentMonitor, PID=845) (Snmp, PID=848) (PortSec, PID=850) (Ira, PID=852) (IgmpHostProxy, PID=1146) (EventMgr, PID=862) (Sysdb, PID=607) (CapiApp, PID=866) (Arp, PID=995) (StpTxRx, PID=871) (KernelFib, PID=1000) (StageMgr, PID=700) (Lag, PID=876) (Qos, PID=1005) (L2Rib, PID=1008) (PimBidirDf, PID=1137) (Tunnel, PID=883) (PimBsr, PID=1150) (Msdp, PID=1142) (BgpCliHelper, PID=1067) (TopoAgent, PID=1017) (Aaa, PID=890) (StpTopology, PID=891) (Ebra, PID=1022) (ReloadCauseAgent, PID=1023)
2020-08-13T12:52:43.731810+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'Vxlan' starting with PID=23482 (PPID=553) -- execing '/usr/bin/Vxlan'
2020-08-13T12:52:43.746053+00:00 r01 ProcMgr-worker: %PROCMGR-7-WORKER_WARMSTART_DONE: ProcMgr worker warm start done. (PID=553)
2020-08-13T12:52:43.746199+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'VxlanSwFwd' starting with PID=23484 (PPID=553) -- execing '/usr/bin/VxlanSwFwd'
2020-08-13T12:52:43.942447+00:00 r01 VxlanSwFwd: %AGENT-6-INITIALIZED: Agent 'VxlanSwFwd' initialized; pid=23487
2020-08-13T12:52:43.974473+00:00 r01 Vxlan: %AGENT-6-INITIALIZED: Agent 'Vxlan' initialized; pid=23485
2020-08-13T12:52:44.310150+00:00 r01 Launcher: %LAUNCHER-6-PROCESS_START: Configuring process 'Fhrp' to start in role 'AllSupervisors'
2020-08-13T12:52:44.512110+00:00 r01 ProcMgr-worker: %PROCMGR-6-WORKER_WARMSTART: ProcMgr worker warm start. (PID=553)
2020-08-13T12:52:44.538052+00:00 r01 ProcMgr-worker: %PROCMGR-7-NEW_PROCESSES: New processes configured to run under ProcMgr control: ['Fhrp']
2020-08-13T12:52:44.550918+00:00 r01 ProcMgr-worker: %PROCMGR-7-PROCESSES_ADOPTED: ProcMgr (PID=553) adopted running processes: (SharedSecretProfile, PID=1024) (Lldp, PID=832) (SlabMonitor, PID=555) (Pim, PID=1156) (MplsUtilLsp, PID=902) (Mpls, PID=903) (Isis, PID=1087) (PimBidir, PID=1164) (Igmp, PID=1172) (Acl, PID=920) (HostInject, PID=23450) (ArpSuppression, PID=23452) (StaticRoute, PID=1060) (IgmpSnooping, PID=1030) (IpRib, PID=1064) (Stp, PID=939) (KernelNetworkInfo, PID=940) (Vxlan, PID=23482) (Etba, PID=1073) (KernelMfib, PID=1139) (ConnectedRoute, PID=1076) (RouteInput, PID=1080) (EvpnrtrEncap, PID=1082) (McastCommon6, PID=956) (ConfigAgent, PID=702) (Fru, PID=703) (Launcher, PID=704) (Bgp, PID=1089) (McastCommon, PID=834) (SuperServer, PID=836) (OpenConfig, PID=839) (LacpTxAgent, PID=970) (AgentMonitor, PID=845) (Snmp, PID=848) (PortSec, PID=850) (Ira, PID=852) (IgmpHostProxy, PID=1146) (EventMgr, PID=862) (Sysdb, PID=607) (CapiApp, PID=866) (Arp, PID=995) (StpTxRx, PID=871) (KernelFib, PID=1000) (StageMgr, PID=700) (VxlanSwFwd, PID=23484) (Lag, PID=876) (Qos, PID=1005) (L2Rib, PID=1008) (PimBidirDf, PID=1137) (Tunnel, PID=883) (PimBsr, PID=1150) (Msdp, PID=1142) (BgpCliHelper, PID=1067) (TopoAgent, PID=1017) (Aaa, PID=890) (StpTopology, PID=891) (Ebra, PID=1022) (ReloadCauseAgent, PID=1023)
2020-08-13T12:52:44.565230+00:00 r01 ProcMgr-worker: %PROCMGR-7-WORKER_WARMSTART_DONE: ProcMgr worker warm start done. (PID=553)
2020-08-13T12:52:44.565339+00:00 r01 ProcMgr-worker: %PROCMGR-6-PROCESS_STARTED: 'Fhrp' starting with PID=23491 (PPID=553) -- execing '/usr/bin/Fhrp'
2020-08-13T12:52:44.720343+00:00 r01 Fhrp: %AGENT-6-INITIALIZED: Agent 'Fhrp' initialized; pid=23493

I would still like to be able to get the full config via NETCONF: just copy/paste the RPC into the SSH shell (like with Juniper), or maybe use YDK like this. And, keep dreaming, to be able to fully configure the switch via NETCONF/Ansible.
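At least the first part should just be the standard <get-config> operation; a minimal ncclient sketch (host and credentials are placeholders, and I haven't verified how complete the reply is on cEOS):

from ncclient import manager

with manager.connect(host="192.168.249.4", port=830, username="tomas",
                     password="xxx", hostkey_verify=False) as conn:
    reply = conn.get_config(source="running")  # full running datastore
    print(reply)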

Streaming vEOS Telemetry

One thing I wanted to get my hands dirty with is telemetry. I found this blog post from Arista and tried to use it with vEOS.

As per the blog, I had to install Go and Docker in my VM (Debian) on GCP.

I installed Docker via aptitude:

# aptitude install docker.io
# aptitude install docker-compose
# aptitude install telnet

I installed Go via goenv, by updating .bashrc:

########################
# Go configuration
########################
#
# git clone -b v0.0.4 https://github.com/wfarr/goenv.git $HOME/.goenv
if [ ! -d "$HOME/.goenv" ]; then
    git clone https://github.com/syndbg/goenv.git $HOME/.goenv
fi
#export GOPATH="$HOME/storage/golang/go"
#export GOBIN="$HOME/storage/golang/go/bin"
#export PATH="$GOPATH/bin:$PATH"
if [ -d "$HOME/.goenv"   ]
then
    export GOENV_ROOT="$HOME/.goenv"
    export PATH="$GOENV_ROOT/bin:$PATH"
    if  type "goenv" &> /dev/null; then
        eval "$(goenv init -)"
        # Add the version to my prompt
        __goversion (){
            if  type "goenv" &> /dev/null; then
                goenv_go_version=$(goenv version | sed -e 's/ .*//')
                printf $goenv_go_version
            fi
        }
        export PS1="go:\$(__goversion)|$PS1"
        export PATH="$GOROOT/bin:$PATH"
        export PATH="$PATH:$GOPATH/bin"
    fi
fi

################## End GoLang #####################

Then I started a new bash session to trigger the installation of goenv, and installed a Go version:

$ goenv install 1.14.6
$ goenv global 1.14.6

Now we can prepare the Docker containers for InfluxDB and Grafana:

mkdir telemetry
cd telemetry
mkdir influxdb_data
mkdir grafana_data

cat docker-compose.yaml 
version: "3"

services:
 influxdb:
   container_name: influxdb-tele
   environment:
     INFLUXDB_DB: grpc
     INFLUXDB_ADMIN_USER: "admin"
     INFLUXDB_ADMIN_PASSWORD: "arista"
     INFLUXDB_USER: tac
     INFLUXDB_USER_PASSWORD: arista
     INFLUXDB_RETENTION_ENABLED: "false"
     INFLUXDB_OPENTSDB_0_ENABLED: "true"
     INFLUXDB_OPENTSDB_BIND_ADDRESS: ":4242"
     INFLUXDB_OPENTSDB_DATABASE: "grpc"
   ports:
     - '8086:8086'
     - '4242:4242'
     - '8083:8083'
   networks:
     - monitoring
   volumes:
     - influxdb_data:/var/lib/influxdb
   command:
     - '-config'
     - '/etc/influxdb/influxdb.conf'
   image: influxdb:latest
   restart: always

 grafana:
   container_name: grafana-tele
   environment:
     GF_SECURITY_ADMIN_USER: admin
     GF_SECURITY_ADMIN_PASSWORD: arista
   ports:
     - '3000:3000'
   networks:
     - monitoring
   volumes:
     - grafana_data:/var/lib/grafana
   image: grafana/grafana
   restart: always

networks:
 monitoring:

volumes:
 influxdb_data: {}
 grafana_data: {}

Now we should be able to start the containers:

sudo docker-compose up -d    // start containers
sudo docker-compose down -v  // for stopping containers

sudo docker ps -a
CONTAINER ID   IMAGE             COMMAND                  CREATED       STATUS       PORTS                                                                     NAMES
cad339ead2ee   influxdb:latest   "/entrypoint.sh -con…"   5 hours ago   Up 5 hours   0.0.0.0:4242->4242/tcp, 0.0.0.0:8083->8083/tcp, 0.0.0.0:8086->8086/tcp   influxdb-tele
ef88acc47ee3   grafana/grafana   "/run.sh"                5 hours ago   Up 5 hours   0.0.0.0:3000->3000/tcp                                                    grafana-tele

sudo docker network ls
NETWORK ID     NAME                   DRIVER   SCOPE
fe19e7876636   bridge                 bridge   local
0a8770578f3f   host                   host     local
6e128a7682f1   none                   null     local
3d27d0ed3ab3   telemetry_monitoring   bridge   local

sudo docker network inspect telemetry_monitoring
[
    {
        "Name": "telemetry_monitoring",
        "Id": "3d27d0ed3ab3b0530441206a128d849434a540f8e5a2c109ee368b01052ed418",
        "Created": "2020-08-12T11:22:03.05783331Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "cad339ead2eeb0b479bd6aa024cb2150fb1643a0a4a59e7729bb5ddf088eba19": {
                "Name": "influxdb-tele",
                "EndpointID": "e3c7f853766ed8afe6617c8fac358b3302de41f8aeab53d429ffd1a28b6df668",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "ef88acc47ee30667768c5af9bbd70b95903d3690c4d80b83ba774b298665d15d": {
                "Name": "grafana-tele",
                "EndpointID": "3fe2b424cbb66a93e9e06f4bcc2e7353a0b40b2d56777c8fee8726c96c97229a",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "monitoring",
            "com.docker.compose.project": "telemetry"
        }
    }
]

Now we have to generate the octsdb binary and copy it to the switches, as per the instructions:

$ go get github.com/aristanetworks/goarista/cmd/octsdb
$ cd $GOPATH/src/github.com/aristanetworks/goarista/cmd/octsdb

$ GOOS=linux GOARCH=386 go build   // I used this one
$ GOOS=linux GOARCH=amd64 go build // if you use EOS 64 Bits

$ scp octsdb user@SWITCH_IP:/mnt/flash/

An important thing is the configuration file for octsdb. I struggled to get a config file that gave me CPU and interface counters: all the examples are based on hardware platforms, but I am using containers/VMs. Using this other blog post, I worked out the path for the interfaces in vEOS.

This is what I see in vEOS:

bash-4.2# curl localhost:6060/rest/Smash/counters/ethIntf/EtbaDut/current
{
    "counter": {
        "Ethernet1": {
            "counterRefreshTime": 0,
            "ethStatistics": {
...

bash-4.2# curl localhost:6060/rest/Kernel/proc/cpu/utilization/cpu/0
{
    "idle": 293338,
    "name": "0",
    "nice": 2965,
    "system": 1157399,
    "user": 353004,
    "util": 100
}

And this is what I see in cEOS; it seems this is not functional there:

bash-4.2# curl localhost:6060/rest/Smash/counters/ethIntf
curl: (7) Failed to connect to localhost port 6060: Connection refused
bash-4.2# 
bash-4.2# 
bash-4.2# curl localhost:6060/rest/Kernel/proc/cpu/utilization/cpu/0
curl: (7) Failed to connect to localhost port 6060: Connection refused
bash-4.2# 

This is the file I have used and pushed to the switches:

$ cat veos4.23.json 
{
    "comment": "This is a sample configuration for vEOS 4.23",
    "subscriptions": [
        "/Smash/counters/ethIntf",
        "/Kernel/proc/cpu/utilization"
    ],
    "metricPrefix": "eos",
    "metrics": {
        "intfCounter": {
            "path": "/Smash/counters/ethIntf/EtbaDut/current/(counter)/(?P<intf>.+)/statistics/(?P<direction>(?:in|out))(Octets|Errors|Discards)"
        },
        "intfPktCounter": {
            "path": "/Smash/counters/ethIntf/EtbaDut/current/(counter)/(?P<intf>.+)/statistics/(?P<direction>(?:in|out))(?P<type>(?:Ucast|Multicast|Broadcast))(Pkt)"
        },
        "totalCpu": {
            "path": "/Kernel/proc/(cpu)/(utilization)/(total)/(?P<type>.+)"
        },
        "coreCpu": {
            "path": "/Kernel/proc/(cpu)/(utilization)/(.+)/(?P<type>.+)"
        }
    }
}
$ scp veos4.23.json r1:/mnt/flash
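Before pushing the file, the path regexes can be sanity-checked locally (a quick sketch; the sample path is taken from the vEOS REST output above, and Python's named groups behave like the Go RE2 ones octsdb uses):

import re

intf_counter = re.compile(
    r"/Smash/counters/ethIntf/EtbaDut/current/(counter)/(?P<intf>.+)"
    r"/statistics/(?P<direction>(?:in|out))(Octets|Errors|Discards)")

sample = "/Smash/counters/ethIntf/EtbaDut/current/counter/Ethernet1/statistics/inOctets"
match = intf_counter.match(sample)
if match:
    # -> intf=Ethernet1 direction=in
    print("intf=%s direction=%s" % (match.group("intf"), match.group("direction")))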

Now you have to configure the switch to generate and send the data. In my case, I am using the MGMT VRF.

!
daemon TerminAttr
   exec /usr/bin/TerminAttr -disableaaa -grpcaddr MGMT/0.0.0.0:6042
   no shutdown
!
daemon octsdb
   exec /sbin/ip netns exec ns-MGMT /mnt/flash/octsdb -addr 192.168.249.4:6042 -config /mnt/flash/veos4.23.json -tsdb 10.128.0.4:4242
   no shutdown
!

TerminAttr is listening on 0.0.0.0:6042. “octsdb” uses the mgmt IP 192.168.249.4 (which is in the MGMT VRF) to connect to TerminAttr, and sends the data to the InfluxDB container listening on 10.128.0.4:4242.
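If datapoints don't show up, a plain TCP connect toward the OpenTSDB listener rules out connectivity through the MGMT VRF (a small sketch with the IP/port from the config above; on the switch it has to run inside the MGMT namespace, e.g. via “ip netns exec ns-MGMT”):

import socket

# TCP check toward the OpenTSDB listener of the InfluxDB container
sock = socket.create_connection(("10.128.0.4", 4242), timeout=5)
print("OpenTSDB port reachable")
sock.close()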

Verify that things are running:

# From the switch
show agent octsdb log

# From InfluxDB container
$ sudo docker exec -it influxdb-tele bash
root@cad339ead2ee:/# influx -precision 'rfc3339'
Connected to http://localhost:8086 version 1.8.1
InfluxDB shell version: 1.8.1
> show databases
name: databases
name
----
grpc
_internal
> use grpc
Using database grpc
> show measurements
name: measurements
name
----
eos.corecpu.cpu.utilization._counts
eos.corecpu.cpu.utilization.cpu.0
eos.corecpu.cpu.utilization.total
eos.intfcounter.counter.discards
eos.intfcounter.counter.errors
eos.intfcounter.counter.octets
eos.intfpktcounter.counter.pkt
eos.totalcpu.cpu.utilization.total
> exit
root@cad339ead2ee:/# exit
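The same check can be done from the host via the InfluxDB HTTP API (a small sketch; it assumes the 8086 port mapping from the docker-compose file above and that HTTP auth is not enforced):

import json
import urllib.parse
import urllib.request

# Ask for the latest datapoints of one of the measurements listed above
query = 'SELECT * FROM "eos.intfcounter.counter.octets" ORDER BY time DESC LIMIT 5'
url = "http://localhost:8086/query?" + urllib.parse.urlencode({"db": "grpc", "q": query})
with urllib.request.urlopen(url) as response:
    print(json.dumps(json.load(response), indent=2))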

Now we need to configure Grafana, so first we create a data source pointing to InfluxDB. Again, I struggled with the URL. InfluxDB and Grafana are two containers running on the same host; initially I was using localhost and it was failing. In the end I had to find out the IP assigned to the InfluxDB container and use that (since both containers sit on the same Docker network, the container name influxdb-tele should also resolve).
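A shorter way to grab just that IP, instead of reading the whole network inspect output (a one-liner sketch; it may need sudo like the other docker commands):

import subprocess

# Same value as "IPv4Address" in the docker network inspect output above
fmt = "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}"
ip = subprocess.run(["docker", "inspect", "-f", fmt, "influxdb-tele"],
                    capture_output=True, text=True, check=True).stdout.strip()
print(ip)  # e.g. 172.18.0.3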

Now you can create a dashboard with panels for octet counters, packet types, and CPU usage.

Keep in mind that I am not 100% sure my Grafana panels are correct for CPU and counters (the packet counter one makes sense).

At some point, I want to try telemetry for LANZ via this telerista plugin.

Multicast + 5G

I have been cleaning up my mailbox and found some interesting stuff. This one is from APNIC, about a new approach to deploying multicast; the slides are on the NANOG page (check Tuesday). At my former employer, we suffered traffic congestion several times after some famous games got new updates, so it is interesting that Akamai is trying to deploy inter-domain multicast on the internet. They have a massive network, and I guess they suffered with those updates too, so this is an attempt to better “digest” those spikes. At minute 16 you can see the network changes required. It doesn't look like a quick or easy change, but it would be a great thing to happen.

And reading a NANOG conversation about 5G, I realised that this technology promises high bandwidth (which could remove the need for multicast). But shouldn't we still have a smarter way to deliver the same content to eyeball networks?

From the NANOG thread, there are several links to videos about 5G, like this one from Verizon that gives the vision of a big provider and its vendors (not technical). This one is more technical, with 5G terms (I lost touch with telco terminology back in the early 4G days). I also see Kubernetes mentioned in 5G deployments quite often. I guess that's something new to learn.