VMware Co-stop / LPM in hardware

This is a very interesting article about how Longest Prefix Matching is done in network chips. I remember reading about bloom filters in some Cloudflare blog, but I didn't think they would be used in network chips too. As well, I had forgotten how critical LPM is in networking.

Lately I had to deal with some performance issues with an application running in a VM. I am quite a noob regarding virtualization and always assumed the bigger the VM the better (very masculine thing I guess…). But a virtualization expert at work explained the issues with that assumption to me with this link. I learnt a lot from it (still a noob though). But I agree: I see most vendors asking for crazy requirements when offering products to run in a VM…. and that looks like it kills the very idea of having a virtualization environment, because that VM ends up requiring a dedicated server…. So right-sizing your product/VM is very important. I agree with the statement that vendors don't really do load testing for their VM offerings, and the high requirements are an excuse to "avoid" problems from customers.

CCNA DevNet Notes

1) Python Requests status code checks:

r.status_code == requests.codes.ok
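
A minimal sketch of how I would use it (the URL is just a placeholder):

import requests

r = requests.get('https://api.example.com/items')  # placeholder URL
if r.status_code == requests.codes.ok:  # requests.codes.ok == 200
    print(r.json())
else:
    print('Request failed:', r.status_code)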

2) Docker publish ports:

$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash

This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also specify udp and sctp ports. The Docker User Guide explains in detail how to manipulate ports in Docker.
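
If I remember correctly, you can verify the mapping afterwards with "docker port" (the container name here is hypothetical):

$ docker port my_container
8080/tcp -> 127.0.0.1:80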

3) HTTP status codes:

1xx informational
2xx Successful
 201 created
 204 no content (request processed by the server, no body returned)
3xx Redirect
 301 moved permanently - future requests should be directed to the given URI
 302 found - requested resource resides temporarily under a different URI
 304 not modified
4xx Client Error
 400 bad request
 401 unauthorized (user not authenticated or authentication failed)
 403 forbidden (need permissions)
 404 not found
5xx Server Error
 500 internal server err - generic error message
 501 not implemented
 503 service unavailable
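
Related Python note: with requests you can let the library raise an exception for the 4xx/5xx families instead of checking codes manually (URL is a placeholder):

import requests

r = requests.get('https://api.example.com/items')
try:
    r.raise_for_status()  # raises requests.exceptions.HTTPError on 4xx/5xx
except requests.exceptions.HTTPError as err:
    print('HTTP error:', err)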

4) Python dictionary filters:

my_dict = {8:'u',4:'t',9:'z',10:'j',5:'k',3:'s'}

# filter(function,iterables)
new_dict = dict(filter(lambda val: val[0] % 3 == 0, my_dict.items()))

print("Filter dictionary:",new_filt)

5) HTTP Authentication

Basic: For "Basic" authentication the credentials are constructed by first combining the username and the password with a colon (aladdin:opensesame), and then by encoding the resulting string in base64 (YWxhZGRpbjpvcGVuc2VzYW1l).

Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l

---
import base64

auth_type = 'Basic'
creds = '{}:{}'.format(user, password)  # 'pass' is a reserved keyword in Python
creds_b64 = base64.b64encode(creds.encode()).decode()  # b64encode needs bytes
header = {'Authorization': '{} {}'.format(auth_type, creds_b64)}  # note the space
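
As a side note, the requests library can build this header for you (a sketch; the URL is a placeholder):

import requests

r = requests.get('https://api.example.com', auth=(user, password))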

Bearer:

Authorization: Bearer <TOKEN>
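
Building that header in Python is a one-liner (token is whatever token you were given):

header = {'Authorization': 'Bearer {}'.format(token)}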

6) “diff -u file1.txt file2.txt”. link1 link2

The unified format is an option you can add to display output without any redundant context lines

$ diff -u file1.txt file2.txt
--- file1.txt   2018-01-11 10:39:38.237464052 +0000
+++ file2.txt   2018-01-11 10:40:00.323423021 +0000
@@ -1,4 +1,4 @@
 cat
-mv
-comm
 cp
+diff
+comm
  • The first file is indicated by ---
  • The second file is indicated by +++
  • The first two lines of this output show us information about file 1 and file 2: the file name, modification date, and modification time of each file, one per line.
  • The lines below display the content of the files and how to modify file1.txt to make it identical to file2.txt.
  • - (minus): the line needs to be deleted from the first file.
  • + (plus): the line needs to be added to the first file.
  • The next line has two at signs @@ followed by a line range from the first file (in our case lines 1 through 4, separated by a comma) prefixed by "-", then a space, then a line range from the second file prefixed by "+", and finally two more at signs @@. The file content that follows tells us which lines remain unchanged and which lines need to be added or deleted (indicated by the symbols) in file 1 to make it identical to file 2.

7) Python Testing: Assertions

.assertEqual(a, b)	a == b
.assertTrue(x)	        bool(x) is True
.assertFalse(x)	        bool(x) is False
.assertIs(a, b)	        a is b
.assertIsNone(x)	x is None
.assertIn(a, b)	        a in b
.assertIsInstance(a, b)	isinstance(a, b)

*** .assertIs(), .assertIsNone(), .assertIn(), and .assertIsInstance() all have opposite methods, named .assertIsNot(), and so forth.
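
A minimal sketch of how these look inside a test case:

import unittest

class TestExample(unittest.TestCase):
    def test_membership(self):
        codes = [200, 301, 404]
        self.assertIn(404, codes)      # 404 in codes
        self.assertNotIn(500, codes)   # the opposite method

    def test_types(self):
        self.assertIsInstance({}, dict)  # isinstance({}, dict)
        self.assertIsNone(None)          # None is None

if __name__ == '__main__':
    unittest.main()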

ARP Storms – EVPN

We have had an issue with broadcast storms in our network. Checking the CoPP setup in the switches, we could see massive ARP drops. This is a good link on how to check CoPP drops in NXOS.

N9K# show copp status
N9K# show policy-map interface control-plane | grep 'dropped [1-9]' | diff

Having so many ARP drops in CoPP is bad because very likely legitimate ARP requests are being dropped too.

Initially I thought it was related to ARP problems in EVPN like this link. But after taking a packet capture in a switch from an interface connected to a server, I could see that over 90% of the ARP traffic coming from the server was not getting a reply…. Checking different switches, I could see the same pattern all over the place.

So why was the server making so many ARP requests?

After some time, I managed to get help from a sysadmin with access to the servers, so we could troubleshoot the problem.

But how do you find the process that is triggering the ARP requests? I didn't make the effort to think about it and started to search for an easy answer. This post gave me a clue.

"ss" does show you connections that have not yet been resolved by ARP. They are in state SYN-SENT. The problem is that such a state is only held for a few seconds, then the connection fails, so you may not see it. You could try rapid polling for it with:

while ! ss -p state syn-sent | grep 1.1.1.100; do sleep .1; done

Somehow I couldn't see anything with "ss", so I tried netstat, as it also shows you the status of the TCP connection (I wonder what would happen if the connection was UDP instead???)

Initially I tried "netstat -a" and it was too slow to show me the "SYN-SENT" status.

Shame on me, I had to search for how to get it to show the ports quickly here:

watch netstat -ntup | grep -i syn_sent | awk '{print $4,$5,$6,$7}'

It was slow because it was trying to resolve all IPs to hostnames…. :facepalm. That is fixed with "-n" (no-resolve).

Anyway, with the command above, I finally managed to see the processes that were in "SYN_SENT" state.

This is not the real thing, just an example:

#  netstat -ntup | grep -i syn_sent 
tcp        0      1 192.168.1.203:35460     4.4.4.4:23              SYN_SENT    98690/telnet        
# 

We could see that the destination port was TCP 179, so something in the node was trying to talk BGP! They were "bird" processes. As the node belonged to a kubernetes cluster, we could see a calico container as the CNI. Then we connected to the container and tried to check the bird config. We could clearly see that the IPs that didn't get an ARP reply were configured there.
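
As a note for future me (an assumption on my side, I still need to learn Calico properly): I believe the BGP sessions that bird maintains can be checked from the node with calicoctl:

$ calicoctl node status   # lists the bird BGP peers and their state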

So in summary, basic TCP:

TCP is L4; it then goes down to L3 (IP). To get to L2, you need to know the MAC address for the destination IP, and that is what triggers the ARP request. Once the MAC is learned, it is cached for the next request. For that reason the first time you make a connection it is slower (ping, traceroute, etc.).
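
On Linux you can check the cached entries with "ip neigh" (or the older "arp -a"); the output below is a made-up example:

$ ip neigh show
192.168.1.1 dev wlp2s0 lladdr xx:xx:xx:yy:yy:zz REACHABLE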

Now we need to work out why the calico/bird config is that way, fix it to only use the IPs of real BGP speakers, and then verify the ARP storms stop.

Hopefully, I will learn a bit about calico.

Notes for UDP:

If I generate a UDP connection to a non-existing IP:

$ nc -u 4.4.4.4 4000

netstat tells me the UDP connection is established, and I can't see anything in the ARP table for an external IP; for an internal IP (in my own network) I can see an incomplete entry. Why?

#  netstat -ntup | grep -i 4.4.4.4
udp        0      0 192.168.1.203:42653     4.4.4.4:4000            ESTABLISHED 102014/nc           
# 
#  netstat -ntup | grep -i '192.168.1.2:'
udp        0      0 192.168.1.203:44576     192.168.1.2:4000        ESTABLISHED 102369/nc           
# 
#
# arp -a
? (192.168.1.2) at <incomplete> on wlp2s0
something.mynet (192.168.1.1) at xx:xx:xx:yy:yy:zz [ether] on wlp2s0
# 

# tcpdump -i wlp2s0 host 4.4.4.4
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on wlp2s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
23:35:45.081819 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:45.081850 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:46.082075 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:47.082294 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:48.082504 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
^C
5 packets captured
5 packets received by filter
0 packets dropped by kernel
# 
  • UDP is stateless so we can't have states…. netstat is always going to show "ESTABLISHED" for a connected UDP socket. Basic TCP/UDP.
  • When trying to open a UDP connection to an external IP, you need to "route": my laptop knows it needs to send the UDP traffic to the default gateway, so when getting to L2, the destination MAC address is not 4.4.4.4's but the default gateway's. BASIC ROUTING !!!! For that reason you don't see 4.4.4.4 in the ARP table.
  • When trying to open a UDP connection to a local IP, my laptop knows it is in the same network, so it should be able to find the destination MAC address using ARP.

TCP Asymmetric

An issue that had caused several outages and needed an urgent fix got escalated to me recently.

For different reasons, we had asymmetric routing in SITE-A. The normal flow is the green arrow. During the asymmetric routing, the flow is the red line. Routing-wise, things should work. BUT we have firewalls in the path. The firewalls were configured to allow asymmetric connections (I was told). As far as I could see in the config and logs, nothing was dropped in the firewalls during the issue.

So first thing, I fixed the asymmetric routing so it didn't happen again. It took me a while to come up with the solution (and it was quite simple) as I had to properly understand the routing before and during the issue. The diagram is quite simplified, at the end of the day.

So during the maintenance window when I applied the fix for the asymmetric routing, I managed to take some traces on the firewalls, as I was trying to understand where the traffic was dropped/lost during the asymmetric scenario. As well, I was not very familiar with several parts of the network and the monitoring, so I didn't know which links were already tapped. Once I was happy with the routing fix, I took a look at the traces. At a high level, I could see the return traffic leaving FW1 and leaving DC1-SW1. Based on that, I started to think that the firewalls were fine…..

In another maintenance window, I tried to take more logs in different parts of the network, and I could clearly see the traffic reaching A-SW1. As I ran out of time and missed tapping some links, I couldn't carry on.

So based on the second maintenance window, the issue had to be inside SITE-A. Somehow it didn't make sense. I checked that I didn't have uRPF enabled. The rest was pure L2, so it couldn't see the L3…

So in the third maintenance window, I got all my debugging tools out to verify whether any network kit was dropping the traffic in SITE-A…. and it was useless. Then I realized that I could run a tcpdump on the client IP1 I was using for testing, and I could see some return traffic!!!!

So, I was just shocked. I didn't get it. It didn't make sense.

Somehow, I reviewed the tcp captures I had taken on each interface of both firewalls. I was trying to get back to basics.

I was assuming the TCP handshake was completed properly. After paying a bit of attention to the client logs… I could see the TCP handshake completed. And I could see the HTTP GET getting to and leaving DC2-FW…. so why was the server IP2 not answering!!!!???

So back to the TCP handshake and the firewall captures, I compared step by step. Somehow, I had missed that the TCP ACK from client IP1 was reaching DC2-FW…. but it was not leaving DC2-FW!!!! Even worse, the HTTP GET was actually crossing the DC2-FW !!!

SLAP IN THE FACE!!!

This is the TCP handshake. This is networking 101…..

The TCP state-machine in client and server during the asymmetric scenario

So I was assuming that because the client was sending the HTTP GET, the TCP handshake had been completed at both ends!!!!

Now it made sense why I was seeing TCP SYN-ACK retransmissions from the server IP2…. BECAUSE the TCP ACK from client IP1 never reached it.

For that reason server IP2 never answered the HTTP GET: from its end, the TCP handshake was never completed.

I banged my head on the table several times. I "saw" this during the first maintenance window when I took the tcpdump on the firewalls, BUT I didn't pay attention to the basic details.

I trusted too much in having a Wireshark trace because it is more visual and shows more info, but the clues were there all along in the tcpdump from the firewalls that I didn't bother to pay full attention to.

At least I found out where and why the connections failed during the asymmetric routing scenario. A firewall upgrade did the job.

So all fixed.

Lessons learned:

  • without a proper foundation, you can't build knowledge (TCP handshake state in client and server)
  • when things don't make sense, get back to basics (TCP handshake)
  • get the most out of the tools at hand (tcpdump: the PSH packets were the HTTP GET !!!!)

Smallest Audience – TCPLS – ByPass CDN WAF – Packet Generator

A bit of a mix of things:

Smallest (viable) audience: Specificity is the way

TCPLS: I know about QUIC (just the big picture), but this TCP+TLS implementation looks interesting. Although I am not sure their test is that meaningful; a more "real life" example would be ideal (packet loss, jitter, etc.).

ByPass CDN: I am not well versed in Cloud services, but this looks like an interesting article about CDNs and WAFs from a security perspective. It is the typical example of thinking outside the box: why can't the attacker be a "customer" of the CDN too???

Packet Generator – BNG Blaster: I knew about TRex but never had the chance to use it, and I know how expensive the commercial solutions are (shocking!), so this looks like a nice tool.

BiDir + SR4

This week at work I had to deal with a couple of physical connections that were new to me. I am more familiar with LC connectors, and MTP is still something I am getting used to. So I learned about BiDir optics (LC), something I had never heard of, and the difference from SR4 optics (MTP). The advantage of using BiDir is that you can keep your current fiber plant in place and upgrade from 10G to 40G without extra cost. But with SR4 you need to deploy MTP/MPO cabling, although you can do breakouts (40G to 4x10G or 100G to 4x25G). This is the blog entry I found quite useful for comparing both types. And this one just for BiDir.

As well, somewhere I read (can't find the link) that SR4 is more common than BiDir, because BiDir is proprietary and so has compatibility issues.

This is a good link about cables and connectors.

SSD + Breakout Cables

I am not as up to speed with storage as I would like to be, but this blog gives you a quick intro to the different types of SSD.

As well, this week I struggled a bit with a 100G port that had to connect as 4x25G to some servers. It had been a while since I dealt with something similar. Initially I thought I had to wait for the servers to power up, so that once the link on the other side came up, the switch would recognize the port as 25G and create the necessary interfaces. But I had eth1/1 only. I was expecting eth1/1, eth1/2, eth1/3 and eth1/4… The three extra interfaces were created ONLY after I "forced" the speed setting to "25G" on eth1/1….. I hope I remember this for next time.
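
For the record, this is roughly the change that triggered the breakout (a sketch from memory assuming Arista EOS syntax; the exact command depends on the platform):

interface Ethernet1/1
   speed forced 25gfull
! after this, eth1/2, eth1/3 and eth1/4 were created automatically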

Containerlab

Some months ago I read about containerlab (and here). It looked like a very simple way to build labs quickly, easily and multi-vendor. I have used gns3 and docker-topo for my labs in the past, but somehow I liked the documentation and the idea of trying to mix cEOS with FRR images.

As I have grown more comfortable with Arista over the years and I had some images on my laptop, I installed the software (no major issues following the instructions for Debian) and tried the example for a cEOS lab.

It didn't work. The containers started, but I didn't get to the Arista CLI, just a bash shell, and I couldn't see anything running on them… I remembered some Arista-specific processes, but none were there. In the following weeks, I tried newer cEOS images but no luck, always stuck at the same point. But in the end, I never had enough time (or put in the effort and interest) to troubleshoot the problem properly.

For too many months, I haven't had the chance (I could write a post with excuses) to do much tech self-learning (I could write a book of all the things I would like to learn); it was easier to cook or read.

But finally, this week, talking with a colleague at work, he mentioned containerlab was great and that he used it. I commented that I had tried and failed. With that, I finally found a bit of interest and time today to give it another go.

Firstly, I made sure I was running the latest containerlab version and that my cEOS was recent enough (4.26.0F), and got back to basics: check T-H-E logs!

One thing I noticed after paying attention to the startup logs was a warning about lack of memory on my laptop. So I closed several applications and tried again. My lab still looked stuck at the same point:

go:1.16.3|py:3.7.3|tomas@athens:~/storage/technology/containerlabs/ceos$ sudo containerlab deploy --topo ceos-lab1.yaml 
INFO[0000] Parsing & checking topology file: ceos-lab1.yaml 
INFO[0000] Creating lab directory: /home/tomas/storage/technology/containerlabs/ceos/clab-ceos 
INFO[0000] Creating docker network: Name='clab', IPv4Subnet='172.20.20.0/24', IPv6Subnet='2001:172:20:20::/64', MTU='1500' 
INFO[0000] config file '/home/tomas/storage/technology/containerlabs/ceos/clab-ceos/ceos1/flash/startup-config' for node 'ceos1' already exists and will not be generated/reset 
INFO[0000] Creating container: ceos1                    
INFO[0000] config file '/home/tomas/storage/technology/containerlabs/ceos/clab-ceos/ceos2/flash/startup-config' for node 'ceos2' already exists and will not be generated/reset 
INFO[0000] Creating container: ceos2                    
INFO[0003] Creating virtual wire: ceos1:eth1 <--> ceos2:eth1 
INFO[0003] Running postdeploy actions for Arista cEOS 'ceos2' node 
INFO[0003] Running postdeploy actions for Arista cEOS 'ceos1' node 

I did a bit of searching about containerlab and cEOS; for example, I found this blog where the author started up a cEOS lab successfully, and I could see his logs!

So it was clear my containers were stuck. I searched for that message: "Running postdeploy actions for Arista cEOS".

I didn't see anything promising, just links back to the main containerlab cEOS page. I read it again and noticed something at the bottom of the page regarding a known issue…. So I checked if it applied to me (although I doubted it, as it looked like it was for CentOS…) and indeed it applied to me too!

$ docker logs clab-ceos-ceos2
Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted

So I started to look for info about what cgroup is: link1, link2

First I wanted to check which cgroup version I was running. With this link, I could see that, based on my kernel version, I should have cgroup2:

$ grep cgroup /proc/filesystems
nodev	cgroup
nodev	cgroup2

$ ls /sys/fs/cgroup/memory/
cgroup.clone_children           memory.kmem.tcp.limit_in_bytes      memory.stat
cgroup.event_control            memory.kmem.tcp.max_usage_in_bytes  memory.swappiness
cgroup.procs                    memory.kmem.tcp.usage_in_bytes      memory.usage_in_bytes
cgroup.sane_behavior            memory.kmem.usage_in_bytes          memory.use_hierarchy
dev-hugepages.mount             memory.limit_in_bytes               notify_on_release
dev-mqueue.mount                memory.max_usage_in_bytes           proc-fs-nfsd.mount
docker                          memory.memsw.failcnt                proc-sys-fs-binfmt_misc.mount
machine.slice                   memory.memsw.limit_in_bytes         release_agent
memory.failcnt                  memory.memsw.max_usage_in_bytes     sys-fs-fuse-connections.mount
memory.force_empty              memory.memsw.usage_in_bytes         sys-kernel-config.mount
memory.kmem.failcnt             memory.move_charge_at_immigrate     sys-kernel-debug.mount
memory.kmem.limit_in_bytes      memory.numa_stat                    sys-kernel-tracing.mount
memory.kmem.max_usage_in_bytes  memory.oom_control                  system.slice
memory.kmem.slabinfo            memory.pressure_level               tasks
memory.kmem.tcp.failcnt         memory.soft_limit_in_bytes          user.slice

As I had "cgroup.*" entries in my "/sys/fs/cgroup/memory", according to that link it was confirmed I was running cgroup2.
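
A simpler check I found later (my assumption, not from the original link): "stat" shows the filesystem type of the cgroup mount point, and "cgroup2fs" means the unified cgroup2 hierarchy:

$ stat -fc %T /sys/fs/cgroup/
cgroup2fs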

So, how could I change to cgroup1 for docker only?

It seems I couldn't change that for just one application, because it is a parameter that you pass to the kernel at boot time.

I learned in this blog that there is something called podman that can replace docker.

So in the end, searching for how to change the cgroup version in Debian, I used this link:

$ cat /etc/default/grub
...
# systemd.unified_cgroup_hierarchy=0 enables cgroupv1 that is needed for containerlabs to run ceos.... 
# https://github.com/srl-labs/containerlab/issues/467
# https://mbien.dev/blog/entry/java-in-rootless-containers-with
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
....
$ sudo grub-mkconfig -o /boot/grub/grub.cfg
....
$ sudo reboot

Good thing the laptop rebooted fine! That was a relief 🙂
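
To verify the hierarchy had actually changed, the same "stat" check from before should now report "tmpfs" (the cgroup1 root) instead of "cgroup2fs" (again, my assumption):

$ stat -fc %T /sys/fs/cgroup/
tmpfs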

Then I checked if the change made any difference. It failed, but only because containerlab couldn't connect to docker… somehow docker had died. I restarted docker and tried containerlab again…

$ sudo containerlab deploy --topo ceos-lab1.yaml 
INFO[0000] Parsing & checking topology file: ceos-lab1.yaml 
INFO[0000] Creating lab directory: /home/xxx/storage/technology/containerlabs/ceos/clab-ceos 
INFO[0000] Creating docker network: Name='clab', IPv4Subnet='172.20.20.0/24', IPv6Subnet='2001:172:20:20::/64', MTU='1500' 
INFO[0000] config file '/home/xxx/storage/technology/containerlabs/ceos/clab-ceos/ceos1/flash/startup-config' for node 'ceos1' already exists and will not be generated/reset 
INFO[0000] Creating container: ceos1                    
INFO[0000] config file '/home/xxx/storage/technology/containerlabs/ceos/clab-ceos/ceos2/flash/startup-config' for node 'ceos2' already exists and will not be generated/reset 
INFO[0000] Creating container: ceos2                    
INFO[0003] Creating virtual wire: ceos1:eth1 <--> ceos2:eth1 
INFO[0003] Running postdeploy actions for Arista cEOS 'ceos2' node 
INFO[0003] Running postdeploy actions for Arista cEOS 'ceos1' node 
INFO[0145] Adding containerlab host entries to /etc/hosts file 
+---+-----------------+--------------+------------------+------+-------+---------+----------------+----------------------+
| # |      Name       | Container ID |      Image       | Kind | Group |  State  |  IPv4 Address  |     IPv6 Address     |
+---+-----------------+--------------+------------------+------+-------+---------+----------------+----------------------+
| 1 | clab-ceos-ceos1 | 2807cd2f689f | ceos-lab:4.26.0F | ceos |       | running | 172.20.20.2/24 | 2001:172:20:20::2/64 |
| 2 | clab-ceos-ceos2 | e5d2aa4578b5 | ceos-lab:4.26.0F | ceos |       | running | 172.20.20.3/24 | 2001:172:20:20::3/64 |
+---+-----------------+--------------+------------------+------+-------+---------+----------------+----------------------+
$ sudo clab graph -t ceos-lab1.yaml 
INFO[0000] Parsing & checking topology file: ceos-lab1.yaml 
INFO[0000] Listening on :50080...       

After a bit, it seemed to work! And I learned about the "graph" option to show a diagram of your topology.

I checked the cEOS container logs:

$ docker logs clab-ceos-ceos1
....
[  OK  ] Started SYSV: Eos system init scrip...uns after POST, before ProcMgr).
         Starting Power-On Self Test...
         Starting EOS Warmup Service...
[  OK  ] Started Power-On Self Test.
[  OK  ] Reached target EOS regular mode.
[  OK  ] Started EOS Diagnostic Mode.
[     *] A start job is running for EOS Warmup Service (2min 9s / no limit)Reloading.
$ 
$ docker exec -it clab-ceos-ceos1 Cli
ceos1>
ceos1>enable 
ceos1#show version 
 cEOSLab
Hardware version: 
Serial number: 
Hardware MAC address: 001c.7389.2099
System MAC address: 001c.7389.2099

Software image version: 4.26.0F-21792469.4260F (engineering build)
Architecture: i686
Internal build version: 4.26.0F-21792469.4260F
Internal build ID: c5b41f65-54cd-44b1-b576-b5c48700ee19

cEOS tools version: 1.1
Kernel version: 5.10.0-8-amd64

Uptime: 0 minutes
Total memory: 8049260 kB
Free memory: 2469328 kB

ceos1#
ceos1#show interfaces description 
Interface                      Status         Protocol           Description
Et1                            up             up                 
Ma0                            up             up                 
ceos1#show running-config interfaces ethernet 1
interface Ethernet1
ceos1#

Yes! Finally working!

So now I have no excuses not to keep learning new things!

BTW, these are the different versions I am using at the moment:

$ uname -a
Linux athens 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux 

$ docker -v
Docker version 20.10.5+dfsg1, build 55c4c88

$ containerlab version

                           _                   _       _     
                 _        (_)                 | |     | |    
 ____ ___  ____ | |_  ____ _ ____   ____  ____| | ____| | _  
/ ___) _ \|  _ \|  _)/ _  | |  _ \ / _  )/ ___) |/ _  | || \ 
( (__| |_|| | | | |_( ( | | | | | ( (/ /| |   | ( ( | | |_) )
\____)___/|_| |_|\___)_||_|_|_| |_|\____)_|   |_|\_||_|____/ 

    version: 0.17.0
     commit: eba1b82
       date: 2021-08-25T09:31:53Z
     source: https://github.com/srl-labs/containerlab
 rel. notes: https://containerlab.srlinux.dev/rn/0.17/

My concern is: how will this cgroup1 change affect other applications, like kubernetes?

BTW, I have the same issue with containerlab as with docker-topo: when I use "Alt+Home (left arrow)", my laptop leaves X-Windows and drops to the tty!!!

BGP AIGP and BGP churn

I have read about BGP churn before; I look up the meaning and then forget it until the next time.

BGP churn = the rate of routing updates that BGP routers must process.

And I can’t really find a straight definition.

As well, I didn't know what the actual meaning of "churn" was: it is a machine for producing butter, and also the rate of customers that stop using a product.


I read something at work about AIGP from a different department. I searched and found that it is a BGP optional non-transitive path attribute, and that the BGP decision process is updated to use it. And this is from 2015!!! So I am 6 years behind…. And there is RFC7311.

Note: BGP routers that do not support the optional non-transitive attributes (e.g. AIGP) must delete such attributes and must not pass them to other BGP peers.

So it seems like a good feature if you run a big AS and want BGP to take the IGP cost into account.