Life, Love, Sex, Negative Beliefs, startup regrets, nanog90, Groq LPU, LLM from scratch, ssh3, eBPF BGP, RPKI, TIANHE-3

I hit rock bottom this week. I hope I finally closed one door in my life so I can give myself the chance to open others. Did I make the wrong decision? It is easy to say when you look back. Do I regret it? The most annoying thing is that these are failures you can't go back and recover from. But I was such a bloody newbie!!!… At least after 5 years…

“For every reason it’s not possible, there are hundreds of people who have faced the same circumstances and succeeded.” Jack Canfield

Head down, crying, cursing, whatever, but forwards. As it has always been.

—-

Somehow I managed to listen to some long videos, something I normally can't manage (because of lack of time, etc.)

Negative Beliefs, avoid bitterness, aim for greatness (remarkable things), escape the darkness: Jordan B Peterson with Modern Wisdom: video, podcast.

Find and keep Love: video. 1st: Get your shit together. Communication is critical. Be careful with your shopping list….

Good Sex: video. Communicate….

Orgasm: video. Haven’t seen it completely yet but very interesting. Use your tongue wisely.

— Other things:

Startup decisions and regrets: page. Interesting. I think most of the things are very specific, but still good to read.

Nanog90: agenda. I didn't watch the videos, but I reviewed several PDFs and these look interesting:

Abstract Ponderings: A ten-year retrospective. Rob Shakir – Google: video

https://rob.sh/post/reimagining-network-devices/
https://rob.sh/post/coaching/
https://cdn.rob.sh/files/the-next-spring-forward_2018.pdf
https://research.google/research-areas/networking/

AI Data Center networks – Juniper – video

Using gNOI capabilities to simplify software upgrade use case: video – I had no idea about gNOI, so it looks interesting. It is crazy that, still in the 21st century, automating a network device is so painful. Thanks to all vendors for making our lives miserable.

Go lang for network engineers: video – I always thought that Golang had massive potential for network automation, but there was always a lack of support, and Python is the king. So it is nice to see that Arista has things to offer.

PTP in Meta: video and blog.

There are more things, but I haven't had the chance to review them.

—-

It looks like there is a new chatbot that is not using the standard NVIDIA GPU. Groq uses an LPU (Language Processing Unit), and they say it is better than a GPU. They have this paper, but I can't really see the features of that LPU.

Slurp'it: Saw this blog, and the product looks interesting, but although it is free, it is not open source, and at the end of the day you don't want a new vendor lock-in.

Containerlab in Kubernetes: Clabernetes. I would like to play with this one day.

NetDev0x17: videos and sessions: link. This is quite low-level detail and most of the time beyond my knowledge. Again, something to take a look at at some point.

LLM from scratch: repo. Looks very interesting. But the book is going to take a long time to hit the market.

ssh3: repo. Interesting experiment.

eBPF and BGP: blog. Really interesting. Another thing I have always wanted to play with.

Orange RPKI: old news but still interesting to see how much damage RPKI can cause in the wrong hands…

China TIANHE-3 Supercomputer: Very interesting. Link.

AusNOG 2023

Nice NOG meeting:

Vendor Support API: Interesting how Telstra uses the Juniper TAC API to handle power supply replacements. I was surprised that they are able to get the RMA and just try the replacement. If they don't need it, they send it back… That saves time for Telstra for sure. The problem I can see here is when you need to open tickets for inbound/outbound deliveries with the datacenters, which don't have any API at all. If datacenters and big courier companies had APIs as 1st class citizens, incredible things could happen. Still, just being able to have zero-touch replacement for power supplies is a start.

No Packet Behind – AWS: I think until past the first 30 minutes, there is nothing new that hasn't been published at other NOG meetings between 2022 and 2023. At least they mention the name of the latest fabric, Final Cat. As well, they mention issues with their IPv6 deployment.

There are other interesting talks but without video, so the PDF alone doesn't really give me much (like the AWS live premium talk).

Curl, Yaml, scalars, Elixir, git stash

I haven't watched this video, but it looks like the holy book of curl!!!

I'd recommend starting at ~34 minutes.

·You can specify multiple URLs with multiple output options in a single command. Doing this or using globbing (see below) to the same host will use persistent connections and greatly improve performance because the same L5 session is used

·trurl is also made by the project and allows you to programmatically manipulate URLs (change server, path, query parameters, etc.). Pretty neat: https://github.com/curl/trurl

·curl supports URL globbing: curl https://{ftp,www,test}.example.com/img[1-22].jpg -o "foo_#2_#1.jpg"

·By default, curl will resolve requests serially when multiple URLs or globbing is specified, but curl is capable of doing parallel transfers with the -Z or --parallel option, and can do anywhere from 2-300 transfers in parallel. This also has the potential to parallelize HTTP/3 transfers even from single URLs (see the combined sketch after this list)

·You can do curl --help category to get a list of help categories for narrowing down options by categories like http or output

· Long commands for curl can be specified in a file and given to curl either via stdin or -K / --config - These files are essentially just command lines in a file

·You can use the --trace option to provide tcpdump-type output from curl, saving the need to start tcpdump in the background if you just want to see what's happening from curl

·You can use --connect-to to specify a different DNS name to go to (instead of the one specified in the URL), which is similar to the --resolve option but doesn't require the user to look up the IP address ahead of time

·You can override the DNS server that you use to resolve URLs via --dns-ipv4-addr 8.8.8.8 for example

·You can add --libcurl to any curl command and it will spit out C source-code that implements the same command line in C via the library libcurl

·You can set the environment variable SSLKEYLOGFILE to a file name and it will save the runtime TLS secrets to that file; use that file in Wireshark along with a dump of the traffic from tcpdump to see the contents of encrypted HTTP streams

·You can choose to only download files that have changed since the last time they were downloaded with curl via --etag-save <etag_file> and --etag-compare <etag_file>

·You can skip adding the extra -H "Content-Type: application/json" when getting or posting JSON data (with -d), by specifying --json instead of just -d

·You can create JSON easily from the command line with the tool jo: https://github.com/jpmens/jo (basically a reverse jq)
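A minimal sketch putting a few of those flags together (hypothetical host and paths; assumes a curl new enough for --parallel, 7.66+, and --json, 7.82+):

# fetch 22 images from 2 hosts in parallel, naming each output after the glob values
$ curl --parallel "https://{www,test}.example.com/img[1-22].jpg" -o "foo_#2_#1.jpg"

# build JSON with jo and POST it without typing the Content-Type header
$ jo name=bella age=7 | curl --json @- https://api.example.com/pets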

Rant about YAML. And something I learned about YAML some months ago and then forgot: block scalars for making multiline strings work in YAML.
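For future me, the two block scalar styles (a made-up minimal snippet; "|" keeps the line breaks, ">" folds them into spaces):

description_literal: |
  these line breaks
  are preserved
description_folded: >
  these lines are folded
  into one single line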

Elixir: a programming language based on Erlang. Really impressive reports! But I would still like to learn Golang (if I ever properly learn Python 🙂).

git stash: I didn't know about this git command until last week; very handy.
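The basic flow, for my notes:

$ git stash           # shelve uncommitted changes, leaving a clean working tree
$ git stash list      # show the shelved changesets
$ git stash pop       # re-apply the latest stash and drop it from the list
$ git stash apply     # re-apply it but keep it in the list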

OpenAI / ChatGPT

I guess this is a bit of old news already. But I have wanted to test it for some weeks. I have just asked one question so far (and it failed a couple of times until I got an answer).

And my very first question was to list the most famous quotes from Aristotle together with references to his actual writings. Why? Because you read many quotes from many famous people that are not actually true, and it is very hard to really verify them if you don't read all the books from that person. And to be honest, philosophy is not easy to digest.

Here are a few famous quotes from Aristotle that reference his actual writings:

"Happiness is the meaning and the purpose of life, the whole aim and end of human existence." - Nicomachean Ethics

"It is the mark of an educated mind to be able to entertain a thought without accepting it." - Metaphysics

"We are what we repeatedly do. Excellence, then, is not an act, but a habit." - Nicomachean Ethics

"The whole is greater than the sum of its parts." - Metaphysics

"Nature does nothing in vain." - Politics

Please note that these quotes should not be taken as direct verbatim from Aristotle's work but rather as a representation of his thoughts.

Apart from that, I wonder what the infrastructure of this company looks like. Is it on-premises or cloud? How is their DC network designed? TCP? Checking one of the open positions, it seems they have a pretty big cluster, although they mention cloud platforms. And the company has some kind of agreement with MS (so Azure is going to be used).

IPv6 BIG TCP / Replace TCP in DC: Homa

This week a colleague passed me this link about a Kubernetes cluster running on Cilium. The interesting point is that the high throughput is achieved by BIG TCP and IPv6!

The summary (copied) is:

TCP segments in the OS are up to 65K, and NIC hardware does the segmentation – we do this now, but the 65K is a limitation of IPv4 addressing. BIG TCP uses IPv6 and allows much larger TCP segments within the OS, currently 512K but theoretically higher. End result – better perf (>20% higher in this video) and latency (2.2x faster through the OS).
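If I understood it right, enabling it boils down to raising the GSO/GRO limits above 64K on the interface. This is my assumption from the video, not something I have tested; it needs a recent kernel (5.19+), IPv6 and a recent iproute2, and eth0 / 185000 are just example values:

$ ip link set dev eth0 gso_max_size 185000 gro_max_size 185000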

Then I saw this other video from John Ousterhout. It is a similar topic to the Kubernetes video above, as K8S is used mainly in datacenters.

High performance:
– data throughput: full link speed for large messages
– low tail latency: <10us for short messages? (DC)
– message throughput: 100M short messages per second? (DC)

TCP issues in DC:
1- stream oriented (no load balancing) -> message based
2- connection oriented (can break infiniband!, expensive) -> connectionless
3- fair scheduling (bw sharing) -> run to completion (SRPT)
4- sender-driven congestion control (based on buffer occupancy) -> receiver-driven congestion control
5- in-order delivery -> no ordering requirements

As well, the move to the NIC is important (as there is already a lot of NIC offloading).

His proposal, Homa, looks very nice, but I like how he explains how difficult it is going to be for it to succeed. Still worth trying.

CCNA DevNet Notes

1) Python Requests status code checks:

r.status_code == requests.codes.ok

2) Docker publish ports:

$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash

This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also specify udp and sctp ports. The Docker User Guide explains in detail how to manipulate ports in Docker.
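A quick way to double-check a mapping with "docker port" (hypothetical container name; sleep keeps it running):

$ docker run -d --name web -p 127.0.0.1:80:8080/tcp ubuntu sleep infinity
$ docker port web
8080/tcp -> 127.0.0.1:80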

3) HTTP status codes:

1xx informational
2xx Successful
 201 created
 204 no content (post received by server)
3xx Redirect
 301 moved permanently - future requests should be directed to the given URI
 302 found - requested resource resides temporarily under a different URI
 304 not modified
4xx Client Error
 400 bad request
 401 unauthorized (user not authenticated or failed)
 403 forbidden (need permissions)
 404 not found
5xx Server Error
 500 internal server err - generic error message
 501 not implemented
 503 service unavailable
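Not part of the notes, but a quick way to see these codes in practice with curl (any URL works; -w prints the response code):

$ curl -s -o /dev/null -w '%{http_code}\n' https://example.com
200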

4) Python dictionary filters:

my_dict = {8:'u',4:'t',9:'z',10:'j',5:'k',3:'s'}

# filter(function, iterables)
new_dict = dict(filter(lambda val: val[0] % 3 == 0, my_dict.items()))

print("Filter dictionary:", new_dict)   # {9: 'z', 3: 's'}

5) HTTP Authentication

Basic: For "Basic" authentication the credentials are constructed by first combining the username and the password with a colon (aladdin:opensesame), and then by encoding the resulting string in base64 (YWxhZGRpbjpvcGVuc2VzYW1l).

Authorization: Basic YWxhZGRpbjpvcGVuc2VzYW1l
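You can reproduce that encoding from the shell (-n so echo does not append a newline); with curl, "-u user:password" builds the same header for you:

$ echo -n 'aladdin:opensesame' | base64
YWxhZGRpbjpvcGVuc2VzYW1l
$ curl -u aladdin:opensesame https://example.com/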

---
import base64

auth_type = 'Basic'
creds = '{}:{}'.format(user, password)   # "pass" is a reserved word in Python
creds_b64 = base64.b64encode(creds.encode()).decode()   # b64encode needs bytes
header = {'Authorization': '{} {}'.format(auth_type, creds_b64)}

Bearer:

Authorization: Bearer <TOKEN>

6) “diff -u file1.txt file2.txt”. link1 link2

The unified format is an option you can add to display output without any redundant context lines

$ diff -u file1.txt file2.txt
--- file1.txt   2018-01-11 10:39:38.237464052 +0000
+++ file2.txt   2018-01-11 10:40:00.323423021 +0000
@@ -1,4 +1,4 @@
 cat
-mv
-comm
 cp
+diff
+comm
  • The first file is indicated by ---
  • The second file is indicated by +++
  • The first two lines of this output show us information about file 1 and file 2. It lists the file name, modification date, and modification time of each of our files, one per line.
  • The lines below display the content of the files and how to modify file1.txt to make it identical to file2.txt.
  • - (minus) – the line needs to be deleted from the first file.
    + (plus) – the line needs to be added to the first file.
  • The next line has two at signs @@ followed by a line range from the first file (in our case lines 1 through 4, separated by a comma) prefixed by "-", then a space, then a line range from the second file prefixed by "+", and at the end two at signs @@ again. The file content that follows tells us which lines remain unchanged and which lines need to be added or deleted (indicated by the symbols) in file 1 to make it identical to file 2.

7) Python Testing: Assertions

.assertEqual(a, b)	a == b
.assertTrue(x)	        bool(x) is True
.assertFalse(x)	        bool(x) is False
.assertIs(a, b)	        a is b
.assertIsNone(x)	x is None
.assertIn(a, b)	        a in b
.assertIsInstance(a, b)	isinstance(a, b)

*** .assertIs(), .assertIsNone(), .assertIn(), and .assertIsInstance() all have opposite methods, named .assertIsNot(), and so forth.

ARP Storms – EVPN

We have had an issue with broadcast storms in our network. Checking the CoPP setup in the switches, we could see massive drops of ARP. This is a good link to know how to check CoPP drops in NXOS.

N9K# show copp status
N9K# show policy-map interface control-plane | grep 'dropped [1-9]' | diff

Having so many ARP drops by CoPP is bad because very likely good ARP requests are going to be dropped.

Initially I thought it was related to ARP problems in EVPN like this link. But after taking a packet capture in a switch, on an interface connected to a server, I could see that over 90% of the ARP traffic coming from the server was not getting a reply… Checking different switches, I could see the same pattern all over the place.
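For reference, a rough way to eyeball that from a Linux box (hypothetical interface name): ARP requests show up as "who-has" and replies as "is-at", so a flood of "who-has" with no matching "is-at" is the smoking gun.

$ tcpdump -nn -i eth0 arp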

So why was the server making so many ARP requests?

After some time, I managed to get help from a sysadmin with access to the servers, so we could troubleshoot the problem.

But, how do you find the process that is triggering the ARP requests? I didn't make the effort to think about it and started to search for an easy answer. This post gave me a clue.

ss does show you connections that have not yet been resolved by arp. They are in state SYN-SENT. The problem is that such a state is only held for a few seconds then the connection fails, so you may not see it. You could try rapid polling for it with

while ! ss -p state syn-sent | grep 1.1.1.100; do sleep .1; done

Somehow I couldn't see anything with "ss", so I tried netstat, as it also shows you the status of the TCP connection (I wonder what would happen if the connection was UDP instead???)

Initially I tried "netstat -a" and it was too slow to show me the "SYN_SENT" status.

Shame on me, I had to search for how to show the ports quickly here:

watch "netstat -ntup | grep -i syn_sent | awk '{print \$4,\$5,\$6,\$7}'"

It was slow because it was trying to resolve all IPs to hostnames… :facepalm. That is fixed with "-n" (don't resolve names).

Anyway, with the command above, I finally managed to see the processes that were in "SYN_SENT" state.

This is not the real thing, just an example:

#  netstat -ntup | grep -i syn_sent 
tcp        0      1 192.168.1.203:35460     4.4.4.4:23              SYN_SENT    98690/telnet        
# 

We could see that the destination port was TCP 179, so something in the node was trying to talk BGP! They were "bird" processes. As the node belonged to a Kubernetes cluster, we could see a Calico container as the CNI. Then we connected to the container and checked the bird config. We could clearly see that the IPs that didn't get an ARP reply were configured there.

So in summary, basic TCP:

Very summarized: TCP is L4; it then goes down to L3 (IP). To get to L2, you need to know the MAC for the IP, so that triggers the ARP request. Once the MAC is learned, it is cached for the next request. For that reason the first time you make a connection it is slower (ping, traceroute, etc.).

Now we need to work out why the calico/bird config is that way, fix it to only use the IPs of real BGP speakers, and then verify the ARP storms stop.

Hopefully, I will learn a bit about calico.

Notes for UDP:

If I generate a UDP connection to a non-existing IP

$ nc -u 4.4.4.4 4000

netstat tells me the UDP connection is established. I can't see anything in the ARP table for an external IP, but for an internal IP (in my own network) I can see an incomplete entry. Why?

#  netstat -ntup | grep -i 4.4.4.4
udp        0      0 192.168.1.203:42653     4.4.4.4:4000            ESTABLISHED 102014/nc           
# 
#  netstat -ntup | grep -i '192.168.1.2:'
udp        0      0 192.168.1.203:44576     192.168.1.2:4000        ESTABLISHED 102369/nc           
# 
#
# arp -a
? (192.168.1.2) at <incomplete> on wlp2s0
something.mynet (192.168.1.1) at xx:xx:xx:yy:yy:zz [ether] on wlp2s0
# 

# tcpdump -i wlp2s0 host 4.4.4.4
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on wlp2s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
23:35:45.081819 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:45.081850 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:46.082075 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:47.082294 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:48.082504 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
^C
5 packets captured
5 packets received by filter
0 packets dropped by kernel
# 
  • UDP is stateless so we can't have states… so it is always going to be "established". Basic TCP/UDP.
  • When trying to open a UDP connection to an external IP, you need to "route", so my laptop knows it needs to send the UDP traffic to the default gateway; when getting to L2, the destination MAC address is not 4.4.4.4's, it is the default gateway's MAC. BASIC ROUTING!!!! For that reason you don't see 4.4.4.4 in the ARP table.
    • When trying to open a UDP connection to a local IP, my laptop knows it is in the same network, so it should be able to find the destination MAC address using ARP.

The Phoenix Project

I had wanted to read this book for some time. I thought it was going to be a technical book, but it is a novel, and it felt like a thriller! An IT thriller, if you can believe it. While I was reading it, I felt quite tense at some points, like, "I have been there!". Although I am not a developer, I felt the pain mentioned in the book. I have been like that. I spent many years in a good devops environment; when I started there, I didn't have a clue what devops meant, but I learnt on the job. I wish the networking world could be more "devops", but as we nearly always rely on 3rd party vendors to provide equipment, they always want to lock you into their product. Still, it is possible, but you need to have the drive (and time) and some support from your employer.

One of the things that surprised me about the devops methodology is that it is based on manufacturing. I read about Kaizen in the past, but now I can see the connection. One of the main references is the book The Goal.

And another very important point: nothing of this works if people are not on board. You can have the smartest people around, but if people don't buy in, nothing is accomplished.

So I like the idea of quick iterations (the return on investment is received by the company and the customer sooner) where you get earlier feedback, interaction and communication between all teams, awareness for the business that IT is everywhere, constant testing/experimentation (chaos monkey, antifragility), kanban boards / flow models to visualize processes and constraints (WIP), constant learning, etc.

There was an interesting point in the book where the main characters were interviewing the top people in the company to gather info about what is important to them and what successful results and bad days mean, and then map all that to IT processes. From there you can see what is clearly important and what is not, so you can focus on value.

Another thing I learned about is the types of work we do:

  • Business projects
  • Internal projects
  • Changes
  • Unplanned work

And that unplanned work is the killer for any attempt to have a process like a manufacturing plant.

As well, based on “The Goal”, there are a lot of mentions about the “Three Ways”:

  • Find your constraint: maximize flow -> reduce batch, reduce intervals, increase quality to detect failures before moving to next steps.
  • Exploit your constraint: fast and constant flow of feedback.
  • Subordinate your constraint: high-trust culture -> dynamic, disciplined and scientific approach to experiment and risks.

In summary, I enjoyed the book. It was engaging, easy to digest and I learned!

Terraform-Part1

After learning about Kubernetes from KodeKloud, I want to take a look at Terraform.

These are my notes that I am taking along the course.

1- Intro:

A- config mgmt: ansible, puppet, saltstack

Designed to install and manage sw

B- Server Templating: docker, packer, vagrant.

Pre-install sw and dependencies

vm or docker images

immutable infra

C- Provision tools: terraform, cloudformation

deploy immutable infra resources

servers, dbs, net components

multiple providers.

Terraform is available for AWS, GCP, Azure and physical machines. There are multiple providers like cloudflare, paloalto, dns, infoblox, grafana, influxdb, mongodb, etc.

It uses declarative code, HCL = HashiCorp Configuration Language: *.tf

Phases: Init, plan and apply.

2- Install and Basics

I am going to use my laptop initially, so I will follow the official instructions using a precompiled binary: download the zip file (terraform_0.14.3_linux_amd64.zip), unzip it, and move the binary somewhere in your $PATH. I decided to use /usr/bin, and I also installed autocompletion.

/terraform/test1$ which terraform
 /usr/bin/terraform

/terraform/test1$ terraform version
 Terraform v0.14.3
 provider registry.terraform.io/hashicorp/local v2.0.0 

/terraform/test1$ terraform -install-autocomplete

HCL Basics:

<block> <parameters> {
  key1 = value1
  key2 = value2
 }

Examples:

// This one uses the resource "local_file". We call it "hello". It creates a file with specific content.
$ vim local.tf
 resource "local_file" "hello" {
  filename = "/tmp/hello-terra.txt"
  content = "hello world1"
 }

Based on the above:
 block_name -> resource
 provider type -> local
 resource type -> file
 resource_name: hello
   arguments: filename and content


// The next ones use AWS provider types

$ vim aws-ec2.tf
 resource "aws_instance" "webserver" {
  ami = "ami-asdfasdf"
  instance_type = "t2.micro"
 }

$ vim aws-s3.tf
 resource "aws_s3_bucket" "data" {
   bucket = "webserver-bucket-org-2207"
   acl = "private"
 }

Deployment process:

 0- create *.tf file
 1- terraform init --> prepare env / install plugins, etc
 2- terraform plan --> steps to be done // review
 3- terraform apply -> execute steps from plan
 4- terraform show

Example using “local_file” resource:

/terraform/test1$ terraform init 
 Initializing the backend…
 Initializing provider plugins…
 Reusing previous version of hashicorp/local from the dependency lock file
 Installing hashicorp/local v2.0.0…
 Installed hashicorp/local v2.0.0 (signed by HashiCorp) 
 Terraform has been successfully initialized!
 You may now begin working with Terraform. Try running "terraform plan" to see
 any changes that are required for your infrastructure. All Terraform commands
 should now work.
 If you ever set or change modules or backend configuration for Terraform,
 rerun this command to reinitialize your working directory. If you forget, other
 commands will detect it and remind you to do so if necessary.
/terraform/test1$ 
/terraform/test1$ terraform plan 
 local_file.hello: Refreshing state… [id=c25325615b8492da77c2280a425a3aa82efda6d3]
 An execution plan has been generated and is shown below.
 Resource actions are indicated with the following symbols:
 + create
 Terraform will perform the following actions:
 # local_file.hello will be created
 + resource "local_file" "hello" {
     + content              = "hello world1"
     + directory_permission = "0777"
     + file_permission      = "0700"
     + filename             = "/tmp/hello-terra.txt"
     + id                   = (known after apply)
   }
 Plan: 1 to add, 0 to change, 0 to destroy.
 
 Note: You didn't specify an "-out" parameter to save this plan, so Terraform
 can't guarantee that exactly these actions will be performed if
 "terraform apply" is subsequently run.
/terraform/test1$ 
/terraform/test1$ terraform apply 
 local_file.hello: Refreshing state… [id=c25325615b8492da77c2280a425a3aa82efda6d3]
 An execution plan has been generated and is shown below.
 Resource actions are indicated with the following symbols:
 + create
 Terraform will perform the following actions:
 # local_file.hello will be created
 + resource "local_file" "hello" {
     + content              = "hello world1"
     + directory_permission = "0777"
     + file_permission      = "0700"
     + filename             = "/tmp/hello-terra.txt"
     + id                   = (known after apply)
   }
 Plan: 1 to add, 0 to change, 0 to destroy.
 Do you want to perform these actions?
   Terraform will perform the actions described above.
   Only 'yes' will be accepted to approve.
 Enter a value: yes
 local_file.hello: Creating…
 local_file.hello: Creation complete after 0s [id=c25325615b8492da77c2280a425a3aa82efda6d3]
 Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
/terraform/test1$ 
/terraform/test1$ cat /tmp/hello-terra.txt 
 hello world1

Update/Destroy:

 $ update tf file
 $ terraform apply   -> apply the changes
or
 $ terraform destroy -> shows the destroy plan and then you need to confirm

Providers:

https://registry.terraform.io/
  official: aws, gcp, local, etc
  verified (3rd party): bigip, heroku, digitalocean
  community: activedirectory, ucloud, netapp-gcps
 
$ terraform init -> show the providers installed

 plugin name format:
  * registry.terraform.io/hashicorp/local
           ^                ^         ^
       hostname      org namespace   type 
 
plugins installed in .terraform/plugins

https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file#sensitive_content
 main.tf: resource definition
 variables.tf: variable declarations
 outputs.tf: outputs from resources
 provider.tf: providers definition

Variables:

filename
content
prefix
separator
length

* type is optional
 type: string    "tst"
       number    1
       bool      true/false
       any       whatever
       list      ["cat","dog"]
       map       pet1=cat
       object    mix of the above
       tuple     like a list of types
       set       (it is like a list but can't have duplicate values!) 

Examples:

vim variables.tf
// List
variable "prefix" {
  default = ["Mr", "Mrs", "Sir"]   **default is optional!!!
  type = list(string)
 }

// Map
 variable file-content {
  type = map(string)
  default = {
   "state1" = "test1"
   "state2" = "test2"
  }
 }

// Set
 variable "prefix" {
  default = ["10","11","12"]
  type = set(number)
 }

// Object
 variable "bella" {
 type = object({
   name = string
   age = number
   food = list(string)
   alive = bool
  })
 default = {
   name = "bella"
   age = 21
   food = ["pasta", "tuna"]
   alive = true
  }
 }

// Tuple
 variable kitty {
  type = tuple([string, number, bool])
  default = ["cat", 7, true]
 }

Using variables

vim main.tf
 resource "random_pet" "my-pet" {
  prefix = var.prefix[0]
 }
 resource local_file my-file {
  filename = "/tmp/test1.txt"
  content = var.file-content["state1"]
 }

Example using vars:

/terraform/vars$ cat variables.tf
variable "filename" {
  default = "/tmp/test-var.txt"
  type = string
  description = "xx"
 }
 variable "content" {
  default = "hello test var"
 }
/terraform/vars$ cat main.tf
resource "local_file" "test1" {
  filename = var.filename
  content = var.content
 }
/terraform/vars$ 
/terraform/vars$ terraform init 
 Initializing the backend…
 Initializing provider plugins…
 Finding latest version of hashicorp/local…
 Installing hashicorp/local v2.0.0…
 Installed hashicorp/local v2.0.0 (signed by HashiCorp) 
 Terraform has created a lock file .terraform.lock.hcl to record the provider
 selections it made above. Include this file in your version control repository
 so that Terraform can guarantee to make the same selections by default when
 you run "terraform init" in the future.
 Terraform has been successfully initialized!
 You may now begin working with Terraform. Try running "terraform plan" to see
 any changes that are required for your infrastructure. All Terraform commands
 should now work.
 If you ever set or change modules or backend configuration for Terraform,
 rerun this command to reinitialize your working directory. If you forget, other
 commands will detect it and remind you to do so if necessary.
/terraform/vars$ 
/terraform/vars$ terraform plan
 An execution plan has been generated and is shown below.
 Resource actions are indicated with the following symbols:
 + create
 Terraform will perform the following actions:
 # local_file.test1 will be created
 + resource "local_file" "test1" {
     + content              = "hello test var"
     + directory_permission = "0777"
     + file_permission      = "0777"
     + filename             = "/tmp/test-var.txt"
     + id                   = (known after apply)
   }
 Plan: 1 to add, 0 to change, 0 to destroy.
 
 Note: You didn't specify an "-out" parameter to save this plan, so Terraform
 can't guarantee that exactly these actions will be performed if
 "terraform apply" is subsequently run.
/terraform/vars$ 
/terraform/vars$ terraform apply 
 An execution plan has been generated and is shown below.
 Resource actions are indicated with the following symbols:
 + create
 Terraform will perform the following actions:
 # local_file.test1 will be created
 + resource "local_file" "test1" {
     + content              = "hello test var"
     + directory_permission = "0777"
     + file_permission      = "0777"
     + filename             = "/tmp/test-var.txt"
     + id                   = (known after apply)
   }
 Plan: 1 to add, 0 to change, 0 to destroy.
 Do you want to perform these actions?
   Terraform will perform the actions described above.
   Only 'yes' will be accepted to approve.
 Enter a value: yes
 local_file.test1: Creating…
 local_file.test1: Creation complete after 0s [id=9f5d7ee95aa30648a2fb6f8e523e0547b7ecb78e]
 Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
/terraform/vars$ 
/terraform/vars$ 
/terraform/vars$ cat /tmp/test-var.txt 
 hello test var

Pass var values:

 1- if there are no values for a var, when running "terraform apply" it will ask for the values interactively!
 2- cli params
    $ terraform apply -var "filename=/root/test.tst" -var "content=My Test"
 3- env vars  TF_VAR_xxx=xxx
    $ export TF_VAR_filename="/root/test.tst"
    $ terraform apply
 4- var files:
    autoloaded: terraform.tfvars, terraform.tfvars.json, *.auto.tfvars, *.auto.tfvars.json
    explicit NAME.tfvars
    $ cat terraform.tfvars
      filename="/root/test.tst"
    $ terraform apply
    $ terraform apply -var-file NAME.tfvars

VAR PRECEDENCE: less -> more
 1 env vars
 2 terraform.tfvars
 3 *.auto.tfvars (alphabetic order)
 4 -var / -var-file (cli flags)     --> highest priority!!!! it overrides all the above options
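A quick sketch to convince yourself of the ordering (made-up values, reusing the filename variable from above):

$ export TF_VAR_filename="/tmp/from-env.txt"
$ echo 'filename="/tmp/from-tfvars.txt"' > terraform.tfvars
$ terraform apply                                      # tfvars wins over the env var
$ terraform apply -var 'filename=/tmp/from-cli.txt'    # cli flag wins over everything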

Kubernetes-Docker-ASICs

This week I read that Kubernetes is going to stop supporting Docker soon. I was quite surprised. I am not an expert, and it seems they have legit reasons, but I haven't read anything from the other side. I think it is going to be painful, so I need to try it in my lab and see how to do that migration. It would be nice to learn that.

On the other end, I read a blog entry about ASICs from Cloudflare. Without getting too technical, I think it is a good one, and I learned about the different types of ASICs from Juniper. In the last years I have only used devices powered by Broadcom ASICs; one day I would like to try those P4/Barefoot Tofino devices. And related to this, I remember this NANOG presentation about ASICs that is really good (and fun!).