Intro LLM, LLM bootcamp, Computex 2024, UALink, Aurora, Arista AI Center, Kubenet, Nutrigenomics, Videos

Intro LLM

LLM Bootcamp 2023:

NVIDIA Computex 2024: It seems they are moving to a yearly cadence for networking kit. They showed plans for 2025 and 2026. I liked the picture of an NVLink spine and the huge heatsinks for the B200.

UALink: The competition for NVLink. This is for GPU-to-GPU communication. UltraEthernet is for connecting pods.

Aurora supercomputer: The exascale barrier is broken. Based on the HPE Slingshot interconnect (nearly 85k endpoints); everything else is Intel.

Arista AI Center: It seems they are going to team up with NVIDIA. Some EOS running on the NICs.

Kubenet: Seems interesting, but it only supports Nokia SR Linux at the moment.

Nutrigenomics:

“What we did was personalized work in which we took care of every aspect of nutrition and sought the regeneration and correct expression of his genes.”

fisiogenómica (“physiogenomics”): “I call it that because it mixes physiotherapy, nutrition, and nutrigenomics. For each person we have to determine, through symptoms, tests, and interventions, which foods to limit because they cause bad genetic expression, but all the guidelines are based on the Mediterranean Diet Pyramid.”

Videos:

Bear Grylls: Be kind, never give up.

LLM in C, 1.6nm, xz vul, Turing 2024, Let’s Encrypt, ChatDev, Ethernet vs IB, Slingshot, Tailscale SSH, videos, 42 rules, CNI, Cilium

Origins of deep learning: Interesting post. At the beginning everything was Matlab and CPU-bound. repo

LLM in C: post and repo.

A16: 1.6 nm process for 2026. More frequency, less power.

xz vulnerability repo: Something I need to check in the VP

Turing Award 2024: zero-knowledge-proof.

Cloudflare and Let’s Encrypt’s certificate change: I hadn’t heard of this until recently. I use Let’s Encrypt, so as far as I can tell, what they are doing makes sense. But I didn’t know 2% of Cloudflare customers were using the “cert”.

ChatDev: Communicative agents for software development. I am not a developer, but I would use this as a starting point if I had an idea for a project. I would remove the C-suite agents, at least for low-level projects.

IB vs Ethernet: A bit of bias here (the author is from Broadcom -> Ethernet). I have no hands-on experience with IB, but I have read the cables are not cheap. Let’s see when UltraEthernet gets to market. Another view.

Slingshot and Juniper: A bit of bias again, as HPE bought Juniper. So how will these interconnects fit inside the company? As far as I know, most supercomputers use some “special” interconnect, so not much Ethernet there. But the money nowadays is in AI infra. Paper for Slingshot (haven’t read it).

Tailscale SSH, WireGuard throughput: These are things I should spend a bit of time on one day and consider whether I should use them (I don’t like that it is not open source, though). And this netmaker?

Videos:

Jocko Willink: Discipline = Freedom. Remember, but don’t dwell. A good leader delegates. Be a man -> take action; bonding (pick your activity).

Jimmy Carr: Imposter syndrome every 18 months, so you have to stand up. People crave the success, not the journey. Teaching comedy is good for communicating.

Sam Altman – Stanford 2024: First time I see him talking. It has some funny moments. More powerful computers. I missed a question about open-source LLMs versus closed ones.

Find a girlfriend: I know just a little bit about the person (I want to read one of his books) from other books and videos. I would have thought he already had a girlfriend or family. Of the three methods, the face-to-face approach in the street definitely looks much better (and that’s what I would like to do).

Jordan Peterson original 42 rules

CNI performance: I have used Kubernetes since I studied for the CKAD, but I am still interested in the networking side. I didn’t know about Kube-router, and it did great! I am a bit surprised by Calico, as I have been reading more and more about Cilium.

Cilium for network engineers: I have to read this fully (worried that Cisco bought it…)

rsync go, NASA SP287, git options, Undersea cable failures in Africa, Quotes, Log4j, done list, Dan Lynch, Systems-based Productivity, Run Africa

rsync go: Interesting talk about rsync: it explains how it works, which is something I didn’t know. All the other things/projects mentioned are cool and related. I need to try to install rsync go in my VM. ccc slides and repo

NASA to the moon: This is an engaging and provocative video regarding Artemis III (the project to go back to the Moon). He asks some hard questions of the people in charge (I have no clue about physics) and it seems he has a point. Not sure if this will have any effect, but again, it looks “smart”. When he mentioned the NASA SP-287 (What Made Apollo a Success) document as the grail for going back to the Moon, I wanted to get a copy (here) so I could read it one day.

Git options: Nice post about popular git config options. I am a very basic git user (and still sometimes I screw up), but the options to improve diffs look interesting, so I will give them a go at work.

Undersea cable failures in Africa: It is clear that Africa relies heavily on submarine cables (it doesn’t look like there are many intra-continental cable systems). And the Red Sea is becoming a hot area due to various conflicts…

Quotes: I like the ones regarding simplicity:

A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system. (John Gall)

In programming, simplicity and clarity are a crucial matter that decides between success and failure. (Edsger Dijkstra)

Log4j: This is old news, but when it came out I tried to run the PoC and failed 🙁 This is just a reminder. It was annoying because I managed to install all the tools but never managed to exploit it.

Done List: I feel totally identified. The to-do list is never done, and you feel guilty. A done list is much healthier.

Dan Lynch: He passed away and, as usual in my ignorance, it turns out he is one of the unsung heroes of the Internet, having led the migration of ARPANET to TCP/IP.

Systems-Based Productivity: TEMPO refers to five dimensions of productivity: T (Time Management), E (Energy Management), M (Mindset), P (Proficiency) and O (Organization).

Run Africa: very jealous.

Infraops challenge, Devika, Daytona, NTP 2038, Linux Crisis Tools, videos, Chocolonely, LLM, Transformers, Enforce-first

InfraOps challenge: A bit beyond me, but interesting if you could try it without applying for the job.

Devika: Agent AI. Another thing I would like to have time to play with. If you have API keys for some LLMs, it looks like it shouldn’t be difficult to run, and you don’t need a powerful laptop (?)

Daytona: My development environment is a joke, just Python venvs. But I guess for more serious devs it could be interesting.

NTP and year 2038: Agree: when it is not DNS, it is likely NTP (I’ve seen this with VPNs and SSL certs on boxes with NTP out of sync), or something blocking UDP.
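The year-2038 part is easy to demo: a signed 32-bit time_t can count seconds only up to 2^31 - 1 past the Unix epoch. A quick Python sketch of where that limit lands:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t overflows one second after this value.
MAX_INT32 = 2**31 - 1  # 2147483647

rollover = datetime.fromtimestamp(MAX_INT32, tz=timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00
```

One second later, a 32-bit counter wraps to a negative number, i.e. a date back in 1901.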

Linux crisis tools: I haven’t got my hands dirty with BPF, and I am surprised by how many tools there are. I would add nc, netstat, lsof, traceroute, ping, vim, openssl, etc., but that’s because I do pure networking.

Jim Kwik: How to improve your reading speed. One improvement is to use your finger or a ruler. Need to watch it again.

Rich Roll: The guy is super chill. I would like to be able to do an ultra at some point in life… Very personal conversation.

Ferran Adria: I didn’t know much about the person apart from his being one of the best chefs in history. I like how he starts the interview and takes over for 15 minutes. Haven’t watched it to the end, but just the beginning is priceless.

Mark Manson: I have read all his books and his emails. His story is interesting.

Chocolonely: I didn’t know it was a Dutch company, and there is an interesting history behind it. I want to try it one day, but I haven’t found a dark chocolate version.

LLM in 1000 lines of pure C: I was always crap at C. But this project is interesting as something educational and an intro to LLMs.

Visual intro to transformers: The easy joke, unfortunately, this is not about Optimus Prime.

Indonesia Heavy Metal Girls: Unexpected. Respect.

Enforce-first-as: I didn’t know about this until last week. Cisco enables it by default; Juniper disables it by default. And this makes sense with route servers.

GPU Fabrics, Optimizations, Network Acceleration, Learning Cambridge, British Library

Several posts worth reading. There are plenty of things that go over my head. I already posted this; it is a good refresher.

GPU Fabrics: The first part of the article is where I am most lost, as it is about training and the communications between GPUs depending on the approach taken to handle the models. There are several references to improvements, such as the use of FP8 and different topologies. As well, NVLink is a bit clearer to me now (an internal switch for connecting GPUs inside the same server or rack).

When it moved to inter-server traffic, I started to understand things a bit more, like “rail-optimized” (it is like having a “plane” at my old job, where the leaf connects to only one spine instead of all spines; in this case, each GPU connects to just one leaf. If your cluster is bigger, then you need spines). I am not keen on modular chassis from an operations point of view, but it is mentioned as an option. Fat-tree Clos, Dragonfly: reminds me of InfiniBand. Like all RDMA.

And fabric congestion is a big topic with many different approaches: adaptive load balancing (IB again), several congestion control protocols, and mentions of the Google (CSIG) and Amazon (SRD) implementations.

In general I liked the article because I don’t really feel any bias (she works for Juniper), and it is very open about the solutions from different players.

LLM Inference – HW/SW Optimizations: The explanation of LLM inferencing (I doubt I could explain it myself, though) and all the different optimizations is interesting. The hardware optimization section (different custom hardware solutions vs GPUs) was a bit more familiar. My summary: you don’t need the same infrastructure (and cost) for inference, and there is an interest for companies to own that part, as it should be better and cheaper than hosting with somebody else.

Network Acceleration for AI/ML workloads: Nice to have a summary of the different “collectives”. “Collectives” refer to a set of operations involving communication among a group of processing nodes (like GPUs) to perform coordinated tasks. For example, NCCL (NVIDIA Collective Communication Library) efficiently implements the collective operations designed for their GPU architecture. When a model is partitioned across a set of GPUs, NCCL manages all communication between them.

Network switches can help offload some or all of the collective operations. NVIDIA supports this in their InfiniBand and NVLink switches using SHARP (Scalable Hierarchical Aggregation and Reduction Protocol, proprietary). This is called “in-network computing”. For Ethernet, there are no standards yet. The Ultra Ethernet Consortium is working on it, but it will take years until something is seen in production. And Juniper has the programmable Trio architecture (MX routers – paper) that can do this offloading (you need to program it yourself, though, in a language similar to C). Still, using switches for this is not a perfect solution.

The usage of collectives in inference is less common than their extensive use during the training phase of deep learning models. This is primarily because inference tasks can often be executed on a single GPU.
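To make “collectives” concrete, here is a toy all-reduce in pure Python (my own sketch, nothing to do with NCCL’s actual ring/tree algorithms): every “GPU” starts with a local gradient vector, and after the operation every one of them holds the element-wise sum.

```python
# Toy all-reduce (sum). Illustration only: NCCL implements this with
# ring/tree algorithms over NVLink/InfiniBand, and SHARP can offload
# the reduction into the switch itself ("in-network computing").

def all_reduce_sum(per_node_vectors):
    # Reduce step: element-wise sum across all nodes.
    reduced = [sum(vals) for vals in zip(*per_node_vectors)]
    # Broadcast step: every node receives the same reduced result.
    return [list(reduced) for _ in per_node_vectors]

# 3 "GPUs", each with a local 2-element gradient.
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(all_reduce_sum(grads))  # [[9.0, 12.0], [9.0, 12.0], [9.0, 12.0]]
```

The point of the offloads discussed above is that the reduce step can happen inside the network instead of bouncing all the data between GPUs.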

On different topics:

Learning at Cambridge: Spend fewer hours studying, don’t take notes (that’s hard for me), go wild with active learning (work on exercises until you fully understand them).

British Library CyberAttack: blog and public lessons-learned report. I know this is happening too often to many different institutions, but this one caught my eye 🙁 I think it is a recurrent theme in most government institutions, where upgrading is expensive (because it is not done often), and budgets and IT expertise are tight.

“Our major software systems cannot be brought back in their pre-attack form, either because they are no longer supported by the vendor or because they will not function on the new secure infrastructure that is currently being rolled out”

“However, the first detected unauthorised access to our network was identified at the Terminal Services server.” Likely a compromised account.

Personally, I wonder what you can get from “stealing” from a library???

Google Networking, AI Cooling, MATx

OpenFlow at Google – 2012: OpenFlow to manage the network and to simulate it. Two backbones: one for customer traffic and a second for inter-DC traffic.

UKNOF32 – Google Datacenter networking 2015: Evolution up to Jupiter. Moving from chassis-based solutions to pizza boxes: smaller blast radius than a chassis. These switches have small buffers, but Google uses ECN (QoS) to deal with that.

Google DC Network via Optical Circuit 2022: (other video, paper, google post) Adding optical circuit switches, no more Clos network!!! Full-mesh connection of aggregation blocks. Spines are expensive and become bottlenecks. Traffic flows are predictable at large scale, so they are not building for the worst-case scenario. Drawback: complex topology and routing control! Shortest-path routing is insufficient. TE: variable hedging allows operation at different points along the continuum to trade off optimality under correct prediction vs robustness under misprediction -> no more spikes. Hitless topology reconfiguration. It seems it has been running for 5 years already… To be honest, it goes a bit beyond my knowledge.

Google TPUv4 + Optical reconfigurable AI Network 2023: Based on the above but for AI at scale, although there is already TPUv5. The pictures on this page help to get a view of the connectivity. Still complex, though.

Open Compute Project 2023: AI Datacenter – mainly about how to cool down AI infra with such high GPU/power requirements.

MATx: A new company designing hardware for AI models.

Love Languages, imposter syndrome, self-compasion, GTC-2024, Juniper Express 5

Love Languages: I read this book in 2018. The conclusion I took at the time (a bit late…) is that you have to F*! communicate…

Interesting story about imposter syndrome:

We’d like to believe that if we only had the adulation, market success, and fan support of superstars like these, then we’d finally be comfortable and able to do our best.

In fact, it seems the opposite is true. Imposter syndrome shows up because we are imposters, imposters acting ‘as if’ in search of making something better.

Perhaps the best plan is to show up and not walk out.

Self-compassion: Something I have learnt the hard way; being hard on yourself works at the beginning, but not long term. I practice self-compassion often while climbing and, honestly, I feel the difference; sometimes it is mind-blowing. Nobody is going to cheer me up, so I am better off doing it myself.

GTC-2024: Like last year, I registered to watch some talks. As a network engineer, I haven’t been able to find any (good) recording, just PDFs, so it was quite disappointing. This is a summary from somebody who was on site and says it was great. Some other notes that look interesting: keynote (NVLink and InfiniBand at 800G), NVIDIA DGX GB200 (indeed, we need nuclear energy to feed all this…)

Juniper Express 5: Looks like quite an interesting ASIC. But as far as I can see, most ASICs for DC and AI/ML come from Broadcom, and the main players are Cisco/Arista. I like the deep-buffers feature, although this is still a bit of a religious dilemma: deep vs shallow buffers. And it looks like it was announced at Hot Chips 2022, so it is not very new? And it is only in the PTX platform. What is the future of the QFX?

Meta GenAI Infra, Oracle RDMA, Cerebras, Co-packaged optics, devin, figure01, summarize youtube videos, pdf linux cli, levulinic acid

Meta GenAI infra: link. Interesting that they have built two clusters, one Ethernet and the other InfiniBand, both without bottlenecks. I don’t understand if Grand Teton is where they install the NVIDIA GPUs? And for storage, I would have expected something based on ZFS or similar. For performance, “We also optimized our network routing strategy”. And “debuggability” is critical for a system of this size: how quickly can you detect a faulty cable, port, GPU, etc.?

Oracle RDMA: This is an Ethernet deployment with RDMA. The interesting part is the DC-QCN development (an ECN enhancement).

Cerebras WSE-3: Looks like, outside NVIDIA and AMD, this is the only other option. I wonder how much you need to change your code to work on this setup? They say it is easier… I like the pictures of the cooling and racks.

Co-packaged optics: Interesting to see if this becomes the new “normal”. No more flapping links, anybody? Either it is the fiber, or you replace the whole switch…

I have been watching several videos lately, and I would like a tool that gives a quick summary of a video so I can have notes (and check if the tool is good). Some tools: summarize.tech, sumtubeai.

video1, video2, video3, video4, video5, video6, video7, video8, video9, video10, video11

Devin and Figure01: Looks amazing and scary. I will need one robot for my dream bakery.

I wanted to extract some pages from different PDFs into a single file; qpdf looks like the tool for it.

qpdf --empty --pages first.pdf 1-2 second.pdf 1 -- combined.pdf

levulinic acid: I learnt about it from this news.

Sales Psychology, BERT testing, EVPN asymmetric/symmetric, git sync fork

Sales Psychology: I have noticed this myself lately: since I subscribed to a YouTube channel, everything is “negativity bias”. I can’t see any video with a positive message. I subscribed because I want to learn and improve, but the publicity is all wrong.

BERT Testing: I wonder if there is anything open source.

Git sync fork: This is something I have never tried before.

0) Check your current remotes:
git remote -v
1) Add the original repo as a new remote called “upstream”:
git remote add upstream URL
2) Fetch and merge from upstream (assuming the default branch is “main”):
git fetch upstream
git merge upstream/main

EVPN VXLAN Asymmetric/Symmetric routing: blog1

Asymmetric IRB
– Ingress VTEP does both L2 and L3 lookup
– Egress VTEP does L2 lookup only
– Bridge – Route – Bridge
– Pros: “easy” to configure – just copy/paste. Identical config, with the only difference being the SVI IP addresses.
– Cons: on the way back, the path is reversed => all VXLANs need to be configured on all VTEPs => increased ARP cache and CAM table sizes and control-plane scaling issues => not very efficient.

Symmetric IRB
– Ingress VTEP does both L2 and L3 lookup
– Egress VTEP does both L3 and L2 lookup
– Bridge – Route – Route – Bridge
– L3 VNI should be configured on all VTEPs, L2 VNIs only where local ports exist
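A toy way to see the scaling difference between the two models (my own hypothetical sketch; the VTEP names and VNI numbers are made up): count which VNIs each VTEP must carry.

```python
# Scaling difference between asymmetric and symmetric IRB.
# Hypothetical topology: each VTEP has local ports only in some
# L2 VNIs; symmetric IRB adds one shared L3 VNI everywhere.

local_vnis = {
    "vtep1": {10, 20},
    "vtep2": {30},
    "vtep3": {40, 50},
}
all_l2_vnis = set().union(*local_vnis.values())
L3_VNI = 999

# Asymmetric: every VTEP needs every L2 VNI (bridge-route-bridge).
asymmetric = {v: set(all_l2_vnis) for v in local_vnis}

# Symmetric: only the local L2 VNIs plus the shared L3 VNI.
symmetric = {v: vnis | {L3_VNI} for v, vnis in local_vnis.items()}

print(sorted(asymmetric["vtep2"]))  # [10, 20, 30, 40, 50] -> VNIs with no local ports
print(sorted(symmetric["vtep2"]))   # [30, 999] -> just local L2 VNI + L3 VNI
```

With many VTEPs and VNIs, the asymmetric dict grows as (VTEPs x all VNIs), which is exactly the ARP/CAM scaling problem noted above.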

Other things about EVPN: link1 link2

Gaming Latency, LLM course, Anycast ipv6

Another LLM course: It looks quite good, but I don’t think I will have time to use it.

Nice video about Gaming Latency:

How to curl an IPv6 address:

$ curl -v -g -k -6 'https://[2603:1061:13f1:4c06::]:443/'
*   Trying [2603:1061:13f1:4c06::]:443...
* Connected to 2603:1061:13f1:4c06:: (2603:1061:13f1:4c06::) port 443
* ALPN: curl offers h2,http/1.1

The destination address is indeed an IPv6 Subnet-Router anycast address: 2603:1061:13f1:4c06:: (the interface identifier is all zeros, hence the “::” at the end).

According to RFC4291 https://www.rfc-editor.org/rfc/rfc4291.html#section-2.6

[Image: RFC 4291 Subnet-Router anycast address format]

So it is indeed an anycast address.
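A quick way to check this with Python’s ipaddress module (my own sketch; the /64 prefix length is an assumption): the Subnet-Router anycast address is simply the subnet prefix with an all-zero interface ID, i.e. the network address itself.

```python
import ipaddress

addr = ipaddress.IPv6Address("2603:1061:13f1:4c06::")

# Assuming a /64 subnet: per RFC 4291 section 2.6.1, the
# Subnet-Router anycast address is the prefix with the interface
# identifier bits all set to zero, i.e. the network address.
net = ipaddress.IPv6Network("2603:1061:13f1:4c06::/64")
print(addr == net.network_address)  # True -> subnet-router anycast form
```

Any host address inside the subnet (e.g. ending in ::1) would fail this check.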

According to Cisco (I haven’t been able to find the RFC; I haven’t looked much), this shouldn’t happen:

https://www.cisco.com/c/en/us/td/docs/ios-xml/ios/ipv6_basic/configuration/xe-3se/5700/ip6-anycast-add-xe.html

[Image: Cisco IPv6 anycast documentation excerpt]

So how can I curl an IPv6 anycast address from MS as if it were a host??