Infiniband Essentials

NVIDIA provides this course for free. Although I am surprised that there is not much "free" documentation about this technology. I wish they followed the same path as most networking vendors, who want you to learn their technology without many barriers. And it is quite pathetic that you can't really find books about it…

The course is very, very high level and very, very short. So I didn't become an InfiniBand CCIE…

  • Intro to IB

— Elements of IB: IB switches, the Subnet Manager (like an SDN controller), hosts (clients), adapters (NICs), gateways (converting IB <> Ethernet) and IB routers.

  • Key features

— Simplified management: thanks to the Subnet Manager.

— High bandwidth: up to 400G.

— CPU offload: RDMA, bypassing the OS.

— Ultra-low latency: ~1us host to host.

— Network scale-out: 48k nodes in a single subnet. You can connect subnets using an IB router.

— QoS: achieves lossless flows.

— Fabric resilience: fast re-routing at the switch level takes ~1ms, compared with ~5s via the Traffic Manager => self-healing.

— Optimal load balancing: using AR (adaptive routing). Rebalances packets and flows.

— MPI super performance (SHARP, Scalable Hierarchical Aggregation and Reduction Protocol): collective operations are offloaded from CPUs/GPUs to the switches, so data is aggregated in the network -> fewer retransmissions from end hosts -> less data sent. I don't fully understand this yet.

— Variety of supported topologies: fat-tree, dragonfly+, torus, hypercube and HyperX.
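To make the SHARP point above more concrete, here is a toy Python model (my own sketch, not the actual protocol) that just counts messages, to see why aggregating the reduction inside the switch means end hosts send less data:

```python
# Toy allreduce-sum model. Not real SHARP: it only counts messages
# to show why in-network aggregation reduces host traffic.

def allreduce_host_based(vectors):
    """Naive: every host sends its vector to every other host."""
    n = len(vectors)
    total = [sum(col) for col in zip(*vectors)]
    messages = n * (n - 1)        # each of n hosts sends to n-1 peers
    return total, messages

def allreduce_in_network(vectors):
    """SHARP-style idea: hosts send once up to the switch, which sums
    the vectors and sends the result back down to each host."""
    n = len(vectors)
    total = [sum(col) for col in zip(*vectors)]
    messages = n + n              # n sends up, n results down
    return total, messages

data = [[1, 2], [3, 4], [5, 6], [7, 8]]   # 4 hosts, one vector each
print(allreduce_host_based(data))          # ([16, 20], 12)
print(allreduce_in_network(data))          # ([16, 20], 8)
```

With only 4 hosts the gap is 12 vs 8 messages, but the naive count grows quadratically with the number of hosts while the in-network one grows linearly.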

  • Architecture:

— Similar layers to the OSI model: application, transport, network, link and physical.

— In IB, applications connect to the NIC directly, bypassing the OS.

— Upper layer protocols:

— MPI: Message Passing Interface

— NCCL: NVIDIA Collective Communication Library

— iSER: RDMA storage protocol (iSCSI Extensions for RDMA).

— IPoIB: IP over IB

— Transport Layer: unlike TCP/IP, it creates an end-to-end virtual channel between the applications (source and destination), bypassing the OS at both ends.

— Network Layer: mainly relevant at IB routers, which connect IB subnets. Routers use GIDs as identifiers for source and destination.

— Link Layer: each node is identified by a LID (Local ID), assigned by the Subnet Manager. Each switch has a forwarding table ("port vs LID") generated by the Subnet Manager. Link-level flow control provides lossless connections.

— Physical Layer: support for copper (DAC) and optical (AOC) cables.
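As a mental model for the link layer above, the "port vs LID" table can be pictured as a plain lookup that the Subnet Manager programs into each switch. The LIDs and port numbers below are invented for illustration:

```python
# Toy model of an IB switch forwarding table: the Subnet Manager assigns
# each end node a LID and programs every switch with LID -> egress port.

def build_lft(assignments):
    """assignments: list of (lid, egress_port) pairs pushed by the SM."""
    return {lid: port for lid, port in assignments}

def forward(lft, dest_lid):
    """Return the egress port for a destination LID, or None if unknown."""
    return lft.get(dest_lid)

# Hypothetical table for one switch
lft = build_lft([(0x0001, 1), (0x0002, 1), (0x0003, 7)])

print(forward(lft, 0x0003))  # traffic to LID 0x0003 leaves via port 7
```

The real thing is a linear forwarding table in hardware, but the idea is the same: the switch does no route computation itself, it just looks up what the Subnet Manager told it.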

AI Supercomputer – NVLink

So NVIDIA has an AI supercomputer via this. Meta, Google and MS have commented on it. And based on this, it is a 24-rack setup using the 900 GBps NVLink-C2C interface, so no Ethernet and no InfiniBand. Here is a bit more info about NVLink:

NVLink Switch System forms a two-level, non-blocking, fat-tree NVLink fabric to fully connect 256 Grace Hopper Superchips in a DGX GH200 system. Every GPU in DGX GH200 can access the memory of other GPUs and extended GPU memory of all NVIDIA Grace CPUs at 900 GBps. 

This is the official page for NVLink, but only with the above did I understand that this is like a "new" switching infrastructure.

But it looks like if you want to connect those supercomputers together, you need to use InfiniBand. And again, power/cooling is an important subject.

Jamaican Rum Cake

I have been lucky enough to try some Jamaican rum cake brought from Jamaica, so I decided to see if I could make it myself. I found some recipes online, like this (my main source) and this.


  • 200g butter at room temperature + a bit for greasing
  • 1 cup of brown sugar
  • 4 eggs
  • 1 tbsp lime juice
  • 1 tbsp lime zest
  • 1 cup of blended fruits: raisins, cherries, mixed fruit, etc. Pre-soak the fruits beforehand in water and a bit of white rum.
  • 1 tsp vanilla paste
  • 1 tbsp almond liquor
  • 1/4 cup of white rum + a bit for brushing
  • 1/2 cup of Port wine (I don't have Red Label / sweet red wine)
  • 1 cup plain flour
  • 1 tbsp cinnamon
  • 1 tbsp mixed spice
  • 1 tbsp grated nutmeg
  • 1.5 tbsp baking powder
  • 1/2 cup bread crumbs
  • 3 tbsp black treacle (I don't have "browning liquid")


  • Pre-heat oven at 180C. Grease a cake tin.
  • Cream the butter and brown sugar in a bowl. Use a wooden spoon initially and then switch to a whisk. The video uses an electric whisk, but I think I managed a decent mixture. You want something creamy and fluffy.
  • In another bowl, mix the eggs, lime juice and lime zest.
  • Add the egg mix to the butter mix bit by bit, whisking constantly.
  • In another bowl, mix the blended fruit, vanilla, almond liquor, rum and Port.
  • Add the fruit mix to the butter/egg mix bit by bit, whisking constantly.
  • Clean one of the bowls. Add the flour, cinnamon, mixed spice, nutmeg, baking powder and bread crumbs. Mix well.
  • Add the flour to the wet mix, bit by bit and mixing constantly.
  • Finally, add the black treacle, which should give the cake its dark color. Mix well.
  • Pour the cake mix into the tin. Shake until level.
  • Put a small bowl with water in the oven or spray with water the oven to create extra moisture.
  • Bake for approx. 1h 15m. Remove from the oven only when a skewer comes out clean from the center of the cake.
  • Once you take the cake from the oven, brush it with white rum while hot.
  • Leave it to cool down for 1h.

The real thing:

My thing:

To be honest, although my version doesn't look like the original one, it was tasty. I think these were my errors:

  • I didn't soak the dried fruits, so they didn't blend properly. I need to find more info about how to prepare this part properly. I think this is the reason the cake is not as "dense" as the original.
  • I think I overbaked it. I lost track of time and it was 1h 30m, I think.
  • The black treacle doesn't give the same dark color as in the video. Or do I need to use more?
  • Use more water in the oven. The first video didn't use any, but the second did, and I thought the second version was more moist and I wanted that.
  • Although I used neither Jamaican rum nor Jamaican red wine, the taste was good.


Jericho3 is the new chip from Broadcom to take on NVIDIA InfiniBand. From that article, I don't really understand the "Ramon3" fabric. It seems it can support 18 ports at 800G (based on 144 SerDes at 100G). It has 160 SerDes (16 Tbps) for uplinks to Ramon3. The goal is to reduce the time the nodes wait on the network, so it is not just (port-to-port) latency. Based on Broadcom's testing, swapping a 200Gb InfiniBand switch for a Jericho3 is 10% better. As well, I don't understand what they mean by "perfect load balancing" (because, from my point of view, flow size matters) and "congestion free". Getting this working at scale… looks interesting…

But then we have the answer from NVIDIA: Spectrum-X. It is Spectrum-4 switches with BlueField-3 DPUs and software optimization. This is an Ethernet platform. Spectrum-4 looks very impressive, definitely. But this sentence puzzles me: "The world's top hyperscalers are adopting NVIDIA Spectrum-X, including industry-leading cloud innovators." Most of the links I have been reading lately say that Azure, Meta and Google are using InfiniBand. Now NVIDIA says top hyperscalers are adopting Spectrum-X, when Spectrum-4 only started shipping this quarter?

And finally, why is NVIDIA pushing both Ethernet and InfiniBand? I think this is a good link for that. According to NVIDIA's CEO, InfiniBand is great and nearly "free" if you build for a very specific application (supercomputers, etc.). But for multi-tenant, you want Ethernet. So that kind of explains why hyperscalers like AWS, GCP and Azure want Ethernet at the end of the day, at least for customer access. If you have just one (commodity) network, it is cheaper and easier to run and maintain. And you don't have vendor lock-in like with IB.

We will see what happens with all this crazy AI/LLM/ML stuff.

AMD MI300 + Meta DC

Reading different articles (1, 2, 3), I became aware of this new CPU-GPU-HBM3 architecture from AMD.

As well, Meta has a new DC design for ML/AI using NVIDIA and InfiniBand.

Now, Meta – working with Nvidia, Penguin Computing and Pure Storage – has completed the second phase of the RSC. The full system includes 2,000 DGX A100 systems, totaling a staggering 16,000 A100 GPUs. Each node has dual AMD Epyc “Rome” CPUs and 2TB of memory. The RSC has up to half an exabyte of storage and, according to Meta, one of the largest known flat InfiniBand fabrics in the world, with 48,000 links and 2,000 switches. (“AI training at scale is nothing if we cannot supply the data fast enough to the GPUs, right?” said Kalyan Saladi – a software engineer at Meta – in a presentation at the event.)

And again, cooling is critical.

Fat Tree – Dragonfly – OpenAI infra

I haven’t played much with ChatGPT, but my first question was "what does the network infrastructure for building something like ChatGPT look like?" or similar. Obviously I didn't get the answer I was looking for, and I don't think I asked properly either.

Today, I came across this video, and at 3:30 something very interesting starts: as this is an official video, it says the OpenAI cluster built in 2020 for ChatGPT was actually based on 285k AMD CPUs connected via InfiniBand plus 10k V100 GPUs connected via InfiniBand. They don't mention lower-level details, but it looks like two separate networks? And I have seen in several other pages/videos that M$ is hardcore into InfiniBand.

Then, regarding InfiniBand architectures, it seems the most common are "fat-tree" and "dragonfly". This video is quite good, although I will have to watch it again (or a few more times) to fully understand it.

This blog, this PDF and Wikipedia (high level) are good for learning about "Fat-Tree".

Although most of the info I found is "old", these technologies are not old. Frontier, and it looks like most supercomputers, use them.
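For intuition on why fat-trees scale, this is the textbook back-of-the-envelope arithmetic for a classic 3-level fat-tree built from k-port switches (nothing specific to Frontier or the videos above):

```python
def fat_tree_capacity(k):
    """Capacity of a classic 3-level fat-tree built from k-port switches.

    - k pods, each with k/2 edge and k/2 aggregation switches
    - each edge switch serves k/2 hosts, so hosts = k * (k/2) * (k/2) = k^3/4
    - (k/2)^2 core switches on top
    """
    assert k % 2 == 0, "k must be even"
    hosts = k ** 3 // 4
    core = (k // 2) ** 2
    switches = core + k * k   # core + (k/2 edge + k/2 agg) in each of k pods
    return hosts, switches

# With 64-port switches you can wire up 65,536 hosts at full bisection
print(fat_tree_capacity(64))  # (65536, 5120)
```

The nice property is that every link can run at the same speed (no oversubscription needed), which is why the topology keeps showing up in HPC and AI clusters.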

Strawberry Roulade – v2

I have done this dessert before but this recipe is a bit different in the process. At least I think the result was much better.

Roulade Ingredients:

  • 6 large eggs (room temperature)
  • 250g caster sugar
  • 250g plain flour
  • 1 tbsp warm water
  • butter and sugar for greasing
  • 1 tray + baking paper
  • 150g finely chopped strawberries + 20g for garnish
  • Mint leaves

Stock Syrup Ingredients:

  • 50gr sugar
  • 50gr water

Pastry Cream Ingredients:

  • 4 medium egg yolks
  • 65g caster sugar
  • 15g plain flour
  • 15g cornflour
  • 350ml whole milk
  • 1/2 tsp vanilla paste
  • Icing sugar for dusting

Chantilly Cream Ingredients:

  • 2 tsp icing sugar
  • 200ml double cream
  • 1 tsp vanilla paste


  • Pre-heat oven at 180C
  • Grease the tray (30x25cm), then cut the baking paper to fit the tray. Be sure it sticks properly. Lightly grease the paper too.
  • Dust some flour and sugar over the baking paper. Set aside.
  • Beat the eggs and sugar in a large bowl with an electric whisk for 15 minutes. The mix should almost triple in volume, become pale in color and thick enough that will leave a trail when lifting the whisk.
  • Sift the flour. Add 1/3 at the time to the egg mix and fold it nicely until all combine.
  • Add the 1 tbsp of warm water and fold.
  • Pour the mixture into the tray and smooth it to the edges with a spatula. Make it as even as you can without overworking the mix.
  • Bake for 9-10 minutes. Lightly golden and just firm to the touch.
  • Lay out a clean damp cloth on your work surface (so it doesn't move). Place a piece of greaseproof paper (a bit bigger than the tray) on top of the cloth. Tuck the paper edges under the cloth a little so it doesn't move. Dust some sugar on top of the paper.
  • Once the sponge is ready, turn it out quickly onto the dusted paper (topside of the sponge down). Be careful, the tray is hot!!!! Peel the paper off the sponge.
  • Start rolling the sponge from the longest side; make sure it is tight at the beginning, using the towel/paper, then roll all the way. Tap the roll so it flattens a bit, and let it cool down with the seam at the bottom.

Prepare Stock Syrup:

  • Boil the sugar and water. Stir for 1 minute and let it rest.

Prepare Pastry Cream:

  • Whisk the egg yolks and sugar until pale gold in color.
  • Add the flour and cornflour to the egg mix, and whisk until combined. Set aside.
  • Place milk and vanilla in a saucepan, heat up until simmer, stirring frequently. Remove from heat.
  • Slowly pour half of the hot milk into the egg mix, whisking all the time. Be sure the milk is not too hot because you dont want to cook the eggs! Add the rest of the milk and whisk.
  • Return the whole mix to the saucepan, bring to the boil and simmer for 1 minute, whisking constantly so it is smooth.
  • Pour the cream in a bowl, dust with icing sugar and cover with cling film.
  • Set aside

Filling the roulade:

  • Unroll the sponge carefully; it should be moist and spongy to the touch.
  • Brush the stock syrup all over the sponge.

  • You can trim the sides so it is even when rolling later.
  • Take the pastry cream and whisk it until it becomes creamy again. Spread 3/4 of the cream over the sponge, covering as much as you can, then use the rest to fill any gaps. Be sure you leave 2cm uncovered on the side where you will finish rolling, so the roll sticks and "closes".
  • Spread all the chopped strawberries over the cream. Tap them so they are level with the cream when rolling.
  • Roll the longer side inwards; be sure you don't start from the side without cream! Use the paper to keep the roll tight at the very beginning, then release the paper and roll the whole sponge.
  • Keep the seam down. Trim the sides to be sure they show cream and strawberries.
  • Dust the top of the roll and, with a sharp knife, JUST mark the slices without cutting through them.

Prepare Chantilly Cream:

  • Whisk the cream, sugar and vanilla in a bowl until it just starts forming peaks. Don't overdo it, because we are going to pipe it!


  • Pipe a rose over each slice, add some strawberries and mint for garnish.

Meta Chips – Colovore water-cooling – Google AI TPUv4 – NCCL – PINS P4 – Slingshot – KUtrace

Read 1. Meta is going to build its own AI chips. It currently uses 16k A100 GPUs (Google is using 26k H100 GPUs). And it seems Graphcore had some issues in 2020.

Read 2. I didn't know Colovore; it is interesting to see how critical power/cooling actually is, with all the hype in AI and power constraints in key regions (Ashburn, VA…). With proper water cooling you can have a 200kW rack! And it seems they deliver the same power as a 6x bigger facility. Cooling via water is also cheaper than air cooling.

Read 3. Google is one of the biggest NVIDIA GPU customers, although they built TPUv4. MS uses 10k A100 GPUs for training GPT-4, and 25k for GPT-5 (a mix of A100 and H100?). For customers, MS offers an AI supercomputer based on H100, 400G InfiniBand Quantum-2 switches and ConnectX-7 NICs: 4k GPUs. Google has A3 GPU instances treated like supercomputers and uses "Apollo" optical circuit switching (OCS). "The OCS layer replaces the spine layer in a leaf/spine Clos topology" -> interesting to see what that means and looks like. As well, it uses NVSwitch to interconnect the GPUs' memories so they act like one. And they have their own (smart) NICs (DPUs, data processing units, or IPUs, infrastructure processing units?) using P4. Google has its own "inter-server GPU communication stack" as well as NCCL optimizations (a 2016 post!).

Read 4: via the P4 newsletter. Since Intel bought Barefoot, I kind of assumed the product was nearly dead, but visiting the page and checking these slides, it seems "alive". SONiC + P4 are main players in Google's SDN.

 “Google has pioneered Software-Defined Networking (SDN) in data centers for over a decade. With the open sourcing of PINS (P4 Integrated Network Stack) two years ago, Google has ushered in a new model to remotely configure network switches. PINS brings in a P4Runtime application container to the SONiC architecture and supports extensions that make it easier for operators to realize the benefits of SDN. We look forward to enhancing the PINS capabilities and continue to support the P4 community in the future”

Read 5: Slingshot is another switching technology, coming from Cray supercomputers and trying to compete with InfiniBand. A 2019 link that looks interesting too. And a paper that I don't think I will be able to read, or understand.

Read 6: ISC High Performance 2023. I need to try to attend one of these events in the future. There are two interesting talks, although I doubt they will provide any online videos or slides.

Talk1: Intro to Networking Technologies for HPC: “InfiniBand (IB), High-speed Ethernet (HSE), RoCE, Omni-Path, EFA, Tofu, and Slingshot technologies are generating a lot of excitement towards building next generation High-End Computing (HEC) systems including clusters, datacenters, file systems, storage, cloud computing and Big Data (Hadoop, Spark, HBase and Memcached) environments. This tutorial will provide an overview of these emerging technologies, their offered architectural features, their current market standing, and their suitability for designing HEC systems. It will start with a brief overview of IB, HSE, RoCE, Omni-Path, EFA, Tofu, and Slingshot. In-depth overview of the architectural features of IB, HSE (including iWARP and RoCE), and Omni-Path, their similarities and differences, and the associated protocols will be presented. An overview of the emerging NVLink, NVLink2, NVSwitch, Slingshot, Tofu architectures will also be given. Next, an overview of the OpenFabrics stack which encapsulates IB, HSE, and RoCE (v1/v2) in a unified manner will be presented. An overview of libfabrics stack will also be provided. Hardware/software solutions and the market trends behind these networking technologies will be highlighted. Sample performance numbers of these technologies and protocols for different environments will be presented. Finally, hands-on exercises will be carried out for the attendees to gain first-hand experience of running experiments with high-performance networks”

Talk2: State-of-the-Art High Performance MPI Libraries and Slingshot Networking: “Many top supercomputers utilize InfiniBand networking across nodes to scale out performance. Underlying interconnect technology is a critical component in achieving high performance, low latency and high throughput, at scale on next-generation exascale systems. The deployment of Slingshot networking for new exascale systems such as Frontier at OLCF and the upcoming El-Capitan at LLNL pose several challenges. State-of-the-art MPI libraries for GPU-aware and CPU-based communication should adapt to be optimized for Slingshot networking, particularly with support for the underlying HPE Cray fabric and adapter to have functionality over the Slingshot-11 interconnect. This poses a need for a thorough evaluation and understanding of slingshot networking with regards to MPI-level performance in order to provide efficient performance and scalability on exascale systems. In this work, we delve into a comprehensive evaluation on Slingshot-10 and Slingshot-11 networking with state-of-the-art MPI libraries and delve into the challenges this newer ecosystem poses.”

Read 7: slides and video. I was aware of DTrace (although I never used it), so I am not sure how it compares with KUtrace. I guess I will ask ChatGPT 🙂

Read 8: Python as the programming language of choice for AI, ML, etc.

Read 9: M$ “buying” energy from fusion reactors.

Mobile Phone + SSH server

I have tried many times to connect my mobile phones to my laptop. It always looks easy if you use M$, but with Linux I always fail; I can't get MTP to work. Now I really want to take all my pictures from a phone, be able to back them up, and transfer them to a new one. I don't want to use cloud services or tools from the manufacturers; I want to use old-school methods. So after struggling for some time, I somehow decided to use something as old school as SSH/SCP. Android is based on Linux, isn't it? So I searched for a free SSH server app and found this one. And it worked! I managed to understand it, created my user and my mount points, enabled it… and was able to SCP all my photos from my mobile to my laptop. It worked with Samsung and Huawei.
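For reference, the pull boils down to one scp command; here it is wrapped in a small Python helper. The IP, port, user and the DCIM path are just placeholders from my setup, not anything the SSH server app mandates:

```python
# Sketch of backing up phone photos over the phone's SSH server with scp.
# Host, port, user and paths below are placeholders for my own setup.
import subprocess

def build_scp_cmd(host, port, user, remote_dir, local_dir):
    """Assemble the scp command: recursive copy from phone to laptop."""
    return [
        "scp", "-r",              # -r: copy the whole photo directory
        "-P", str(port),          # the SSH app let me pick a non-default port
        f"{user}@{host}:{remote_dir}",
        local_dir,
    ]

def pull_photos(host, port=2222, user="me",
                remote_dir="/storage/emulated/0/DCIM/Camera",
                local_dir="./phone-backup"):
    # check=True raises CalledProcessError if scp fails (bad auth, phone asleep...)
    cmd = build_scp_cmd(host, port, user, remote_dir, local_dir)
    return subprocess.run(cmd, check=True)
```

In practice I just ran the equivalent scp command by hand; the helper only makes the moving parts (port, user, directories) explicit.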

I am pretty sure that people nowadays have better ways to do this… but that's me.

Chocolate Fondant

Based on this video:


  • 130g dark chocolate (70%+)
  • 130g butter + extra for greasing
  • 3 eggs
  • 1 egg yolk
  • 100g caster sugar
  • 70g plain flour
  • 15g cocoa powder + extra for dusting
  • 2.5g baking powder
  • 4 aluminium moulds


  • Pre-heat oven at 180C
  • Melt the chocolate and butter in a bain-marie. Let it cool down a bit to use later.
  • Grease with butter the moulds and dust the sides with cocoa powder
  • Make the “raw sabayon”: whisk the eggs and sugar until pale in color.
  • Fold the chocolate (be sure it is not too hot) into the sabayon. Be sure the mix is uniform and there are no lumps.
  • Sieve flour, cocoa and baking powder. Then add to the mix bit by bit, folding with a spatula and checking there are no lumps.
  • Fill the moulds to approx. 90%. They will rise in the oven.
  • Bake the moulds at 180C for approx. 9 minutes.
  • Use a toothpick to be sure they are still creamy inside. The idea is that the chocolate should flow out once the fondant is opened.
  • Unmould and present on a plate with a bit of fresh mint and some strawberries. Dust with a bit of icing sugar.
  • Be sure you serve it hot! (optionally you can add a ball of vanilla ice cream).