Discord – 68 Bits – HC2023 – Google Huge – Terrapin

Discord Scale: I think I had read something about Elixir (and BEAM) before, so it was nice to see a successful product built with it, and how Discord has managed to keep pushing the scale of their platform. Everything is high level, but it gives you an idea.

68 Bits of advice: From Kevin Kelly

HotChips 2023: I received an email with all the presentations and videos. Some piqued my curiosity (although ALL of them are beyond my understanding).

  • Exciting Directions for ML Models and the Implications for Computing Hardware: video and pdf. A lot of focus on power consumption and reducing CO2. I am still struggling with the optical part. But it is interesting that they say they are going for liquid cooling and beyond Ethernet for the supercomputer.
  • Inside the Cerebras Wafer-Scale Cluster: video and pdf. I have read about Cerebras before so it was nice to read/see something directly from them.

They Made Google Huge: based on this link. In the Google presentation above, at the end, there are a lot of references about the authors. I think I read about it in the past, but it was nice to re-read it.

Terrapin: SSH vulnerability. I need to patch 🙁

BIND performance – LACP Troubleshooting – Chiplets – AI/HPC Networking – Spray ML/AI workloads

BIND: Interesting links about BIND performance and the lab setup. DNS is the typical technology that looks straightforward, but as soon as you dig a bit, it is a world in itself.

LACP: Interesting blog about troubleshooting details. As above, this is the typical tech that you take for granted to just work, but then you need to really understand how it works to troubleshoot it. So I learned a bit (although the blog is “old”).

Chiplets: Very good blog explaining how the industry ended up with chiplets. Interesting evolution, and a good touch to mention the network industry, not just CPU/GPU.

As the process node shrank, manufacturing became more complex and expensive, leading to a higher cost per square millimeter of silicon. Die cost does not scale linearly with die area. The cost of the die more than doubles with doubling the die area due to reduced yields (number of good dies in a wafer).

Instead of packing more cores inside a large die, it may be more economical to develop medium-sized CPU cores and connect them inside the package to get higher core density at the package level. These packages with more than one logic die inside are called multi-chip modules (MCMs). The dies inside the multi-chip modules are often referred to as chiplets.
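To make the yield argument concrete, here is a back-of-the-envelope sketch in Python, assuming a simple Poisson defect-yield model and made-up numbers (300 mm wafer, 0.1 defects/cm2, arbitrary wafer cost); these are not figures from the blog.

import math

WAFER_DIAMETER_MM = 300
DEFECTS_PER_CM2 = 0.1      # assumed defect density
WAFER_COST = 10_000        # assumed cost per wafer (arbitrary units)

def dies_per_wafer(die_area_mm2: float) -> float:
    # Classic approximation for how many dies fit on a round wafer.
    d = WAFER_DIAMETER_MM
    return (math.pi * (d / 2) ** 2) / die_area_mm2 - (math.pi * d) / math.sqrt(2 * die_area_mm2)

def die_yield(die_area_mm2: float) -> float:
    # Poisson yield model: a bigger die is more likely to catch a defect.
    return math.exp(-DEFECTS_PER_CM2 * die_area_mm2 / 100)

def cost_per_good_die(die_area_mm2: float) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * die_yield(die_area_mm2)
    return WAFER_COST / good_dies

for area in (200, 400, 800):   # doubling the die area each step
    print(f"{area} mm^2 -> ~{cost_per_good_die(area):.0f} per good die")

With these assumed numbers, doubling the die area from 200 mm2 to 400 mm2 multiplies the cost per good die by roughly 2.6x, which is the “more than doubles” effect the quote describes.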

AI/HPC Networking: Nice summary of AI vs HPC and what each hyperscaler and vendor is doing. For me, the most interesting part is how to get proper load balancing of flows, like AWS SRD does. Every network vendor or software stack should aim for that as a standard. I guess it is not easy.

High performance requirements can create vendor lock-in. It doesn't matter if it is IB or Ethernet, so pick your evil.

Spray ML/AI workloads: Related to the load balancing point above, this is an interesting article about how to get load balancing for ML workloads when the traffic is basically one elephant flow. You need adaptive routing in your fabric/switches, NICs that support it, and support from your code/library.
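As a toy illustration (not from the article): with flow-based ECMP hashing, every packet of a single elephant flow lands on the same uplink, while per-packet spraying (what adaptive routing enables, at the cost of reordering that the NIC/transport must handle) spreads it evenly. The flow tuple and link count below are made up.

import random
from collections import Counter

UPLINKS = 4
PACKETS = 100_000

def ecmp_link(five_tuple) -> int:
    # Flow-based ECMP: all packets of the same 5-tuple hash to one link.
    return hash(five_tuple) % UPLINKS

def spray_link() -> int:
    # Per-packet spraying: any packet can take any link.
    return random.randrange(UPLINKS)

elephant = ("10.0.0.1", "10.0.0.2", 49152, 4791, "UDP")  # one big RoCEv2-style flow

ecmp = Counter(ecmp_link(elephant) for _ in range(PACKETS))
spray = Counter(spray_link() for _ in range(PACKETS))

print("ECMP  per-link packets:", dict(ecmp))   # everything on a single link
print("Spray per-link packets:", dict(spray))  # roughly even across 4 links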

AWS Intent-Driven 2023 – Groq – Graviton4 – Liquid Cooling – Petals – Google – Crawler – VAX – dmesg

AWS re:Invent Intent-Driven Network Infra: Interesting video about intent-driven networking in AWS. This is the paper he shows in the presentation. Same notes as last year: leaf-spine, pizza boxes, all home-made. The development of SIDR as the control plane for scale. There is also some talk about UltraCluster for AI (20k+ GPUs); maybe that is related to this NVIDIA-AWS collaboration. Interesting that there is no mention of QoS; he said no oversubscription. In general, everything is high level and done in-house, and very likely they are facing problems that very few companies in the world face. Still, it would be nice if they opened up all those techs (like Google has done, although never for network infra). As well, I think he hits the nail on the head when he redefines himself from Network Engineer to Technologist, because at the end of the day you touch all topics.

AWS backbone: No chassis, all pizza boxes

Graviton4: More ARM chips at cloud scale

Groq: I didn't know about this “GPU” alternative. Interesting numbers. Let's see if somebody buys it.

Petals: Run LLMs BitTorrent-style!

Google view after 18 years: Very nice read about the culture shift in the company, from “don't be evil” to making lots of money at any cost.

GPT-Crawler: The downside is that you need the paid version of ChatGPT. I wonder: if I crawled Cisco, Juniper, and Arista, would that be nearly all the network knowledge on the planet? Assuming the crawler can get ALL that data.

Linux/VAX porting: Something that I want to keep (ATP).

dmesg -T: How many times (over even more years!!!!) have I wondered how to turn those timestamps into something I could compare with when debugging.
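For the record, this is roughly the conversion dmesg -T does under the hood: raw dmesg timestamps are seconds since boot, so you add them to the boot time (now minus uptime). A Linux-only sketch with a made-up log line; note that suspend/resume skews the result, which is a known caveat of dmesg -T as well.

import re
import time
from datetime import datetime

# Boot time = current time - seconds of uptime (from /proc/uptime).
with open("/proc/uptime") as f:
    uptime = float(f.read().split()[0])
boot_time = time.time() - uptime

line = "[12345.678901] eth0: link up"  # example raw dmesg line (made up)
match = re.match(r"\[\s*(\d+\.\d+)\]\s*(.*)", line)
if match:
    secs, msg = float(match.group(1)), match.group(2)
    wallclock = datetime.fromtimestamp(boot_time + secs)
    print(f"[{wallclock:%a %b %d %H:%M:%S %Y}] {msg}")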

VimGPT – Maia AI – Mirai – Reptar – Mellanox Debian – RISC-V DC – Mojo – Moore's Law

VimGPT: Very interesting project. I haven't used it, but thinking aloud, could you use it to interact with sites that don't have an API (couriers)? I think with Selenium you can do things like that.
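For the “no API” courier case, I am thinking of something like this minimal Selenium sketch; the site URL and element names are hypothetical, and it needs a local chromedriver.

from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical courier tracking page with no API; element names are made up.
driver = webdriver.Chrome()  # requires chromedriver installed locally
try:
    driver.get("https://courier.example/track")
    box = driver.find_element(By.NAME, "tracking_number")
    box.send_keys("AB123456789")
    box.submit()
    status = driver.find_element(By.ID, "delivery-status").text
    print(f"Parcel status: {status}")
finally:
    driver.quit()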

Maia AI: Cloud providers like to be masters of their own destiny, so they try to build as many things by themselves as possible. Now MS has developed its own GPU for AI. The custom rack they had to build, with the sidekick for cooling down the new chips, is quite interesting. There are not many figures about the chip (5 nm, 105B transistors) to compare it with other things on the market.

Reptar: New Intel CPU vulnerability. It looks like it comes from a feature of the Ice Lake architecture. It looks like you can crash the cores but not yet take them over. Still interesting.

I am not affected 🙂 (no fsrm flag in my CPU):

$ grep fsrm /proc/cpuinfo
$

Mellanox with Debian: Interesting how you can install a nearly standard Debian on a Mellanox SN2700 switch.

RISC-V into the datacenter: Happy to see RISC-V chips in the datacenter. But it is not clear who is going to use them.

Mirai history: I think most Wired articles read like a Hollywood movie 🙂 Although 2016 security issues are “old school”, it is still interesting how teenagers got that far.

Mojo: Interesting because of the people behind it… really impressive.

Moore's Law analysis: I liked the part about networks, which is not commonly mentioned in this type of analysis.

AusNOG 2023

Nice NOG meeting:

Vendor Support API: Interesting how Telstra uses the Juniper TAC API to handle power supply replacements. I was surprised that they are able to get the RMA and just try the replacement; if they don't need the part, they send it back… That saves Telstra time for sure. The problem I can see is when you need to open tickets for inbound/outbound deliveries at the datacenters, which don't have any API at all. If datacenters and the big courier companies had APIs as first-class citizens, incredible things could happen. Still, just being able to have zero-touch replacement for power supplies is a start.
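Conceptually, the zero-touch RMA piece could look like the sketch below: a generic REST call against a hypothetical endpoint and payload, not the real Juniper TAC API schema.

# Hedged sketch: open an RMA automatically when a PSU alarm fires.
# The endpoint, payload, and token are hypothetical placeholders; the real
# Juniper TAC API has its own schema and authentication.
import requests

def open_psu_rma(serial_number: str, site: str, token: str) -> str:
    payload = {
        "product_serial": serial_number,
        "problem": "PSU failure detected by telemetry",
        "ship_to": site,                       # DC address for the spare
    }
    resp = requests.post(
        "https://api.vendor.example/v1/rma",   # hypothetical URL
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["rma_id"]               # hypothetical response field

# The missing piece is the other side: the datacenter/courier inbound
# delivery ticket, which today still has no API.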

No Packet Behind – AWS: I think until past the first 30 minutes there is nothing new that hasn't been published at other NOG meetings during 2022 and 2023. At least they mention the name of the latest fabric, Final Cat. They also mention issues with the IPv6 deployment.

There are other interesting talks but without video, so the PDF alone doesn't really give me much (like the AWS live premium talk).

Wistron

I had never heard of Wistron until I reached this page. Maybe it is because:

Wistron typically only sells to hyper-scalers

I guess the hyperscalers put their own NOS on top? Anyway, the model with 16 ports for Optical SN (that is 4x400G per port) is quite interesting.

Networking Scale 2023

This is a conference about networks that I was interested in, and I finally got some emails with the presentations. They are mainly from Meta.

Meta’s Network Journey to Enable AI: video – the second part is interesting.

  • AI fabric (backend: gpu to gpu) hanging from DC fabric.
  • SPC (Space, Power, Cooling)
  • Fiber, Automation
  • RDMA requires lossless, low-latency, in-order delivery -> RoCEv2 (Ethernet) or IB
  • Servers have 8x400G to the TOR. TOR 400G to spines.
  • 1xAI zone per DH. 1xDC has several DHs.
  • Oversubscribed between zones, eBGP, ECMP.

Scaling RoCE Networks for AI Training: video — Really really good.

  • RDMA/IB has been used for a long time in research.
  • Training: learning a new capability from existing data (focus of the video)
  • Inference: Applying this capability to new data (real time)
  • Distributed training for complex models. GPU to GPU sync -> High BW and low/predictable latency.
  • RoCEv2 with (tuned) PFC/ECN. TE + ECMP (flow multiplexing)
  • Oversubscription is fine in spine (higher layer)
  • Challenges: load balancing (elephant flows), slow receivers/back pressure, packet loss from L1 issues (those flapping links, faulty optics, cables, etc. xD), debugging (finding job failures)

Traffic Engineering for AI Training Networks: video – both parts are interesting.

  • Non-blocking. RTSW=TOR. CTSW=Spine. Fat-Tree Architecture. 2xServer per rack. 1xserver=8xGPU. CTSW=16 downlinks -> 16 uplinks. Up to 208 racks?
  • RoCE since 2020. CTSWs are high-radix, deep-buffer switches.
  • AI Workload challenges: low entropy (flow repetitive, predictable), bursty, high intensity elephant flows.
  • SW based TE: dynamic routing adapted on real-time. Adaptive job placement. Controller (stateless)
  • Data plane: Overlay (features from Broadcom chips) and Underlay (BGP)
  • Flow granularity: nic to host flow.
  • Handle network failures with minimum convergence time. Backdoor channel with an in-house protocol.
  • Simulation platform. NCCL benchmark.

Networking for GenAI Training and Inference Clusters: video – Super good!

  • Recommendation Model: training 100 GFlops/iteration. Inference: a few GFlops/s for 100 ms latency.
  • LLM: training 1 PetaFlop/sentence (3 orders of magnitude > recommendation), inference: 10 PF/s for 1 sec time-to-first-token. 10k+ GPUs for training. Distributed inference. Needs compute too.
  • LLama2 70B -> 1.7M GPU hours. IB 200G per GPU, 51.2 TB/s bisection BW. 800 ZettaFlops. 2-trillion-token dataset. 2k A100 GPUs. RoCEv2 was also used (for LLama2 34B).
  • +30 ExaFlops (30% of H100 GPUs fp8 peak) + LLama65B training < 1day.
  • Massive cluster: 32k GPUs! Model Parallelism.
  • LLM inference: dual-edge problem. Prefill large messages (High BW) + Decode small messages (latency sensitive).
  • Scale out (-bw, large domain. Scalable RDMA (IB or Ethernet), data parallel traffic) + Scale up (+BW, smaller domain. NVLink 400G, model parallel traffic)
  • 32k GPU. TOR (252), Spine (18), AGG (18). 3 levels. Oversubscription Spine-Agg 7:1. 8 clusters. 252 racks per cluster. 16 GPUs per rack (8x252x16=32k GPUs). ROCEv2!
  • Model Parallelism is harder for computation. Model Parallel traffic: all-reduce/all-to-all, big messages (inside the cluster = scale-up, NVLink). Data Parallel traffic: all-gather & reduce-scatter (between clusters = scale-out).
  • Challenges: Latency matters more than ranking. Reliability !!!!!
  • LLM inference needs a fabric.

Scale up vs scale out (e.g., a storage DB):

Scale up (vertical): more BW (links), more storage, etc. in the same device.

Scale out (horizontal): distribute the load across different devices.

Network Observability for AI/HPC Training Workflows: video

  • ROCET: Automating RDMA metric collection and analysis for GPU training. Info from hosts/NICs and switches.
  • Report: out-of-sequence packets, NIC flaps, local ACK timeouts.
  • PARAM + PyTorch. Chakra.