Websocket troubleshooting

Today I had to troubleshoot a websocket issue. I had never dealt with this before. I was told that the HAProxy config was fine and that it had to be our NGFW doing something nasty at L7.

The connection directly from my PC to the server running the websocket service was fine, but due to some requirement we need to put that server behind HAProxy. From my PC, the connection through the HAProxy that “proxies” the websocket service failed…

Funnily enough, HAProxy and the websocket service were running on the same host.

As usual, I took a look at the firewall logs. Nothing wrong there at first sight. Then I took a tcpdump from my PC while connecting to the websocket service and to the HAProxy.

The service is very verbose and difficult to follow in the capture files, as it spawns several connections. I went for the easy part first: the capture towards HAProxy was showing a lot of TCP retransmissions… The other trace, to the websocket service, was pretty clean.

Taking into account that the path from my PC to the HAProxy server is always the same (and I was going through a VPN), I figured it had to be either an NGFW issue or something between HAProxy and the websocket service (which is a localhost connection).

I was also seeing weird things latency-wise. Some TCP resets were taking more than 200ms to arrive at the server when the average RTT was 3ms.

I tried to take a tcpdump between the HAProxy service and the websocket service, just in case the packet loss was being caused locally. The capture was chaos to follow; I realised I had to understand HAProxy sessions better first.

I changed direction and went to the NGFW, where I created a rule that disabled all the fancy security checks for my traffic to the HAProxy server. I wanted to be sure the firewall was innocent.

It was. Same issue. I tried different browsers and always got the same result.

So I was nearly sure the problem was in HAProxy, but I had to prove it. Since I had kind of failed at checking the backend connection (HAProxy to websocket service), I took another look at the trace from my PC to HAProxy. I was quite frustrated: so many connections were opened, and then retransmissions started happening, that I couldn't really see any problem.

Luckily, I noticed that in the good trace (the one going directly to the websocket service) I could see an HTTP GET request for “socket” from my PC. Keep in mind that I had no idea how websocket works. I tried to find a similar request in the HAProxy trace, and I saw the problem…

Rejected HTTP GET socket request

and this is a good connection:

Successful HTTP GET socket request

So in the end, HAProxy was at fault (though we don't know how to fix it yet) and my firewall, for once, is innocent.

To summarise: I got overwhelmed by the TCP retransmissions. I was lucky to spot the “GET socket” request and assumed that had to be how the websocket connection gets established, but I should have started by investigating how a websocket connection is actually set up. Also, I didn't manage to find the HAProxy logs; I am pretty sure I would have found the same answer there, so I need to learn to check that.
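
For the record, the websocket handshake is just an HTTP GET with Upgrade headers, which the server must answer with “101 Switching Protocols”. You can test it by hand with curl; the host and path below are made-up placeholders:

$ curl -v --no-buffer \
    -H "Connection: Upgrade" \
    -H "Upgrade: websocket" \
    -H "Sec-WebSocket-Version: 13" \
    -H "Sec-WebSocket-Key: AAAAAAAAAAAAAAAAAAAAAA==" \
    http://haproxy-host/socket

A healthy endpoint replies “HTTP/1.1 101 Switching Protocols”; anything else (like the rejection above) means something on the path is breaking the upgrade.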

I learned something new. As usual, it came neither easy nor quick 🙂

Bloom filters, profiling, performance numbers

One more cloudflare blog that I had in the to-read list:

https://blog.cloudflare.com/when-bloom-filters-dont-bloom

I had never heard about Bloom filters, so both the concept and their actual uses were interesting:

https://en.wikipedia.org/wiki/Bloom_filter#Examples

I like his point about choosing ‘m’, the number of bits in the bit array, to be a power of two (the modulo operation becomes a bitwise AND):

https://stackoverflow.com/questions/41183935/why-does-gcc-use-multiplication-by-a-strange-number-in-implementing-integer-divi
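
A quick sanity check of that trick in bash, assuming m = 1024 (so m - 1 = 1023):

$ echo $(( 987654321 % 1024 ))   # modulo
177
$ echo $(( 987654321 & 1023 ))   # bitwise AND with m-1: same result
177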

But in the end, it is not all about Bloom filters. It is about understanding how things work under the hood and seeing whether they actually deliver; if not, you should change your approach. So the debugging section “A secret weapon – a profiler” is very good. Profiling is not one of my strengths, so these are the tools I need to understand and use more often:

strace -cf
perf stat -d
perf record
perf report | head -n 20
perf annotate process_line --source
google-perftools with kcachegrind
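
As a reminder to myself, this is roughly how those fit together on a hypothetical binary ./myapp (the binary name is made up; process_line is the function from Marek's post):

$ strace -cf ./myapp                    # count syscalls, including child processes
$ perf stat -d ./myapp                  # IPC, cache misses and other CPU counters
$ perf record -g ./myapp                # sample the run into perf.data
$ perf report | head -n 20              # top functions by CPU samples
$ perf annotate process_line --source   # per-source-line cost of one hot function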

He also references the performance numbers that are good to keep in mind:

http://highscalability.com/blog/2011/1/26/google-pro-tip-use-back-of-the-envelope-calculations-to-choo.html

So I keep a copy here:

  • L1 cache reference 0.5 ns
  • Branch mispredict 5 ns
  • L2 cache reference 7 ns
  • Mutex lock/unlock 100 ns
  • Main memory reference 100 ns
  • Compress 1K bytes with Zippy 10,000 ns
  • Send 2K bytes over 1 Gbps network 20,000 ns
  • Read 1 MB sequentially from memory 250,000 ns
  • Round trip within same datacenter 500,000 ns
  • Disk seek 10,000,000 ns
  • Read 1 MB sequentially from network 10,000,000 ns
  • Read 1 MB sequentially from disk 30,000,000 ns
  • Send packet CA->Netherlands->CA 150,000,000 ns 

Things to keep in mind:

  • Notice the magnitude differences in the performance of different options.
  • Datacenters are far away so it takes a long time to send anything between them.
  • Memory is fast and disks are slow.
  • By using a cheap compression algorithm, a lot of network bandwidth (a factor of 2) can be saved.
  • Writes are 40 times more expensive than reads.
  • Global shared data is expensive. This is a fundamental limitation of distributed systems. The lock contention in shared heavily written objects kills performance as transactions become serialized and slow.
  • Architect for scaling writes.
  • Optimize for low write contention.
  • Optimize wide. Make writes as parallel as you can.
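
A quick back-of-the-envelope exercise with these numbers, using shell arithmetic (1 GB = 1000 reads of 1 MB):

$ echo "$(( 1000 * 250000 / 1000000 )) ms to read 1 GB sequentially from memory"
250 ms to read 1 GB sequentially from memory
$ echo "$(( 1000 * 30000000 / 1000000000 )) s to read 1 GB sequentially from disk"
30 s to read 1 GB sequentially from disk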

The “lessons learned” section is also a great summary of his journey.

  • Sequential memory access great / Random memory access costly -> cache prefetching
  • Advanced data structures to fit in L3: optimize for a reduced number of loads rather than for the amount of memory used.
  • CPU hits the memory wall

So another great post from Marek.

Linux network monitoring

I use gkrellm as my Linux monitoring app. I have used it since I started, but one thing I miss is knowing which app and which destination IPs are causing a traffic spike on my laptop.

Searching a bit, I came up with a page listing several tools.

Based on my requirement, it seems I need two apps.

  • nethogs: For finding out the process triggering the traffic spike
  • pktstat: For finding out the IPs involved.

Now it is a case of remembering the commands 🙂 But as far as I have tested, it seems they can do the job.
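
A minimal sketch of how I run them (wlp2s0 is my wireless interface; swap in yours):

$ sudo nethogs wlp2s0      # live bandwidth per process
$ sudo pktstat -i wlp2s0   # live traffic per connection (IP pairs)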

ZFS Basics

A couple of weeks ago at work, the sysadmin guys were working on some ZFS issues. They were talking about ZIL and ARC, and I had no idea what those were.

I always wanted to run ZFS, so I think in early 2019 I configured my laptop to use it, not on the root partition but on a separate one. I had to configure my Debian Testing to support ZFS (I don't remember it being very difficult) and then back up some data to make room for my new ZFS partition.

For ZFS basics you can follow the link below, but there are many good tutorials to be found with your favourite search engine:

In my case it is a laptop, so I just have one pool, based on my LV “storage”. I think this was the command I used:

# zpool create -m /home/username/storage storage /dev/mapper/laptop--vg-storage

That would give me the following:

# zpool status
  pool: storage
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
	still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
	the pool may no longer be accessible by software that does not support
	the features. See zpool-features(5) for details.
  scan: scrub repaired 0B in 0 days 00:10:39 with 0 errors on Sun Jan 12 00:34:40 2020
config:

	NAME                  STATE     READ WRITE CKSUM
	storage               ONLINE       0     0     0
	  laptop--vg-storage  ONLINE       0     0     0

errors: No known data errors
# 

And that is mounted where I requested:

$ df -hT | grep zfs
storage        zfs       176G   73G  103G  42% /home/username/storage

This is very basic; in most cases you will want some kind of RAID, but again, this is a simple laptop. You can also configure snapshots (useful if you want to roll back a server upgrade that involves a huge amount of data, as sketched below) and other performance parameters (as per the document below):

https://www.percona.com/live/17/sites/default/files/slides/pl17_ZFS_MySQL_Salesforce_0.pdf
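
A minimal snapshot/rollback sketch on my pool (the snapshot name is made up):

# zfs snapshot storage@before-upgrade   # cheap point-in-time copy
# zfs list -t snapshot                  # see which snapshots exist
# zfs rollback storage@before-upgrade   # undo everything since the snapshot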

So once you have your ZFS configured and mounted, you can work with it as usual.

So, back to the ZIL and ARC. Based on the link below:

https://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/
  • ZFS Intent Log, or ZIL, to buffer WRITE operations.
  • ARC and L2ARC, which are meant for READ operations.
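
On ZFS-on-Linux you can at least peek at the ARC in action through procfs; a quick look at the size and hit/miss counters:

# grep -wE 'size|hits|misses' /proc/spl/kstat/zfs/arcstats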

On my laptop I don't have any space left to play with this, so I can only check it on my employer's systems.

TCP Thin-Stream Modifications

I have read the article below and I am going to give it a go on my laptop:

https://www.simula.no/file/lj-219-jul-2012pdf/download

First, check the status of tcp thin:

# sysctl net.ipv4.tcp_thin_linear_timeouts
net.ipv4.tcp_thin_linear_timeouts = 0

I have realised that I don't have “/proc/sys/net/ipv4/tcp_thin_dupack” as the article mentions… (it seems that sysctl was removed in newer kernels).

OK, let's update the value and make sure it is still active after a reboot.

Enable the value:

# echo "1" > /proc/sys/net/ipv4/tcp_thin_linear_timeouts
# sysctl net.ipv4.tcp_thin_linear_timeouts
net.ipv4.tcp_thin_linear_timeouts = 1

Make it permanent by editing /etc/sysctl.conf like this:

# Based on https://www.simula.no/file/lj-219-jul-2012pdf/download
# enabling TCP thin-stream modifications to reduce latency in interactive apps
net.ipv4.tcp_thin_linear_timeouts = 1
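
To apply the file without rebooting, reload it with sysctl:

# sysctl -p
net.ipv4.tcp_thin_linear_timeouts = 1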

Now it is time to test and see if you see any improvement or degradation!

Microarchitecture

This week I attended a webinar from Alex Blewitt about using CPU microarchitecture to increase application performance. The link was sent by a work colleague, but you can get the PDF and watch the presentation from the source below:

https://www.infoq.com/presentations/microarchitecture-modern-cpu

The presentation is obviously very interesting. You need to know your CPU to get the most out of it.

How can I see my CPU topology? “lstopo” (graphical) or “lstopo-no-graphics” is your friend. This is from my laptop:

# lstopo-no-graphics
Machine (7934MB total)
  Package L#0
    NUMANode L#0 (P#0 7934MB)
    L3 L#0 (4096KB)
      L2 L#0 (256KB) + L1d L#0 (32KB) + L1i L#0 (32KB) + Core L#0
        PU L#0 (P#0)
        PU L#1 (P#2)
      L2 L#1 (256KB) + L1d L#1 (32KB) + L1i L#1 (32KB) + Core L#1
        PU L#2 (P#1)
        PU L#3 (P#3)
  HostBridge
    PCI 00:02.0 (VGA)
    PCIBridge
      PCI 02:00.0 (Network)
        Net "wlp2s0"
    PCI 00:1f.2 (SATA)
      Block(Disk) "sda"
  Misc(MemoryModule)
  Misc(MemoryModule)
#

As you can see, my humble laptop has just one NUMA node, with two cores and two hardware threads (PUs) per core.

But on a server you will very likely have more NUMA nodes, more sockets and more cores, so you want to be sure they are used properly.

I am no expert in CPU performance at all, but there are many important points: memory allocation, huge pages, pinning memory/threads (isolcpus, taskset, etc.), compiler strategies and tools to test performance. Some of them ring a bell, and it is nice to know they exist. You never know when you will have to dive into these waters.
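
Pinning is the one I actually touch from time to time; a quick sketch (./myapp and PID 1234 are placeholders):

$ taskset -c 0 ./myapp        # launch a process pinned to CPU 0
$ taskset -cp 1234            # show the CPU affinity of a running PID
$ sudo taskset -cp 0,2 1234   # move that PID onto CPUs 0 and 2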

LVM 101 + Linux disk encryption

One more post from Cloudflare. I think most Linux distributions already offer transparent disk encryption by default. As far as I can see on my Debian, I have encryption with LVM. I need to write a post about LVM, as I always have to google the most basic commands. “Logical Volume Manager” (LVM) is an abstraction layer for managing storage (maybe too basic an explanation, but that is how I understand it). When I built my laptop, I had the option (I think it was the default) to choose LVM + encryption (dm_crypt module), so I took that.

So first, how do I check my LVM? Well, “df -hT” will give the first clues:

# df -hT
Filesystem                  Type     Size  Used Avail Use% Mounted on
udev                        devtmpfs 3.9G     0  3.9G   0% /dev
tmpfs                       tmpfs    794M  2.7M  791M   1% /run
/dev/mapper/laptop--vg-root ext4      24G   17G  6.3G  73% /
tmpfs                       tmpfs    3.9G  414M  3.5G  11% /dev/shm
tmpfs                       tmpfs    5.0M  8.0K  5.0M   1% /run/lock
tmpfs                       tmpfs    3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sda2                   ext2     237M  155M   70M  69% /boot
/dev/sda1                   vfat     496M   60M  437M  13% /boot/efi
/dev/mapper/laptop--vg-home ext4      20G  9.9G  8.7G  54% /home
tmpfs                       tmpfs    794M   24K  794M   1% /run/user/1000

You see things with “/dev/mapper” and “vg” (volume group), so you have LVM running.

Some basic LVM notes:

# pvs –> shows the physical disks, partitions, etc. used in your LVM setup and the VGs they belong to (“pvs” lists the physical volumes). In my case, only partition sda3 of my physical disk is part of LVM. Physical volumes are used to create volume groups.

# pvs
  PV                     VG        Fmt  Attr PSize   PFree
  /dev/mapper/sda3_crypt laptop-vg lvm2 a--  237.73g <2.62g

# vgs –> shows the volume groups in your system, the number of PVs they use and the number of LVs they provide (“vgs” lists the volume groups). In my case I have just one VG, which uses 1 PV and provides 4 LVs.

# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  laptop-vg   1   4   0 wz--n- 237.73g <2.62g

# lvs –> shows the logical volumes you have created from a VG. In my case, I have four LVs.

# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home    laptop-vg -wi-ao----  22.00g
  root    laptop-vg -wi-ao----  24.31g
  storage laptop-vg -wi-ao---- 182.00g
  swap_1  laptop-vg -wi-ao----   6.80g
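
And the commands I always have to google are the ones that change things; a rough sketch (the LV name “scratch” is made up):

# lvcreate -L 10G -n scratch laptop-vg             # carve a new 10G LV out of the VG
# lvextend -L +5G --resizefs /dev/laptop-vg/home   # grow an LV and its filesystem in one go
# lvremove /dev/laptop-vg/scratch                  # remove the LV again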

BTW, how can I see all the partitions in my machine? “fdisk -l”:

root@athens:/boot# fdisk -l
Disk /dev/sda: 238.49 GiB, 256060514304 bytes, 500118192 sectors
Disk model: NISU SSD ALLI
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: TRALARI-TRALARI-TRALARI-TRALARI

Device       Start       End   Sectors   Size Type
/dev/sda1     2048   1050623   1048576   512M EFI System
/dev/sda2  1050624   1550335    499712   244M Linux filesystem
/dev/sda3  1550336 500117503 498567168 237.8G Linux filesystem

So based on our “pvs” output we know “/dev/sda3” is part of LVM. How is the encryption happening? The partition type will tell us:

# blkid /dev/sda3
/dev/sda3: UUID="f6263aee-3966-4c23-a4ef-b4d9916f1a07" TYPE="crypto_LUKS" PARTUUID="b224eb49-1e71-4570-8b62-fb38df801170"
#

So “crypto_LUKS” is the key: our LVM is running on top of an encrypted partition.
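
If you want more detail on that encryption layer, cryptsetup can show it (sda3_crypt is the mapper name from the pvs output above):

# cryptsetup status sda3_crypt    # cipher, key size and underlying device of the mapping
# cryptsetup luksDump /dev/sda3   # LUKS header: cipher, key slots, etc.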

So after this detour, let's go back to the Cloudflare post about Linux disk encryption.

I really enjoyed the forensic work of trying to discover when and why changes in the Linux kernel code (!) happened and how they affected speed. BTW, I crashed my laptop trying to run their tests!

https://blog.cloudflare.com/speeding-up-linux-disk-encryption

Iptables Conntrack

I am subscribed to the Cloudflare blog as their posts are in general really good. And you definitely always learn something new (and want to cry because you have so much to learn from these guys).

This time it was a dissection of conntrack in iptables to improve their firewall performance:

https://blog.cloudflare.com/conntrack-tales-one-thousand-and-one-flows

I never thought about the limits of the conntrack table, nor about how important it is to keep in mind (or get a tattoo of) the iptables packet-flow diagram.
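
Those table limits are easy to check on any Linux box; counting entries against the maximum:

$ sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
$ sudo conntrack -L | head    # list tracked flows (from the conntrack-tools package)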

SSH Keys

I already use RSA SSH keys to access my VPS, but a friend of mine sent me a link about the Ed25519 public-key algorithm. But why SSH keys? Mainly to avoid typing your password every single time.

https://medium.com/risan/upgrade-your-ssh-key-to-ed25519-c6e8d60d3c54

I will not explain the maths behind it because I can't (though I would love to understand them), so Wikipedia can do a better job (and in the meantime, think of donating a few bucks 🙂):

https://en.wikipedia.org/wiki/EdDSA

If you still want to generate RSA keys (you can have both), this is my go-to link:

https://www.cyberciti.biz/faq/how-to-set-up-ssh-keys-on-linux-unix/

Summary, just in case the links disappear:

# Create your key, RSA or Ed25519

$ ssh-keygen -t rsa -f ~/.ssh/id_rsa4096 -b 4096 -C "user@origin"

or

$ ssh-keygen -o -a 100 -t ed25519 -f ~/.ssh/id_ed25519 -C "user@origin"

# Add your priv key into your ssh-agent so it is used when connecting to the destination

$ ssh-add ~/.ssh/id_xxx

# Copy your PUBLIC!!! key to the remote server you want to log in to with that key (so you don't need to type a password)

$ ssh-copy-id -i .ssh/id_xxx.pub user@remote_server

# Test your new ssh-key

$ ssh -i ~/.ssh/id_xxx user@remote_server

Linux Network Namespaces

At work we use a vendor whose Network Operating System (NOS) is based on Linux. I am a network engineer and I was troubleshooting an issue inside a VRF, where I couldn't use many of the normal commands from the default VRF. So I opened a ticket with the vendor and learned a bit about how the VRFs are implemented under the hood. Obviously (though not to me) they use Linux network namespaces, which I worked out after googling the meaning of the commands they sent. My search brought me to the following links.

This is a good intro:

https://blog.scottlowe.org/2013/09/04/introducing-linux-network-namespaces/

From this link, I took some examples during my quick search:

https://kashyapc.fedorapeople.org/virt/openstack/neutron/neutron-diagnostics.txt

In the end I used commands like these:

$ sudo ip netns list
$ sudo ip netns exec ns-INET ip link list
$ sudo ip netns exec ns-VRF1 arp -a
$ sudo ip netns exec ns-VRF1 route -n
$ sudo ip netns exec ns-VRF1 telnet -b src_ip dst_ip port
$ sudo ip netns exec ns-VRF1 tcpdump -i lo4 -nn tcp port 179
$ sudo ip netns exec ns-VRF1 ss --tcp --info
$ sudo ip netns exec ns-VRF1 ss --tcp --info -nt src IP

Also, “ss” is such a useful command for troubleshooting, and I always feel that I don't make the most of it.
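
A few invocations worth remembering (the filter expressions are part of ss's own syntax):

$ ss -s                        # summary of socket counts by type
$ ss -tnp state established    # established TCP sockets with the owning process
$ ss -tni '( dport = :443 )'   # TCP internals (rtt, cwnd, retrans) towards port 443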