Docker MTU + Docker tcpdump

I am troubleshooting an issue in a docker setup with some Arista cEOS where I can't ping inside a VRF. At first I thought it was an MTU issue, as when you use MPLS there is an extra tag in the L2 frame.

…But my pings weren’t that big.

Still, I wanted to increase the MTU, because that's the expected thing to do on your WAN links if you run MPLS and want your users in different VRFs to be able to use the full 1500 bytes.
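
As a quick sanity check on the numbers (each MPLS label is 4 bytes, and an L3VPN packet typically carries two of them, transport + VPN):

1500 bytes of customer IP packet + 2 x 4 bytes of MPLS labels = 1508 bytes

So the WAN links need an MTU of at least 1508 to carry a full 1500-byte customer packet; bumping to something like 1600 just gives comfortable headroom.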

After some searching, it seems you can change the default value using the config file, as per this link:

$ ip link show docker0
9: docker0: mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:be:73:8c:d3 brd ff:ff:ff:ff:ff:ff
$ cat /etc/docker/daemon.json
{
"data-root": "/home/somebody/storage/docker",
"mtu": 1600
}
$ sudo service docker restart
..
$ ip link show docker0
9: docker0: mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:fb:c0:cf:a2 brd ff:ff:ff:ff:ff:ff

So I restarted docker, but docker0 still had MTU 1500. Checking another link, it seems I actually need to create a container so that the bridge comes up with the new value:

$ docker run -d busybox top
...
9: docker0: mtu 1600 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:fb:c0:cf:a2 brd ff:ff:ff:ff:ff:ff

Funny thing: once I started my lab again (using docker-topo), I still got MTU 1500!!!

I will have to dig a bit into why docker-topo doesn't take the docker MTU 1600 from the config file.

Solution: docker-topo creates user-defined bridges, so they need to be told explicitly that the MTU is different. The "mtu": 1600 in the docker config only applies to the default bridge, so when you start the busybox container it is attached to the default bridge and you see 1600.
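
So for a user-defined network the MTU has to be requested explicitly when the network is created. A minimal sketch using the bridge driver option (the network name mpls-lab is just an example):

$ docker network create --opt com.docker.network.driver.mtu=1600 mpls-lab
$ docker network inspect mpls-lab | grep -i mtu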

The other thing I was curious about was whether I could tcpdump the networks created by docker.

Yes, you can!

# docker network ls          # find the network you want to capture on
# ifconfig                   # its Linux bridge shows up as br-<network ID>
# tcpdump -i br-xxxx         # capture on that bridge
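
As a side note, iproute2 can list just the bridge interfaces, which is a quick way to see all the br-xxxx devices (and check they carry the MTU you expect):

# ip link show type bridge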

First step into OpenBSD

This week Job Snijders announced the latest version of OpenBSD. I have always dreamed of being a hacker (like in the movies), and the best guys when I was at uni were Linux users. I had no idea what Linux/Unix/BSD was at that time. In the end (by my 4th year at uni) I managed to install Linux on my Windows PC without destroying anything, and fortunately I have been using it ever since, learning more along the way and, for the last 6 years, using it every day at work too.

Still very very far away from being a hacker though 🙂

Over this time I have read a bit about the BSD vs Linux threads on licenses, security, etc., and I was always keen to learn a bit. At Motorola I had to use Solaris (I even managed to get a certification!).

So this week I tried to set up an OpenBSD 6.7 VM on my Debian laptop.

I found and followed this link, so all credit goes to the author.

First I downloaded OpenBSD 6.7 (install67.iso) from here; there are many mirrors around the world. Prepare the file:

/var/lib/libvirt/images# ls -ltr
total 1386064
-rwxr--r-- 1 libvirt-qemu libvirt-qemu 996671488 Apr 6 2018 debian-VAGRANTSLASH-stretch64_vagrant_box_image_9.1.0.img
-rwxr--r-- 1 libvirt-qemu libvirt-qemu 950796288 Nov 29 23:17 centos-VAGRANTSLASH-7_vagrant_box_image_1905.1.img
-rw-r--r-- 1 ss ss 470118400 May 21 23:23 install67.iso
root@athens:/var/lib/libvirt/images# chown libvirt-qemu.libvirt-qemu install67.iso
/var/lib/libvirt/images# mv install67.iso openbsd67.iso

Now start the installation:

/var/lib/libvirt/images# virt-install \
--name=openbsd \
--virt-type=kvm \
--memory=2048,maxmemory=4096 \
--vcpus=2,maxvcpus=2 \
--cpu host \
--os-variant=openbsd5.8 \
--cdrom=/var/lib/libvirt/images/openbsd67.iso \
--network=bridge=virbr0,model=virtio \
--graphics=vnc \
--disk path=/var/lib/libvirt/images/openbsd67.qcow2,size=40,bus=virtio,format=qcow2

Starting install…
Allocating 'openbsd67.qcow2' | 40 GB 00:00:01

Something that confused me was that I was installing OpenBSD 6.7 but the os-variant in the command had to be openbsd5.8; anything else fails.
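
By the way, if you want to see which variants your virt-install actually knows about, something like this should do it (assuming the libosinfo tools are installed):

$ osinfo-query os | grep -i openbsd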

In my setup I have virt-viewer installed, so it opened up and I finished the installation using that.

I was surprised how quick everything was, and I didn't hit any problems.

Once I logged in, I felt useless 🙂 I used the shell a bit and tested that I could ssh from my host PC to the OpenBSD VM.

So now I can find an "OpenBSD for dummies" book and get going!

So close virt-viewer and stop the VM:

/var/lib/libvirt/images# virsh
virsh # list
Id Name State
2 openbsd running
virsh #
virsh #
virsh # destroy openbsd
Domain openbsd destroyed
virsh # list
Id Name State
virsh #
virsh # list --all
Id Name State
openbsd shut off
virsh #

Test that we can start it up again:

# virsh
Welcome to virsh, the virtualisation interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # list --all
Id Name State
openbsd shut off
virsh # start openbsd
Domain openbsd started
virsh # list --all
Id Name State
3 openbsd running
virsh # exit
#
# virt-viewer

MS Paint

I think the only thing I miss from Microsoft Windows is "Paint". I can't find anything on Linux that is simple and reliable. I have used Pinta, GIMP and others, but they either break or are too pro.

So, if you have an internet connection, this is your friend.

And if you want something a bit more pro, like Visio, then this one.

I recommend both a lot!

I know there is GIMP, but I just want a simple tool to draw red squares!

Troubleshooting a DHCP Relay connection

Today I had "fun" troubleshooting an issue that looked easy at first sight. A colleague was trying to PXE boot a server from a network that we hadn't used for a while.

When the server boots up, it asks for an IP via DHCP. As we have a centralized DHCP server infrastructure, we have configured DHCP relay on the firewall facing that server to forward the request to the DHCP server.

First, let's take a look at how DHCP relay works. This is a very good link, and this diagram from it is really useful:

One thing I learned is that the reply (DHCP Offer) doesn't have to use as destination IP the same IP it received as source in the DHCP Discover. In the picture, that is packet 2a.

Checking in our environment, we confirm that:

Our server is in the 10.94.240.x network. Our firewall acts as DHCP relay and sends the DHCP Discover (unicast) to our DHCP server VIP.

The DHCP Offer uses as source the physical IP of the DHCP server, and the destination is the DHCP relay IP (10.94.240.1, the firewall IP in the 10.94.240.x network).

OK, so everything looks fine? Not really. The server receives the query and answers... but we don't see a DHCP Request/ACK.

BTW, keep in mind that DHCP is UDP….

So, we need to see where the packets are lost.

This is a high level path flow between the client and server:

So we need to check this connection across three different firewall vendors....

The initial troubleshooting was just using the GUI tools from the Palo/Fortigate. We couldn't see anything... yet the DHCP server was constantly receiving DHCP Discovers and sending DHCP Offers... I didn't get it:

# tcpdump -nn -i X 'udp port 67 or port 68'    # 67 = server/relay side, 68 = client side

14:58:06.969462 IP 10.81.25.1.67 > 10.81.251.47.67: BOOTP/DHCP, Request from 6c:2b:59:c1:32:73, length 300
14:58:06.969564 IP 10.81.251.201.67 > 10.94.240.1.67: BOOTP/DHCP, Reply, length 300

14:58:28.329048 IP 10.81.25.1.67 > 10.81.251.47.67: BOOTP/DHCP, Request from 6c:2b:59:c1:32:73, length 300
14:58:28.329157 IP 10.81.251.201.67 > 10.94.240.1.67: BOOTP/DHCP, Reply, length 300

Initially it took me a while to see the request/reply because I was assuming the DHCP request had source 10.94.240.1, so I was seeing only the Reply but not the Request. That was when I went to clear my head about DHCP relay and found the link above.

So OK, we have the DHCP Request/Reply, but absolutely nothing in the Palo. Is the Palo dropping the packets or forwarding them? No idea. The GUI said nothing, and when I took a packet capture I couldn't see that traffic either...

It doesn't make sense.

Let's get back to basics.

Did I mention DHCP is UDP? So how does a next-generation firewall (like Palo Alto), with all the fancy features enabled (we have nearly all of them enabled...), treat a UDP connection? UDP is stateless... but the firewall is stateful: it creates a flow with the first packet so it can track the connection, and any new packet is considered part of that flow. But why don't we see the flow? Because we actually have only one flow, and the firewall has created that session and offloaded it to hardware, so you don't see anything else in the control plane / GUI. The GUI only shows the end of a connection/flow, and since our DHCP relay flow hasn't terminated (it is UDP) and the firewall keeps receiving packets, it is considered alive (the firewall doesn't really know what is going on). For that reason we don't see the connection in the Palo UI. OK, I got to that point after a while... now I needed to prove that the packet from the server was reaching the firewall and leaving it too.

How can I do that? Well, I need to clear that flow so the firewall treats the next packet as a new connection and the packet capture can see it.

This is a good link from Palo Alto on taking captures. So I found my connection and then cleared it:

palo(active)> show session all filter destination 10.94.240.1

ID Application State Type Flag Src[Sport]/Zone/Proto (translated IP[Port])
Vsys Dst[Dport]/Zone (translated IP[Port])
135493 dhcp ACTIVE FLOW 10.81.251.201[67]/ZONE1/17 (10.81.251.201[67])
vsys1 10.94.240.1[67]/ZONE2 (10.94.240.1[67])
palo(active)>
palo(active)> clear session id 135493

And now my packet capture on the Palo Alto confirms that it is sending the packet to the next firewall (checking the destination MAC)!!!

OK, so we confirmed the first firewall in the return path was fine.... The next one is the Fortigate.

BTW, we had checked and assumed that the routing was fine in all routers, firewalls, etc. Sometimes that is not the case... so when things don't follow your expectations, get back to the very basics....

We had exactly the same issue as in the Palo Alto: I couldn't see anything in the logs about receiving a DHCP Offer from the Palo and forwarding it to the last firewall, the Cisco.

And again, we applied the same reasoning: we have a UDP connection and a next-generation firewall (with fancy ASICs). One more thing: in this Fortigate we allow intra-zone traffic, so it is not going to show up in the GUI monitor anyway...

So we confirmed that we have a flow and cleared it:

forti # diag debug flow filter
vf: any
proto: any
Host addr: any
Host saddr: any
host daddr: 10.94.240.1-10.94.240.1
port: any
sport: any
dport: any
forti #
forti # diag sys session list
session info: proto=17 proto_state=00 duration=2243 expire=170 timeout=0 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=5
origin-shaper=
reply-shaper=
per_ip_shaper=
class_id=0 ha_id=0 policy_dir=0 tunnel=/ vlan_cos=8/8
state=may_dirty npu synced
statistic(bytes/packets/allow_err): org=86840/254/1 reply=0/0/0 tuples=2
tx speed(Bps/kbps): 36/0 rx speed(Bps/kbps): 0/0
orgin->sink: org pre->post, reply pre->post dev=39->35/35->39 gwy=10.81.25.1/0.0.0.0
hook=pre dir=org act=noop 10.81.251.201:67->10.94.240.1:67(0.0.0.0:0)
hook=post dir=reply act=noop 10.94.240.1:67->10.81.251.201:67(0.0.0.0:0)
misc=0 policy_id=4294967295 auth_info=0 chk_client_info=0 vd=0
serial=141b05fb tos=ff/ff app_list=0 app=0 url_cat=0
rpdb_link_id = 00000000
dd_type=0 dd_mode=0
npu_state=0x001000
npu info: flag=0x81/0x00, offload=6/0, ips_offload=0/0, epid=8/0, ipid=8/0, vlan=0x00f5/0x0000
vlifid=0/0, vtag_in=0x0000/0x0000 in_npu=0/0, out_npu=0/0, fwd_en=0/0, qid=0/0
no_ofld_reason:
total session 1
forti #
forti # diag sys session clear

In another session, I had a packet capture running on the expected egress interface:

forti # diagnose sniffer packet Zone3 'host 10.94.240.1'
interfaces=[Zone3]
filters=[host 10.94.240.1]
301.555231 10.81.251.201.67 -> 10.94.240.1.67: udp 300
316.545677 10.81.251.201.67 -> 10.94.240.1.67: udp 300

Fantastic, we have confirmation that the second firewall receives and forwards the DHCP Reply!!!

OK, now the last stop: the Cisco ASA. This is an old firewall; I think it could be my father, or Darth Vader.

I don't have the fancy packet-capture tools of the Palo/Fortigate... so I went to the basic "debug" commands and "packet-tracer".

First, this was the dhcp config in Cisco:

vader/pri/act# show run | i dhcp
dhcprelay server 10.81.251.47 EGRESS
dhcprelay enable SERVERS-ZONE
dhcprelay timeout 60

And the ACL allows all IP traffic on those interfaces... and I couldn't see any denies in the logs.

So I enabled all the debugging I could find for DHCP:

vader/pri/act# show debug
debug dhcpc detail enabled at level 1
debug dhcpc error enabled at level 1
debug dhcpc packet enabled at level 1
debug dhcpd packet enabled at level 1
debug dhcpd event enabled at level 1
debug dhcpd ddns enabled at level 1
debug dhcprelay error enabled at level 1
debug dhcprelay packet enabled at level 1
debug dhcprelay event enabled at level 200
vader/pri/act# DHCPD: Relay msg received, fip=ANY, fport=0 on SERVERS-ZONE interface
DHCPRA: relay binding found for client f48e.38c7.1b6e.
DHCPD: setting giaddr to 10.94.240.1.
dhcpd_forward_request: request from f48e.38c7.1b6e forwarded to 10.81.251.47.
DHCPD: Relay msg received, fip=ANY, fport=0 on SERVERS-ZONE interface
DHCPRA: relay binding found for client 6c2b.59c1.3273.
DHCPD: setting giaddr to 10.94.240.1.
dhcpd_forward_request: request from 6c2b.59c1.3273 forwarded to 10.81.251.47.
vader/pri/act#

So the debugging doesn't say anything about the packet coming back from the Fortigate... not looking good, I am afraid. I was running out of ideas for debug commands, and I couldn't increase any log level either....

Let's give packet-tracer a go... it doesn't look good:

vader/pri/act# packet-tracer input EGRESS udp 10.81.251.201 67 10.94.240.1 67
Phase: 1
Type: ACCESS-LIST
Subtype:
Result: ALLOW
Config:
Implicit Rule
Additional Information:
MAC Access list
Phase: 2
Type: ACCESS-LIST
Subtype:
Result: DROP
Config:
Implicit Rule
Additional Information:
Result:
input-interface: EGRESS
input-status: up
input-line-status: up
Action: drop
Drop-reason: (acl-drop) Flow is denied by configured rule

So we are sure our ACL is totally open, but the firewall is dropping the packet coming from the Fortigate. Why? How do we fix it?

OK, back to basics. Focus on the Cisco config: it uses 10.81.251.47 (the VIP) as the DHCP relay server, but the DHCP reply is coming from the physical IP 10.81.251.201..... Maybe the Cisco doesn't like that.... Let's try adding the physical IPs as additional DHCP relay servers:

vader/pri/act# sri dhcp
dhcprelay server 10.81.251.47 EGRESS
dhcprelay server 10.81.251.201 EGRESS
dhcprelay server 10.81.251.202 EGRESS

Let’s check packet tracer again:

vader/pri/act# packet-tracer input EGRESS udp 10.81.251.201 67 10.94.240.1 67
Phase: 1
Type: ACCESS-LIST
Subtype:
Result: ALLOW
Config:
Implicit Rule
Additional Information:
MAC Access list
Phase: 2
Type: ACCESS-LIST
Subtype:
Result: ALLOW
Config:
Implicit Rule
Additional Information:
Phase: 3
Type: IP-OPTIONS
Subtype:
Result: ALLOW
Config:
Additional Information:
Phase: 4
Type:
Subtype:
Result: ALLOW
Config:
Additional Information:
Phase: 5
Type:
Subtype:
Result: ALLOW
Config:
Additional Information:
Phase: 6
Type: VPN
Subtype: ipsec-tunnel-flow
Result: ALLOW
Config:
Additional Information:
Phase: 7
Type: FLOW-CREATION
Subtype:
Result: ALLOW
Config:
Additional Information:
New flow created with id 340328245, packet dispatched to next module
Result:
input-interface: EGRESS
input-status: up
input-line-status: up
Action: allow
vader/pri/act#

Good, that’s a good sign finally!!!

I think I nearly cried after seeing this in the DHCP logs on our server:

May 12 16:16:27 dhcp1 dhcpd[2561]: DHCPDISCOVER from f4:8e:38:c7:1b:6e via 10.94.240.1
May 12 16:16:28 dhcp1 dhcpd[2561]: DHCPOFFER on 10.94.240.50 to f4:8e:38:c7:1b:6e (cmc-111) via 10.94.240.1
May 12 16:16:28 dhcp1 dhcpd[2561]: Wrote 0 class decls to leases file.
May 12 16:16:28 dhcp1 dhcpd[2561]: Wrote 0 deleted host decls to leases file.
May 12 16:16:28 dhcp1 dhcpd[2561]: Wrote 0 new dynamic host decls to leases file.
May 12 16:16:28 dhcp1 dhcpd[2561]: Wrote 1 leases to leases file.
May 12 16:16:28 dhcp1 dhcpd[2561]: DHCPREQUEST for 10.94.240.50 (10.81.251.202) from f4:8e:38:c7:1b:6e (cmc-111) via 10.94.240.1
May 12 16:16:28 dhcp1 dhcpd[2561]: DHCPACK on 10.94.240.50 to f4:8e:38:c7:1b:6e (cmc-111) via 10.94.240.1

So in the end it was finally fixed.... It took too many hours.

Notes:

  • DHCP Relay: the flow of IPs involved is not that obvious.
  • UDP and stateful firewalls: debugging is a bit more challenging.
  • Cisco ASA dhcprelay server IPs.... VIPs and non-VIPs, please.

All this would be easier/quicker with TCP 😛

Bash: shell quoting

Another issue I had during the weekend that took me hours. Thankfully I have been reading a bit of this book (section 1.6) and had some clues.

I was trying to test a repo that starts an Arista lab using docker, and I assumed that everything would work if I followed the instructions. My problem was with the script that pushes some basic config to the switches.

This was the initial function:

#!/usr/bin/env bash
...
function fast_cli() {
  params="${*:2}"
  commands="${params//;/\\\n}"
  docker exec "${1}" bash -c "echo -e ${commands} | FastCli -p15 -e"
}
...

If you type that command directly in a bash shell, it is something like this:

$ docker exec DOCKER_ID bash -c 'echo -e "configure\n hostname sp01\n end\n write\n" | FastCli -p15 -e'

As you can see, that differs from what we have inside the bash script. From the script we need to put the parameter for -c between ' but, inside that parameter, we need to use ". So I had to make the change below:

-  docker exec "${1}" bash -c "echo -e ${commands} | FastCli -p15 -e"
+  # need to update this command as the quoting doesnt work in my bash
+  docker exec "${1}" bash -c 'echo -e '"'${commands}'"' | FastCli -p15 -e'

The book says: enclose a string in single quotes ' unless it contains elements that you want the shell to interpolate. So let's divide the solution into parts so it is easier to digest (and to remember for my future self, because this will bite me again for sure):

  • 1st part: 'echo -e '
  • 2nd part: "
  • 3rd part: '${commands}'
  • 4th part: "
  • 5th part: ' | FastCli -p15 -e'

The double quotes need to sit outside the single-quoted regions because they delimit the part the shell interpolates: the 3rd part expands the commands variable, and the single quotes inside the double-quoted part end up as literal quotes around the commands in the command that docker runs.
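
To convince my future self, here is a minimal standalone sketch of the same concatenation (no docker or FastCli involved; cat -n just stands in for FastCli):

#!/usr/bin/env bash
commands='configure\n hostname sp01\n end\n write\n'
# single-quoted literal + double-quoted piece (expands ${commands} and adds
# literal single quotes around it) + single-quoted literal
inner='echo -e '"'${commands}'"' | cat -n'
echo "$inner"    # -> echo -e 'configure\n hostname sp01\n end\n write\n' | cat -n
bash -c "$inner"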

I guess the author is using a different version of bash? This is mine:

$ bash --version
GNU bash, version 5.0.16(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2019 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 

This is free software; you are free to change and redistribute it.

LVM 102: pvresize

Something very basic, but it took me several hours to work out. I had a VM where I wanted to grow a VG so I could create a new LV. I increased the underlying disk on the host server so the PV of the VG would have the extra space, but then I couldn't see the increase inside the VM:

[root@HOST]# lvs
  LV      VG  Attr  LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm_data vg_os -wi-ao---- 300.00g

[root@VM]# pvs
  PV         VG      Fmt  Attr PSize    PFree
  /dev/vdb   vg_data lvm2 a--  <200.00g 1020.00m

"fdisk" was telling me the disk was already 300G...

[root@VM ~]# fdisk /dev/vdb

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

The old LVM2_member signature will be removed by a write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xd46fa2fc.

Command (m for help): p
Disk /dev/vdb: 300 GiB, 322122547200 bytes, 629145600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd46fa2fc

I did a pvscan... and nothing. What was I missing? Just "pvresize"... and then I could see my extra 100G in the PV and in the VG, so I could create the new LV I wanted:

[root@VM ~]# pvresize /dev/vdb
  Physical volume "/dev/vdb" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
[root@VM ~]# pvs
  PV         VG      Fmt  Attr PSize    PFree   
  /dev/vdb   vg_data lvm2 a--  <300.00g <101.00g
[root@VM ~]# 
[root@VM ~]# vgs
  VG      #PV #LV #SN Attr   VSize    VFree   
  vg_data   1   1   0 wz--n- <300.00g <101.00g
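
And for completeness, creating the new LV out of that freed space is then just one more command (the LV name and size here are made up for the example):

[root@VM ~]# lvcreate -n lv_new -L 100G vg_data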

TCP Congestion Control and Recovery

I have been reading this new post from Cloudflare about their congestion control implementations for QUIC.

Reading the article, I wanted to check the TCP CCAs (Congestion Control Algorithms) available on my laptop (Debian 10 Testing).

So I searched a bit and found a couple of useful links like this:

For checking your current TCP CCA:

# sysctl net.ipv4.tcp_congestion_control
net.ipv4.tcp_congestion_control = cubic

$ cat /proc/sys/net/ipv4/tcp_congestion_control
cubic

For checking the available TCP CCAs:

# sysctl net.ipv4.tcp_available_congestion_control
net.ipv4.tcp_available_congestion_control = reno cubic

You can also see the CCA per connection via "ss":

$ ss -ti
...
tcp   ESTAB      0       0                                     192.168.1.158:60238                     169.54.204.232:https       
	 cubic wscale:7,7 rto:320 rtt:116.813/2.428 ato:40 mss:1448 pmtu:1500 rcvmss:1448 advmss:1448 cwnd:10 bytes_sent:4366 bytes_acked:4367 bytes_received:7038 segs_out:98 segs_in:183 data_segs_out:91 data_segs_in:93 send 991.7Kbps lastsnd:1260 lastrcv:1260 lastack:1140 pacing_rate 2.0Mbps delivery_rate 102.2Kbps delivered:92 app_limited busy:10632ms rcv_space:14480 rcv_ssthresh:64088 minrtt:113.391
...

If you want to change your TCP CCA, this is a good link:

Check the modules installed:

$ ls -la /lib/modules/$(uname -r)/kernel/net/ipv4

Check the kernel config:

$ grep TCP_CONG /boot/config-$(uname -r)
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=m
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
CONFIG_TCP_CONG_NV=m
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_CONG_DCTCP=m
CONFIG_TCP_CONG_CDG=m
CONFIG_TCP_CONG_BBR=m
CONFIG_DEFAULT_TCP_CONG="cubic"

We can see that "cubic" is the default TCP CCA and that, for example, BBR is available as a module.
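
Since BBR is built as a module, it only appears in the available list once the module is loaded (setting the sysctl below should pull it in automatically, but this is the explicit way to check):

# modprobe tcp_bbr
# sysctl net.ipv4.tcp_available_congestion_control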

So let’s change to BBR (rfc, github, blog) based on this link:

Check the kernel supports BBR:

$ cat /boot/config-$(uname -r) | grep 'CONFIG_TCP_CONG_BBR'
CONFIG_TCP_CONG_BBR=m
$ cat /boot/config-$(uname -r) | grep 'CONFIG_NET_SCH_FQ'
CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_FQ=m

Enable TCP BBR:

# vi /etc/sysctl.conf
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr

Apply the changes:

# sysctl --system

And check:

$ cat /proc/sys/net/ipv4/tcp_congestion_control
bbr

So we have moved from CUBIC to BBR. Let's see how the experience is over the next few days.

Vim + Python Linters

I was reading an article about tools for writing Python in Vim with good formatting. I am neither a pro Vim user nor a pro Python programmer, but I would like to be more efficient and write better-formatted Python code.

So this is the link I was reading, and I ended up here for the specific details.

In the end, my goal is to use more often inside Vim: splits (:sp), NERDTree (file browsing), autocompletion and git, and to be sure my code gets formatted automatically if I make a mistake.

So I enabled most of the plugins from the article, although I made some tweaks for my normal usage (I already had some plugins enabled for Jinja2). This is my .vimrc:

set nocompatible              " be iMproved, required
filetype off                  " required

" set the runtime path to include Vundle and initialize
set rtp+=~/.vim/bundle/Vundle.vim

" https://realpython.com/vim-and-python-a-match-made-in-heaven/#syntax-checkinghighlighting
set splitbelow
set splitright
"split navigations
nnoremap <C-J> <C-W><C-J>
nnoremap <C-K> <C-W><C-K>
nnoremap <C-L> <C-W><C-L>
nnoremap <C-H> <C-W><C-H>

" Enable folding
set foldmethod=indent
set foldlevel=99
" Enable folding with the spacebar
nnoremap <space> za

" Automatic formating for tab, whitespace and max 80 chars per line, etc
au BufNewFile,BufRead *.py
    \ set tabstop=4 |
    \ set softtabstop=4 |
    \ set shiftwidth=4 |
    \ set textwidth=79 |
    \ set expandtab |
    \ set autoindent |
    \ set fileformat=unix

highlight BadWhitespace ctermbg=red guibg=darkred
au BufRead,BufNewFile *.py,*.pyw,*.c,*.h match BadWhitespace /\s\+$/

set encoding=utf-8

let python_highlight_all=1

" VUNDLE PLUGINGS

call vundle#begin()
" alternatively, pass a path where Vundle should install plugins
" call vundle#begin('~/some/path/here')

" let Vundle manage Vundle, required
Plugin 'VundleVim/Vundle.vim'
" added nerdtree
Plugin 'scrooloose/nerdtree'

Plugin 'ctrlp.vim'

Plugin 'Jinja'

Plugin 'tmhedberg/SimpylFold'

Plugin 'vim-scripts/indentpython.vim'

Plugin 'vim-syntastic/syntastic'

Plugin 'nvie/vim-flake8'

Plugin 'tpope/vim-fugitive'

Plugin 'Lokaltog/powerline', {'rtp': 'powerline/bindings/vim/'}

" Keep Plugin commands between vundle#begin/end.
" All of your Plugins must be added before the following line
call vundle#end()            " required



filetype plugin indent on    " required
syntax on

nmap  :NERDTreeToggle

" To ignore plugin indent changes, instead use:
"filetype plugin on
"
" Brief help
" :PluginList       - lists configured plugins
" :PluginInstall    - installs plugins; append `!` to update or just :PluginUpdate
" :PluginSearch foo - searches for foo; append `!` to refresh local cache
" :PluginClean      - confirms removal of unused plugins; append `!` to auto-approve removal
"
" see :h vundle for more details or wiki for FAQ
" Put your non-Plugin stuff after this line


au BufNewFile,BufRead *.lmx set filetype=xml
au BufNewFile,BufRead *.dump set filetype=sql
au BufNewFile,BufRead *.j2 set filetype=jinja

"for case-insensitve searches"
set ignorecase

"Override the 'ignorecase' option if the search pattern contains upper"
"case characters.  Only used when the search pattern is typed and"
"'ignorecase' option is on."
set smartcase

" I want to be able to resize the splits quickly so I want the mouse on
set mouse=a

" Always show statusline - This makes powerline always on
set laststatus=2

" autocmd vimenter * NERDTree

BTW, a quick reference for NerdTree (file browser) here.

Let’s see how I get on with these changes.

Git Basics

I like git and I use it, but of course I am not an expert. Every time I want to do something outside my comfort zone, I have to search for help. I will try to add examples; most of them will be obvious to most people.

  • I want to see the differences between the files I have changed (before committing) and the last commit. Thanks to stackoverflow (see the variants below):
~ git diff
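
For my future self, the variants I always mix up (plain git diff compares the working tree against the index, so anything already staged will not show up):

~ git diff            # unstaged changes (working tree vs index)
~ git diff --staged   # staged changes vs the last commit
~ git diff HEAD       # staged + unstaged changes vs the last commit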

Presigned URLs in S3


S3 is the Amazon service for storing files in the cloud. It is reliable, very reliable: the expected time to lose a single file out of a group of 10 million of them is 10,000 years. Even other Amazon services use S3 internally to store their files. On the bad side, as it is one of the first services Amazon created, it can be a headache to fine-grain permissions across all its capabilities and evolutions, making it difficult to be sure that a file is not accessible to those who should not be allowed.

In S3 you can define what they call a bucket, which is like a directory in a filesystem. The name of the bucket must be unique, not only in your account but in the global namespace across all AWS accounts in the world. That means you have to be creative when picking a bucket name.

A bucket can be private or publicly accessible. On the public side, one of the special uses is to serve static content as a web server, even HTML pages on your custom domain. But what if you want to allow users to download files, for example an image, and you don't want a user to be able to make it effectively public by sharing the link to the image?

Today I've played with a very useful feature for that case. It lets you keep a bucket private and temporarily allow access to a single file, for GET or even PUT/POST, for a limited amount of time. You'll need to use the AWS SDK for your favourite supported programming language, or the AWS CLI from the command line, to ask the AWS API for a temporary authorized URL. Let's see how with an example from scratch, installing and using the AWS CLI on a Debian-based environment.

Make sure you have access to an AWS account (you already have one if you have an amazon.com account) and generate an AWS Access Key / AWS Secret Access Key pair from the web console.

$> sudo apt install awscli
$> aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: eu-west-1
Default output format [None]:

Create a local file called piticli with whatever content you prefer (an example below), and let's also create a new S3 bucket using the AWS CLI.
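
For example (this content is what we will see later via curl):

$> echo 'piticli is now… sleeping' > piticli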

# Create a convenience environment variable with a kind of random bucket name
$> BN="s3://thomarite-blog-test-$RANDOM"
# Let's actually create the bucket
$> aws s3 mb $BN 
make_bucket: thomarite-blog-test-1337
# Let's see it exists
$> aws s3 ls
2020-04-16 23:01:27 thomarite-blog-test-1337
# Now let's upload piticli into the new bucket
$> aws s3 cp piticli $BN
2020-04-17 23:01:45          26 piticli

Now let's create a presigned URL for piticli and store it in the PRESIGNED_URL env var. As you can see, the temporary URL includes the bucket name, the file name, an AWS Access Key and signature, and a hint about the expiration date.

# Store the URL into a env var for future use
$> PRESIGNED_URL=$(aws s3 presign $BN/piticli)
$> echo $PRESIGNED_URL
https://s3.eu-west-1.amazonaws.com/thomarite-blog-test-1337/piticli?AWSAccessKeyId=AKIAYSFFLHZCQSEPMZEF&Signature=x%2BWzELvYpzdVipOd67ez0z3Esws%3D&Expires=1587077637

That's the public URL, and it will be valid for 1h by default. You can set the expiration time in the aws s3 presign command using the --expires-in parameter, giving the number of seconds until it expires.
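
For example, for a link that only lives 5 minutes:

$> aws s3 presign $BN/piticli --expires-in 300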

Now you have a public URL accessible from any browser. Let's open it via curl:

$> curl -Ls $PRESIGNED_URL
piticli is now… sleeping

And finally, to clean things up, let's remove all the files and the bucket in AWS:

$> aws s3 rb --force $BN
delete: s3://thomarite-blog-test-1337/piticli
remove_bucket: thomarite-blog-test-1337