Mobile Phone + SSH server

I have tried many times to connect my mobile phones to my laptop. It always looks easy if you use M$, but with Linux I always fail: I can never get MTP to work. What I really want is to take all my pictures from a phone, back them up and transfer them to a new one. I don't want to use cloud services or tools from the manufacturers; I want to use old-school methods. So after struggling for some time, I decided to use something as old school as SSH/SCP. Android is based on Linux, isn't it? So I searched for a free SSH server app and found this one. And it worked! I managed to understand it, created my user and my mount points, enabled it… and was able to SCP all my photos from my mobile to my laptop. It worked with both Samsung and Huawei phones.
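For reference, the copy itself was just plain scp. The user, IP, port and path below are placeholders (they depend on the SSH server app and the phone), so adjust accordingly:

$ scp -P 2222 -r myuser@192.168.1.50:/storage/emulated/0/DCIM/Camera ~/phone-backup/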

I am pretty sure that people now have better ways to do this… but that's me.

ARP Storms – EVPN

We had an issue with broadcast storms in our network. Checking the CoPP setup in the switches, we could see massive ARP drops. This is a good link for learning how to check CoPP drops in NX-OS.

N9K# show copp status
N9K# show policy-map interface control-plane | grep 'dropped [1-9]' | diff

Having so many ARP drops in CoPP is bad because, very likely, legitimate ARP requests are being dropped too.

Initially I thought it was related to ARP problems in EVPN, like this link. But after taking a packet capture on a switch interface connected to a server, I could see that over 90% of the ARP traffic coming from the server was not getting a reply… Checking different switches, I could see the same pattern all over the place.

So why was the server making so many ARP requests?

After some time, I managed to get help from a sysadmin with access to the servers, so we could troubleshoot the problem.

But how do you find the process that is triggering the ARP requests? I didn't make the effort to think about it and started searching for an easy answer. This post gave me a clue:

ss does show you connections that have not yet been resolved by ARP. They are in state SYN-SENT. The problem is that such a state is only held for a few seconds, then the connection fails, so you may not see it. You could try rapid polling for it with:

while ! ss -p state syn-sent | grep 1.1.1.100; do sleep .1; done

Somehow I couldn't see anything with "ss", so I tried netstat, as it also shows you the status of the TCP connection (I wondered what would happen if the connection was UDP instead — see the UDP notes below).

Initially I tried "netstat -a", but it was too slow to show me the "SYN_SENT" status.

Shame on me, I had to search for how to show the ports quickly, here:

watch "netstat -ntup | grep -i syn_sent | awk '{print \$4,\$5,\$6,\$7}'"

It was slow because it was trying to resolve all IPs to hostnames… facepalm. That is fixed with "-n" (no resolving).

Anyway, with the command above, I finally managed to see the processes that were in "SYN_SENT" state.

This is not the real thing, just an example:

#  netstat -ntup | grep -i syn_sent 
tcp        0      1 192.168.1.203:35460     4.4.4.4:23              SYN_SENT    98690/telnet        
# 

We could see that the destination port was TCP 179, so something in the node was trying to talk BGP! They were "bird" processes. As the node belonged to a Kubernetes cluster, we could see a Calico container as CNI. Then we connected to the container and checked the bird config. We could clearly see that the IPs that didn't get an ARP reply were configured there.
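For reference, checking the bird config from inside the Calico container was something along these lines. The pod name is a placeholder and the exact paths/commands are from memory, so they may differ depending on the Calico version — take it as a sketch:

$ kubectl -n kube-system get pods -o wide | grep calico-node
$ kubectl -n kube-system exec calico-node-xxxxx -- birdcl show protocols
$ kubectl -n kube-system exec calico-node-xxxxx -- cat /etc/calico/confd/config/bird.cfg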

So in summary, basic TCP:

Very summarized: TCP is L4, and then it goes down to L3 (IP). To get to L2, you need to know the MAC address of the next-hop IP, and that is what triggers the ARP request. Once the MAC is learned, it is cached for the next request. For that reason the first time you make a connection it is slower (ping, traceroute, etc.). You can see it with the commands below.
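A quick way to see it on my laptop (using the interface and gateway from the examples above; the output is just illustrative):

Flush the neighbour cache, make a connection and watch the ARP entry appear
# ip neigh flush dev wlp2s0
# ping -c1 192.168.1.1 >/dev/null
# ip neigh show dev wlp2s0
192.168.1.1 lladdr xx:xx:xx:yy:yy:zz REACHABLE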

Now we need to work out why the calico/bird config is that way, fix it to only use the IPs of real BGP speakers, and then verify that the ARP storms stop.

Hopefully, I will learn a bit about calico.

Notes for UDP:

If I generate a UDP connection to a non-existent IP:

$ nc -u 4.4.4.4 4000

netstat tells me the UDP connection is established. Also, I can't see anything in the ARP table for an external IP, but for an internal IP (in my own network) I can see an incomplete entry. Why?

#  netstat -ntup | grep -i 4.4.4.4
udp        0      0 192.168.1.203:42653     4.4.4.4:4000            ESTABLISHED 102014/nc           
# 
#  netstat -ntup | grep -i '192.168.1.2:'
udp        0      0 192.168.1.203:44576     192.168.1.2:4000        ESTABLISHED 102369/nc           
# 
#
# arp -a
? (192.168.1.2) at <incomplete> on wlp2s0
something.mynet (192.168.1.1) at xx:xx:xx:yy:yy:zz [ether] on wlp2s0
# 

# tcpdump -i wlp2s0 host 4.4.4.4
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on wlp2s0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
23:35:45.081819 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:45.081850 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:46.082075 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:47.082294 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
23:35:48.082504 IP 192.168.1.203.50186 > 4.4.4.4.4000: UDP, length 1
^C
5 packets captured
5 packets received by filter
0 packets dropped by kernel
# 
  • UDP is stateless, so there are no real connection states… netstat simply shows the connected socket as "ESTABLISHED". Basic TCP/UDP.
  • When trying to open a UDP connection to an external IP, you need to "route": my laptop knows it has to send the traffic to the default gateway, so when getting to L2 the destination MAC address is not the one of 4.4.4.4 but the default gateway's MAC. BASIC ROUTING!!! For that reason you don't see 4.4.4.4 in the ARP table (see the "ip route get" sketch below).
  • When trying to open a UDP connection to a local IP, my laptop knows it is in the same network, so it tries to find the destination MAC address using ARP — hence the "incomplete" entry while it waits for a reply that never comes.
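"ip route get" shows which next hop (and therefore whose MAC) the kernel will use for each destination. These outputs are illustrative, using the same IPs as above:

$ ip route get 4.4.4.4
4.4.4.4 via 192.168.1.1 dev wlp2s0 src 192.168.1.203
$ ip route get 192.168.1.2
192.168.1.2 dev wlp2s0 src 192.168.1.203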

Convert Images

I thought it would be easy to save a PNG file as JPG, but I failed. I was pretty sure there should be a standard Linux command for that. Naive.

Ok, so I found something that does the job:

$ sudo aptitude install imagemagick
$ convert pic.png pic.jpg
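And if one day I need to convert a whole directory, imagemagick's mogrify should do the job (I haven't really used it, so double-check before running it on anything important):

$ mogrify -format jpg *.png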

apt-key deprecation

While updating Debian, I have seen this warning over the last few days:

Fetched 11.4 kB in 3s (3,605 B/s)
W: http://www.deb-multimedia.org/dists/testing/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
W: http://deb.torproject.org/torproject.org/dists/testing/InRelease: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details.
                          

I did read the apt-key manual, but it wasn't very clear to me how to proceed. So I searched for a bit and found this article, and it was exactly what I needed.

$ sudo apt-key list
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
/etc/apt/trusted.gpg
--------------------
pub   rsa4096 2014-03-05 [SC]
      A401 FF99 368F A1F9 8152  DE75 5C80 8C2B 6555 8117
uid           [ unknown] Christian Marillat <marillat@debian.org>
uid           [ unknown] Christian Marillat <marillat@free.fr>
uid           [ unknown] Christian Marillat <marillat@deb-multimedia>
uid           [ unknown] Christian Marillat <marillat@deb-multimedia.org>
sub   rsa4096 2014-03-05 [E]

pub   rsa2048 2009-09-04 [SC] [expires: 2024-11-17]
      A3C4 F0F9 79CA A22C DBA8  F512 EE8C BC9E 886D DD89
uid           [ unknown] deb.torproject.org archive signing key
sub   rsa2048 2009-09-04 [S] [expires: 2022-06-11]
...
...

Export the keys:

$ sudo apt-key export 65558117 | sudo gpg --dearmour -o /usr/share/keyrings/repo-debian-multimedia-testing.gpg 
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
$
 
 
$ sudo apt-key export 886DDD89 | sudo gpg --dearmour -o /usr/share/keyrings/repo-torproject-testing.gpg 
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
$ 

BTW, something I keep forgetting is which part of the pub key I need: it is the last 8 hex digits of the fingerprint (which you can see in the output of "apt-key list"). That was mentioned in the article, but I didn't pay attention…

Now update “/etc/apt/sources.list” adding “signed-by=/path to file created above” for each repo:

###Debian Multimedia
deb [arch=amd64 signed-by=/usr/share/keyrings/repo-debian-multimedia-testing.gpg] http://www.deb-multimedia.org testing main non-free

###TOR
deb [arch=amd64 signed-by=/usr/share/keyrings/repo-torproject-testing.gpg] http://deb.torproject.org/torproject.org testing main

Update and see if warning is gone:

# aptitude update 
Hit http://security.debian.org/debian-security testing-security InRelease
Hit http://deb.debian.org/debian testing InRelease                                                         
Ign https://apt.fury.io/netdevops  InRelease
Ign https://apt.fury.io/netdevops  Release
Hit http://www.deb-multimedia.org testing InRelease
Hit https://dl.google.com/linux/chrome/deb stable InRelease                                                                                       
Hit https://packages.cloud.google.com/apt cloud-sdk InRelease        
Hit http://deb.torproject.org/torproject.org testing InRelease
Get: 1 https://apt.fury.io/netdevops  Packages
Ign https://apt.fury.io/netdevops  Translation-en_GB
Ign https://apt.fury.io/netdevops  Translation-en
Ign https://apt.fury.io/netdevops  Contents (deb)
Ign https://apt.fury.io/netdevops  Contents (deb)
Fetched 11.4 kB in 3s (3,650 B/s)
                                         
# 

All good

And clean-up before finishing:

$ sudo apt-key del 65558117
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
$ sudo apt-key del 886DDD89
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
$ 

youtube-dl extract specific audio portion

I was watching a concert and I wanted to take just the audio of one song, no video. I knew you could download the full audio from a video pretty easily with youtube-dl, but now I just wanted a specific portion. Thanks to these links (link1 and link2) I managed to get what I wanted:

$ youtube-dl --youtube-skip-dash-manifest -g "VIDEO_URL"

# copy the second url (audio) from the above command output

$ audio_url="AUDIO_URL_FROM_ABOVE"

$ ffmpeg -i "$audio_url" -ss 00:00:30 -t 00:05:20.0 -q:a 0 -map a sample.mp3

# PLAY IT!

$ vlc sample.mp3

Debian Repository Keys + bits

Since I had to reinstall my laptop, I have had to tune missing bits. One of them: when updating Debian I was constantly getting errors with two repositories, so I couldn't get packages from them. I had been lazy because it wasn't stopping me from doing anything, but I decided to fix it. I have seen this before, so it is not totally new, but I was surprised that I couldn't "fix" the key for the Debian Tor repository.

The error for getting the key for “www.deb-multimedia.org” was fixed following this post:

# apt-key adv --keyserver keyring.debian.org --recv-keys 5C808C2B65558117

I tried a similar approach for "deb.torproject.org", but it failed. I checked the official way to use that repo here. It is a bit different from what I currently do, as I use "sources.list" and the post recommends creating a dedicated file. I didn't pay much attention to that and tried to follow those instructions using my current config setup. It was still failing. I checked that the repo was real. I tried to use a public keyring (based on this), but same result. In the end I found the solution here:

# wget -q https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc -O- | sudo apt-key add -

After that, my "apt update" didn't show any more errors.

And then I noticed why my setup didn't work with the official instructions from the Tor Project.

The documentation says to create a new file with this line:

deb     [signed-by=/usr/share/keyrings/tor-archive-keyring.gpg] https://deb.torproject.org/torproject.org testing main

And then add the key:

# wget -qO- https://deb.torproject.org/torproject.org/A3C4F0F979CAA22CDBA8F512EE8CBC9E886DDD89.asc | gpg --dearmor | tee /usr/share/keyrings/tor-archive-keyring.gpg >/dev/null

But I have only this in my sources.list:

##### 3rd Party Binary Repos
###Debian Multimedia
deb [arch=amd64] http://www.deb-multimedia.org testing main non-free
#deb [arch=amd64,i386] http://www.deb-multimedia.org buster main non-free

###TOR
deb [arch=amd64] http://deb.torproject.org/torproject.org testing main
#deb-src [arch=amd64] http://deb.torproject.org/torproject.org testing main

So I wasn't doing the same thing as I thought.

And somehow I forgot how to scroll using the keyboard with Terminator… and I was sure it worked before. I checked the key settings and couldn't find anything; I thought something was misconfigured. Then I searched and found this. As each laptop has a different keyboard layout, I noticed that "shift + PageUp" was actually "shift + Fn + PageUp" on my keyboard.

And after sooooo many years, I decided to add spell check for Spanish in GC.

tty scrollback – tmux

One of the things I had on my to-learn list after rebuilding my laptop was how to scroll back in the tty console (Ctrl+F1, etc). I searched and this gave me some hope. I tried to find out how to do it in Debian, as the steps mentioned looked Fedora-only. This new link looked promising, but no joy.

It seems the scrollback support was dropped from kernel 5.9 onwards, based on this link. The lack of a maintainer was the main reason (there were security issues that needed attention). I run 5.15.

But as a workaround, you can use "tmux" in the tty and use its scrollback feature. tmux is a tool that I would like to learn 🙁 I normally use "terminator", although I can use both…

How to scroll back in tmux? Here. So "ctrl+b" then "[". Then you can use Fn+PgUp (in my case) to go up one page.
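As a side note (not from the links above, just something I want to remember), tmux's scrollback buffer size can be increased by adding this to ~/.tmux.conf if the default feels too small:

set -g history-limit 50000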

A bit of history about Linux console scrollback.

Use ZFS

As part of my reinstallation, I had to recreate the ZFS partition that I used for personal storage. The Debian installation process doesn't provide this option, so I had to do it manually. To be honest, it is good to remember/refresh these "basic" things; you never know when you are going to need them (urgently, very likely).

As the installation process gave most of the space to the "home" partition, that's the one I need to take space from to create my ZFS partition. I chose LVM during installation, so I don't really have to deal with physical partitions; it is mainly logical volumes, aka "lv".

So I rebooted into single-user mode, as I wanted to be sure I didn't damage anything and I had to unmount the "home" lv. Then, as root:

Check mounted partitions
# df -hT

Check LV summary
# lvs

Unmount /home
# umount /home/

Check "home" is not mounted
# df -hT

Check VolumeGroup summary
# vgs

Perform filesystem check before making any change
# e2fsck -fy /dev/mapper/athens--vg-home

Resize filesystem to 22G
# resize2fs /dev/mapper/athens--vg-home 22G

Check the LV hasn't changed yet
# lvs

Reduce LV for home to 22G
# lvreduce -L 22G /dev/mapper/athens--vg-home

Check LV home is reduced
# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home    athens-vg -wi-ao----  22.00g                                                    
  root    athens-vg -wi-ao---- <27.94g                                                    
  swap_1  athens-vg -wi-ao---- 976.00m                                                    
# 

Check you have free space in the VG
# vgs
  VG        #PV #LV #SN Attr   VSize   VFree   
  athens-vg   1   3   0 wz--n- 237.48g <186.59g
# 

Reboot to be sure everything is fine
# reboot

Check all partitions are mounted and "home" is just 22G
$ df -hT
Filesystem                  Type      Size  Used Avail Use% Mounted on
udev                        devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs                       tmpfs     786M  1.6M  785M   1% /run
/dev/mapper/athens--vg-root ext4       28G  6.7G   20G  26% /
tmpfs                       tmpfs     3.9G   87M  3.8G   3% /dev/shm
tmpfs                       tmpfs     5.0M  8.0K  5.0M   1% /run/lock
/dev/sda2                   ext2      456M   72M  360M  17% /boot
/dev/mapper/athens--vg-home ext4       21G  3.0G   17G  16% /home
/dev/sda1                   vfat      496M   64M  433M  13% /boot/efi
tmpfs                       tmpfs     786M   40K  786M   1% /run/user/1000
$

Create new LV "storage" using the spare space in the VG
# lvcreate -L 186G -n storage athens-vg
  Logical volume "storage" created.
# 

Check VG space has reduced
# vgs
  VG        #PV #LV #SN Attr   VSize   VFree  
  athens-vg   1   4   0 wz--n- 237.48g 604.00m
#

Check we have a new LV storage of 186G
# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home    athens-vg -wi-ao----  22.00g                                                    
  root    athens-vg -wi-ao---- <27.94g                                                    
  storage athens-vg -wi-a----- 186.00g                                                    
  swap_1  athens-vg -wi-ao---- 976.00m                                                    
#

Create our Zpool storage using the LV storage.
# zpool create storage /dev/mapper/athens--vg-storage 

Check Zpool status
# zpool status
  pool: storage
 state: ONLINE
config:

	NAME                  STATE     READ WRITE CKSUM
	storage               ONLINE       0     0     0
	  athens--vg-storage  ONLINE       0     0     0

errors: No known data errors
#

Check mount point for ZFS pool
# zfs get mountpoint storage
NAME     PROPERTY    VALUE       SOURCE
storage  mountpoint  /storage    default
# 

Change Zpool storage mount point to a point in my home dir
# zfs set mountpoint=/home/tomas/storage storage

Check ZFS list
# zfs list
NAME      USED  AVAIL     REFER  MOUNTPOINT
storage   165K   179G       24K  /home/yo/storage
# 

Check all partitions
$ df -hT
Filesystem                  Type      Size  Used Avail Use% Mounted on
udev                        devtmpfs  3.9G     0  3.9G   0% /dev
tmpfs                       tmpfs     786M  1.6M  785M   1% /run
/dev/mapper/athens--vg-root ext4       28G  6.7G   20G  26% /
tmpfs                       tmpfs     3.9G   87M  3.8G   3% /dev/shm
tmpfs                       tmpfs     5.0M  8.0K  5.0M   1% /run/lock
/dev/sda2                   ext2      456M   72M  360M  17% /boot
/dev/mapper/athens--vg-home ext4       21G  3.0G   17G  16% /home
/dev/sda1                   vfat      496M   64M  433M  13% /boot/efi
tmpfs                       tmpfs     786M   40K  786M   1% /run/user/1000
storage                     zfs       180G  128K  180G   1% /home/y/storage
$ 

I have used these links to refresh myself:

  • lvs resize: https://www.rootusers.com/lvm-resize-how-to-decrease-an-lvm-partition/
  • create lv: https://www.thegeekstuff.com/2010/08/how-to-create-lvm/
  • create zfs pool: https://ubuntu.com/tutorials/setup-zfs-storage-pool#3-creating-a-zfs-pool
  • change zfs mount point: https://docs.oracle.com/cd/E19253-01/819-5461/gaztn/index.html

To be honest, I thought I was going to struggle much more but it has been quick.

Step by step getting back to my normal environment (and trying to improve it). I said it before, I should be able to reinstall my laptop easily, like a production server….

mutt+gmail

Using mutt to send emails via my Gmail account is something I had wanted to do for a long time. After my last issue with my laptop, I finally decided to learn how to do it.

Thanks to these blogs I managed to get it working!!!

For the main setup, this link and this one. For overcoming the authentication issue, this link: since I use 2FA, you have to define a new app password in your Google account.

sudo aptitude install mutt

mkdir ~/.mutt

vim ~/.mutt/muttrc

This is the content of my file:

set from = "youremail@gmail.com"
set realname = "Name Surname"

# IMAP settings
set imap_user = "youremail@gmail.com"
set imap_pass = "your_new_app_password"

# SMTP settings
set smtp_url = "smtps://youremail@smtp.gmail.com"
set smtp_pass = "your_new_app_password"

# Remote Gmail folders
set folder = "imaps://imap.gmail.com/"
set spoolfile = "+INBOX"
set postponed = "+[Gmail]/Drafts"
set trash = "+[Gmail]/Trash"

# Composition
set editor = "vim"
set edit_headers = yes
set charset = UTF-8

This is the error I had before getting the app password:

$ echo "Example mutt+gmail" | mutt -s "Testing mutt+gmail" youremail@gmail.com -a test.txt
SASL authentication failed
Could not send the message.
$ 

After that, the email was sent fine, without errors, and I can see it in my inbox!

$ echo "Example mutt+gmail v2" | mutt -s "Testing mutt+gmail v2" youremail@gmail.com -a books.ods 
$ 

The only thing I don't like is that I need to have a password in a plain-text file…

So let's use chmod so that, at least, only I can read the file.

~/.mutt$ chmod og-r muttrc
~/.mutt$ ls -ltr
total 4
-rw------- 1 yy yy 687 Oct 26 23:22 muttrc
~/.mutt$ 

Ideally, though, I would prefer to use a certificate that is only valid for Gmail, but I haven't been able to find anything related to this.
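Another option I have seen mentioned (just a sketch, I haven't set it up yet, and the file names are made up) is to keep the passwords in a gpg-encrypted file and let mutt source it, so nothing sits in clear text:

Put only the secrets in a small file and encrypt it to yourself
$ cat ~/.mutt/passwords
set imap_pass = "your_new_app_password"
set smtp_pass = "your_new_app_password"
$ gpg --encrypt --recipient youremail@gmail.com ~/.mutt/passwords
$ shred -u ~/.mutt/passwords

Then, in ~/.mutt/muttrc, replace the two "pass" lines with:
source "gpg --decrypt --quiet ~/.mutt/passwords.gpg |"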