$ sudo modprobe bonding
$ ip link help bond
$ sudo ip link add bond0 type bond mode 802.3ad
$ sudo ip link set eth0 master bond0
$ sudo ip link set eth1 master bond0
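To verify the bond is up and LACP negotiated, something like this should work (the bonding driver exposes its state under /proc; the slave links may need to be set down before enslaving them):
sudo ip link set up dev bond0
cat /proc/net/bonding/bond0 // mode, LACP partner info and per-slave status
ip -d link show bond0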
Bridging: vlans + trunks
ip neigh show // L2 table (neighbour/ARP cache)
ip route show // L3 table (routing)
ip route add default via 192.168.1.1 dev eth1
sudo modprobe 8021q
// create bridge and add links to bridge (switch)
sudo ip link add br0 type bridge vlan_filtering 1 // native vlan = 1
sudo ip link set eth1 master br0
sudo ip link set eth2 master br0
sudo ip link set eth3 master br0
// make eth1 access port for v11
sudo bridge vlan add dev eth1 vid 11 pvid untagged
// make eth3 access port for v12
sudo bridge vlan add dev eth3 vid 12 pvid untagged
// make eth2 trunk port for v11 and v12
sudo bridge vlan add dev eth2 vid 11
sudo bridge vlan add dev eth2 vid 12
// enable bridge and links
sudo ip link set up dev br0
sudo ip link set up dev eth1
sudo ip link set up dev eth2
sudo ip link set up dev eth3
bridge link show
bridge vlan show
bridge fdb show
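For example, to check what was learned where (a sketch; iproute2 supports these filters):
bridge fdb show br br0 vlan 11 // MACs learned on vlan 11
bridge vlan show dev eth2 // vlans carried by the trunk port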
VxLAN
I haven't tried this yet:
Linux System 1
sudo ip link add br0 type bridge vlan_filtering 1
sudo ip link add link br0 name vlan10 type vlan id 10
sudo ip addr add 10.0.0.1/24 dev vlan10
sudo ip link add vtep10 type vxlan id 1010 local 10.1.0.1 remote 10.3.0.1 learning
sudo ip link set eth1 master br0
sudo bridge vlan add dev eth1 vid 10 pvid untagged
Linux System 2
sudo ip link add br0 type bridge vlan_filtering 1
sudo ip link add link br0 name vlan10 type vlan id 10
sudo ip addr add 10.0.0.2/24 dev vlan10
sudo ip link add vtep10 type vxlan id 1010 local 10.3.0.1 remote 10.1.0.1 learning
sudo ip link set eth1 master br0
sudo bridge vlan add dev eth1 vid 10 pvid untagged
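Since I haven't tried this, take it with a grain of salt, but I believe the VTEP still has to be attached to the bridge and mapped to vlan 10 on both systems. A sketch of the probably-missing steps (same interface names as above):
sudo ip link set vtep10 master br0
sudo bridge vlan add dev vtep10 vid 10 pvid untagged
sudo ip link set up dev br0
sudo ip link set up dev eth1
sudo ip link set up dev vtep10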
This is something I wanted to try for some time. Normally for network monitoring you use an NMS tool. They can be expensive, cheap or free. I have seen/used Observium and LibreNMS, and many years ago Cacti. There are other tools that can do the job like Zabbix/Nagios/Icinga.
But it seems time-series databases are the new standard. They give you more flexibility as you can create queries and graph them.
I decided on the InfluxDB-Telegraf-Grafana stack as I could quickly find information based on networking scenarios.
What is the role of each one:
Telegraf: collect data
InfluxDB: store data
Grafana: visualize data
My main source is again Anton’s blog. All credits to him.
Environment
My network is just three Arista cEOS containers running via Docker. All services will run as containers, so you need Docker installed. Everything is IPv4.
InfluxDB
Installation:
// Create directories
mkdir -p telemetry-example/influxdb
cd telemetry-example/influxdb
// Get influxdb config
docker run --rm influxdb influxd config > influxdb.conf
// Create local data folder for influxdb that we will map
mkdir data
ls -ltr
// Check docker status
docker images
docker ps -a
// Create docker instance for influxdb. Keep in mind that I am giving a name to the instance
docker run -d -p 8086:8086 -p 8088:8088 --name influxdb \
-v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
-v $PWD/data:/var/lib/influxdb \
influxdb -config /etc/influxdb/influxdb.conf
// Verify connectivity
curl -i http://localhost:8086/ping
// Create database "test" using http-query (link below for more details)
curl -XPOST http://localhost:8086/query --data-urlencode "q=CREATE DATABASE test"
{"results":[{"statement_id":0}]} <-- command was ok!
// Create user/pass for your db.
curl -XPOST http://localhost:8086/query --data-urlencode "q=CREATE USER xxx WITH PASSWORD 'xxx123' WITH ALL PRIVILEGES"
{"results":[{"statement_id":0}]} <-- command was ok!
// Create SSL cert for influxdb
docker exec -it influxdb openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/ssl/influxdb-selfsigned.key -out /etc/ssl/influxdb-selfsigned.crt -days 365 -subj "/C=GB/ST=LDN/L=LDN/O=domain.com/CN=influxdb.domain.com"
// Update influxdb.conf for SSL
telemetry-example/influxdb$ vim influxdb.conf
…
https-enabled = true
https-certificate = "/etc/ssl/influxdb-selfsigned.crt"
https-private-key = "/etc/ssl/influxdb-selfsigned.key"
…
// Restart influxdb to take the changes
docker restart influxdb
// Get influxdb IP for using it later
docker container inspect influxdb --format='{{ .NetworkSettings.IPAddress }}'
172.17.0.2
// Verify connectivity via https
curl -i https://localhost:8086/ping --insecure
The verification for HTTPS was a bit more difficult, because the result was always a success no matter what query I ran:
So I decided to check if there was a CLI/shell for InfluxDB (like in mysql, etc.). And yes, there is one. Keep in mind that you have to use "-ssl -unsafeSsl" at the same time! That confused me a lot.
$ docker exec -it influxdb influx -ssl -unsafeSsl
Connected to https://localhost:8086 version 1.8.1
InfluxDB shell version: 1.8.1
> show databases
name: databases
name
_internal
test
> use test
Using database test
> show series
key
cpu,cpu=cpu-total,host=5f7aa2c5550e
Links about InfluxDB that are good for the Docker creation and the HTTP queries:
Telegraf
Installation:
// Create dir
mkdir telemetry-example/telegraf
cd telemetry-example/telegraf
// Get config file to be modified
docker run --rm telegraf telegraf config > telegraf.conf
// Add the influxdb details in telegraf.conf. You also need to add the devices you want to poll; in my case 172.23.0.2/3/4.
vim telegraf.conf
....
[[outputs.influxdb]]
urls = ["https://172.17.0.2:8086"]
database = "test"
skip_database_creation = false
## Timeout for HTTP messages.
timeout = "5s"
## HTTP Basic Auth
username = "xxx"
password = "xxx123"
## Use TLS but skip chain & host verification
insecure_skip_verify = true
# Retrieves SNMP values from remote agents
[[inputs.snmp]]
## Agent addresses to retrieve values from.
## example: agents = ["udp://127.0.0.1:161"]
## agents = ["tcp://127.0.0.1:161"]
agents = ["udp://172.23.0.2:161","udp://172.23.0.3:161","udp://172.23.0.4:161"]
#
## Timeout for each request.
timeout = "5s"
#
## SNMP version; can be 1, 2, or 3.
version = 2
#
## SNMP community string.
community = "tomas123"
#
## Number of retries to attempt.
retries = 3
This is the SNMP config I added below the SNMPv3 options in [[inputs.snmp]]:
# ## Add fields and tables defining the variables you wish to collect. This
# ## example collects the system uptime and interface variables. Reference the
# ## full plugin documentation for configuration details.
[[inputs.snmp.field]]
name = "hostname"
oid = "RFC1213-MIB::sysName.0"
is_tag = true
[[inputs.snmp.field]]
name = "uptime"
oid = "DISMAN-EVENT-MIB::sysUpTimeInstance"
# IF-MIB::ifTable contains counters on input and output traffic as well as errors and discards.
[[inputs.snmp.table]]
name = "interface"
inherit_tags = [ "hostname" ]
oid = "IF-MIB::ifTable"
# Interface tag - used to identify interface in metrics database
[[inputs.snmp.table.field]]
name = "ifDescr"
oid = "IF-MIB::ifDescr"
is_tag = true
# IF-MIB::ifXTable contains newer High Capacity (HC) counters that do not overflow as fast for a few of the ifTable counters
[[inputs.snmp.table]]
name = "interfaceX"
inherit_tags = [ "hostname" ]
oid = "IF-MIB::ifXTable"
# Interface tag - used to identify interface in metrics database
[[inputs.snmp.table.field]]
name = "ifDescr"
oid = "IF-MIB::ifDescr"
is_tag = true
# EtherLike-MIB::dot3StatsTable contains detailed ethernet-level information about what kind of errors have been logged on an interface (such as FCS error, frame too long, etc)
[[inputs.snmp.table]]
name = "interface"
inherit_tags = [ "hostname" ]
oid = "EtherLike-MIB::dot3StatsTable"
# Interface tag - used to identify interface in metrics database
[[inputs.snmp.table.field]]
name = "name"
oid = "IF-MIB::ifDescr"
is_tag = true
For more info about the SNMP config in telegraf, these are good links: the official GitHub page, and the page for the SNMP input plugin that explains the differences between "field" and "table".
The link below is also really good for explaining the SNMP config in telegraf: "Gathering Data via SNMP"
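One thing these notes skip: the telegraf container itself has to be created before checking its logs. A minimal sketch (the config path is where the official telegraf image expects it):
// Create docker instance for telegraf
// (optionally map local MIBs too: -v $PWD/mibs:/usr/share/snmp/mibs:ro -- see the MIB errors below)
docker run -d --name telegraf \
-v $PWD/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
telegraf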
docker logs telegraf -f
...
2020-07-17T12:45:10Z E! [inputs.snmp] Error in plugin: initializing table interface: translating: exit status 2: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf:/usr/share/mibs/site:/usr/share/snmp/mibs:/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp
Cannot find module (EtherLike-MIB): At line 0 in (none)
EtherLike-MIB::dot3StatsTable: Unknown Object Identifier
...
You will see errors about not being able to find the MIB files! So I used the LibreNMS MIBs: I downloaded the project and copied the MIBs I thought I needed (Arista and some others that don't belong to a vendor). This is also noted by Anton in this link:
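A sketch of what I mean (LibreNMS's public repo; the target dir is assumed to be the one mapped into the telegraf container):
git clone https://github.com/librenms/librenms.git
mkdir -p telemetry-example/telegraf/mibs
// copy the generic MIBs plus the vendor ones you need (Arista in my case)
cp librenms/mibs/* telemetry-example/telegraf/mibs/ 2>/dev/null
cp librenms/mibs/arista/* telemetry-example/telegraf/mibs/
docker restart telegraf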
Grafana
I had seen Grafana before but never used it, so the configuration of queries was a bit of a challenge, but I was lucky and found very good blogs for that. The installation process is OK:
// Create folder for grafana and data
mkdir -p telemetry-example/grafana/data
cd telemetry-example/grafana
// Create docker instance
docker run -d -p 3000:3000 --name grafana \
--user root \
-v $PWD/data:/var/lib/grafana \
grafana/grafana
// Create SSL cert for grafana
docker exec -it grafana openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/ssl/grafana-selfsigned.key -out /etc/ssl/grafana-selfsigned.crt -days 365 -subj "/C=GB/ST=LDN/L=LDN/O=domain.com/CN=grafana.domain.com"
// Copy grafana config so we can update it
docker cp grafana:/etc/grafana/grafana.ini grafana.ini
// Update grafana config with SSL
vim grafana.ini
############################## Server
[server]
# Protocol (http, https, h2, socket)
protocol = https
…
# https certs & key file
cert_file = /etc/ssl/grafana-selfsigned.crt
cert_key = /etc/ssl/grafana-selfsigned.key
// Copy back the config to the container and restart
docker cp grafana.ini grafana:/etc/grafana/grafana.ini
docker container restart grafana
Now you can open Grafana in your browser at "https://0.0.0.0:3000/" using admin/admin.
You need to add a data source, which is our influxdb container. So you need to pick the "influxdb" type and fill in the values as per below.
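My notes don't keep the screenshot, but the values follow directly from the InfluxDB setup above:
URL: https://172.17.0.2:8086
Database: test
User: xxx / Password: xxx123
Skip TLS Verify: enabled (the cert is self-signed)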
Now, you need to create a dashboard with a panel.
Links that I reviewed for creating the dashboard:
For creating a panel, the link below was the best, in its section "Interface Throughput". Big thanks to the author.
BTW, you need to configure SNMP in the switches so telegraf can poll them:
snmp-server location ceoslab
snmp-server community xxx123 ro
snmp-server host 172.17.0.1 version 2c xxx123
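Before blaming telegraf, a quick way to check that a switch actually answers SNMP from the host (snmpwalk comes from the "snmp" package, as I note further down; the numeric OID avoids needing MIB files):
snmpwalk -v 2c -c xxx123 172.23.0.2 1.3.6.1.2.1.1.5 // sysName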
In my case, the Influx-Telegraf-Grafana containers are running on the default bridge. Each container has its own IP, but as the Arista containers are in a different Docker network, traffic between them has to be routed, so the telegraf container's IP is NAT-ed to 172.17.0.1 from the switches' point of view.
Next
I would like to manage all this process via Ansible… something like this, but it will take me time.
In the past, I had to use CentOS systems a lot at work, and there was something I really liked from RPM: "yum provides", which tells you which package you need to install based on the command you need.
I always struggle to do the same in Debian. I hope I remember it for the next time. Based on this link:
# aptitude install apt-file
# apt-file update
# apt-file search snmpwalk
libnet-snmp-perl: /usr/share/doc/libnet-snmp-perl/examples/snmpwalk.pl
libsnmp-session-perl: /usr/share/doc/libsnmp-session-perl/examples/snmpwalkh.pl
python3-pysnmp4-apps: /usr/bin/pysnmpwalk
python3-pysnmp4-apps: /usr/share/man/man1/pysnmpwalk.1.gz
snmp: /usr/bin/snmpwalk <=== THIS IS WHAT I WANT !!!!
snmp: /usr/share/man/man1/snmpwalk.1.gz
snmpsim: /usr/share/doc/snmpsim/examples/data/foreignformats/linux.snmpwalk.gz
snmpsim: /usr/share/doc/snmpsim/examples/data/foreignformats/winxp1.snmpwalk.gz
# aptitude install snmp
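And for a file that is already installed, dpkg can answer the reverse question:
# dpkg -S /usr/bin/snmpwalk // which installed package owns this file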
Key presses for more visual people (indenting a block in vim):
1- Enter Command Mode:
Escape
2- Move around to the start of the area to indent:
hjkl↑↓←→
3- Start a block:
v
4- Move around to the end of the area to indent:
hjkl↑↓←→
5- Type the number of indentation levels you want
0..9
6- Execute the indentation on the block:
>
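For example: Escape, v, jj, 2, > indents the three selected lines by two levels.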
I already had a key that I wanted to use, so adding it to the repo was OK.
Testing it was my challenge. I was missing two things: my key wasn't following the standard file name so it wasn't used by my ssh-agent, and I wasn't using the "git" user when testing…. I was using my GitHub username.
$ ssh-keygen -t ed25519 -C "your@email.com"
Generating public/private ed25519 key pair.
Enter file in which to save the key (/home/USERNAME/.ssh/id_ed25519): /home/USERNAME/.ssh/id_ed25519.github
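What finally made it work for me can be sketched like this (host and file names as above):
$ cat ~/.ssh/config
Host github.com
  User git
  IdentityFile ~/.ssh/id_ed25519.github
// or load the key manually: ssh-add ~/.ssh/id_ed25519.github
$ ssh -T git@github.com // note the "git" user, not your github username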
1- You need to increase the logging of your sshd (destination – server)
server# vim /etc/ssh/sshd_config
LogLevel VERBOSE
server# service sshd restart
server# tail -f /var/log/auth.log
2- From client, just ssh as usual to the server and check auth.log as per above
Jul 3 14:17:55 server sshd[8600]: Connection from IPV6 port 57628 on IPV6::453 port 64022
Jul 3 14:17:55 server sshd[8600]: Postponed publickey for client from IPv6 port 57628 ssh2 [preauth]
Jul 3 14:17:55 server sshd[8600]: Accepted publickey for client from IPv6 port 57628 ssh2: ED25519 SHA256:BtOAX9eVpFJJgJ5HzjKU8E973m+MX+3gDxsm7eT/iEQ
Jul 3 14:17:55 server sshd[8600]: pam_unix(sshd:session): session opened for user client by (uid=0)
Jul 3 14:17:55 server sshd[8600]: User child is on pid 8606
Jul 3 14:17:55 server sshd[8606]: Starting session: shell on pts/7 for client from IPv6 port 57628 id 0
3- So we have the fingerprint of the key used by the client. Now we need to get the fingerprints of our client keys to find the match:
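A one-liner that lists the SHA256 fingerprint of every local public key, to compare against auth.log:
$ for key in ~/.ssh/*.pub; do ssh-keygen -lf "$key"; done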
Once you have installed the vagrant box (it takes a while), you can "vagrant halt" it and start it again:
~/storage/technology/linux/bpftracing master$ vagrant status
Current machine states:
default poweroff (virtualbox)
The VM is powered off. To restart the VM, simply run vagrant up
~/storage/technology/linux/bpftracing master$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider…
==> default: Checking if box 'ubuntu/bionic64' version '20200525.0.0' is up to date…
==> default: Clearing any previously set forwarded ports…
==> default: Clearing any previously set network interfaces…
==> default: Preparing network interfaces based on configuration…
default: Adapter 1: nat
==> default: Forwarding ports…
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations…
==> default: Booting VM…
==> default: Waiting for machine to boot. This may take a few minutes…
default: SSH address: 127.0.0.1:2222
default: SSH username: vagrant
default: SSH auth method: private key
==> default: Machine booted and ready!
==> default: Checking for guest additions in VM…
default: The guest additions on this VM do not match the installed version of
default: VirtualBox! In most cases this is fine, but in rare cases it can
default: prevent things such as shared folders from working properly. If you see
default: shared folder errors, please make sure the guest additions within the
default: virtual machine match the version of VirtualBox you have installed on
default: your host and reload your VM.
default:
default: Guest Additions Version: 5.2.34
default: VirtualBox Version: 6.1
==> default: Mounting shared folders…
default: /vagrant => /home/xxx/storage/technology/linux/bpftracing
==> default: Machine already provisioned. Run vagrant provision or use the --provision
==> default: flag to force provisioning. Provisioners marked to run always will still run.
~/storage/technology/linux/bpftracing master$ vagrant ssh
Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-106-generic x86_64)
Documentation: https://help.ubuntu.com
Management: https://landscape.canonical.com
Support: https://ubuntu.com/advantage
System information as of Sun Jun 21 19:25:26 UTC 2020
System load: 0.35 Processes: 99
Usage of /: 32.2% of 9.63GB Users logged in: 0
Memory usage: 12% IP address for enp0s3: 10.0.2.15
Swap usage: 0%
0 packages can be updated.
0 updates are security updates.
Last login: Sun Jun 21 19:22:37 2020 from 10.0.2.2
vagrant@ubuntu-bionic:~$
vagrant@ubuntu-bionic:~$ cd /vagrant/
vagrant@ubuntu-bionic:/vagrant$ ls
Makefile Vagrantfile bpf_program.o monitor-exec
README.md bpf_program.c loader.c ubuntu-bionic-18.04-cloudimg-console.log
vagrant@ubuntu-bionic:/vagrant$
You can find tools (under /usr/sbin, already compiled and ready to use) and examples (under /usr/share/doc/bpfcc-tools/examples).
These are the tools you can find in the system:
vagrant@ubuntu-bionic:~$ ls -ltr /usr/sbin | grep -i bpfcc
-rwxr-xr-x 1 root root 3496 Nov 29 2017 reset-trace-bpfcc
-rwxr-xr-x 1 root root 7105 Nov 29 2017 deadlock_detector.c-bpfcc
-rwxr-xr-x 1 root root 9029 Mar 27 2018 zfsslower-bpfcc
-rwxr-xr-x 1 root root 5131 Mar 27 2018 zfsdist-bpfcc
-rwxr-xr-x 1 root root 8184 Mar 27 2018 xfsslower-bpfcc
-rwxr-xr-x 1 root root 4431 Mar 27 2018 xfsdist-bpfcc
-rwxr-xr-x 1 root root 6825 Mar 27 2018 wakeuptime-bpfcc
-rwxr-xr-x 1 root root 2636 Mar 27 2018 vfsstat-bpfcc
-rwxr-xr-x 1 root root 1177 Mar 27 2018 vfscount-bpfcc
-rwxr-xr-x 1 root root 2978 Mar 27 2018 ttysnoop-bpfcc
-rwxr-xr-x 1 root root 31977 Mar 27 2018 trace-bpfcc
-rwxr-xr-x 1 root root 4159 Mar 27 2018 tplist-bpfcc
-rwxr-xr-x 1 root root 17766 Mar 27 2018 tcptracer-bpfcc
-rwxr-xr-x 1 root root 9327 Mar 27 2018 tcptop-bpfcc
-rwxr-xr-x 1 root root 5631 Mar 27 2018 tcpretrans-bpfcc
-rwxr-xr-x 1 root root 11996 Mar 27 2018 tcplife-bpfcc
-rwxr-xr-x 1 root root 6858 Mar 27 2018 tcpconnlat-bpfcc
-rwxr-xr-x 1 root root 6963 Mar 27 2018 tcpconnect-bpfcc
-rwxr-xr-x 1 root root 5782 Mar 27 2018 tcpaccept-bpfcc
-rwxr-xr-x 1 root root 12809 Mar 27 2018 syscount-bpfcc
-rwxr-xr-x 1 root root 1231 Mar 27 2018 syncsnoop-bpfcc
-rwxr-xr-x 1 root root 4560 Mar 27 2018 statsnoop-bpfcc
-rwxr-xr-x 1 root root 15860 Mar 27 2018 stackcount-bpfcc
-rwxr-xr-x 1 root root 6244 Mar 27 2018 sslsniff-bpfcc
-rwxr-xr-x 1 root root 6277 Mar 27 2018 solisten-bpfcc
-rwxr-xr-x 1 root root 4048 Mar 27 2018 softirqs-bpfcc
-rwxr-xr-x 1 root root 3409 Mar 27 2018 slabratetop-bpfcc
-rwxr-xr-x 1 root root 5643 Mar 27 2018 runqlen-bpfcc
-rwxr-xr-x 1 root root 5998 Mar 27 2018 runqlat-bpfcc
-rwxr-xr-x 1 root root 58 Mar 27 2018 rubystat-bpfcc
-rwxr-xr-x 1 root root 60 Mar 27 2018 rubyobjnew-bpfcc
-rwxr-xr-x 1 root root 56 Mar 27 2018 rubygc-bpfcc
-rwxr-xr-x 1 root root 58 Mar 27 2018 rubyflow-bpfcc
-rwxr-xr-x 1 root root 59 Mar 27 2018 rubycalls-bpfcc
-rwxr-xr-x 1 root root 60 Mar 27 2018 pythonstat-bpfcc
-rwxr-xr-x 1 root root 58 Mar 27 2018 pythongc-bpfcc
-rwxr-xr-x 1 root root 60 Mar 27 2018 pythonflow-bpfcc
-rwxr-xr-x 1 root root 61 Mar 27 2018 pythoncalls-bpfcc
-rwxr-xr-x 1 root root 9831 Mar 27 2018 profile-bpfcc
-rwxr-xr-x 1 root root 1139 Mar 27 2018 pidpersec-bpfcc
-rwxr-xr-x 1 root root 57 Mar 27 2018 phpstat-bpfcc
-rwxr-xr-x 1 root root 57 Mar 27 2018 phpflow-bpfcc
-rwxr-xr-x 1 root root 58 Mar 27 2018 phpcalls-bpfcc
-rwxr-xr-x 1 root root 4858 Mar 27 2018 opensnoop-bpfcc
-rwxr-xr-x 1 root root 2337 Mar 27 2018 oomkill-bpfcc
-rwxr-xr-x 1 root root 11141 Mar 27 2018 offwaketime-bpfcc
-rwxr-xr-x 1 root root 10464 Mar 27 2018 offcputime-bpfcc
-rwxr-xr-x 1 root root 58 Mar 27 2018 nodestat-bpfcc
-rwxr-xr-x 1 root root 56 Mar 27 2018 nodegc-bpfcc
-rwxr-xr-x 1 root root 9289 Mar 27 2018 nfsslower-bpfcc
-rwxr-xr-x 1 root root 4587 Mar 27 2018 nfsdist-bpfcc
-rwxr-xr-x 1 root root 3221 Mar 27 2018 mysqld_qslower-bpfcc
-rwxr-xr-x 1 root root 12023 Mar 27 2018 mountsnoop-bpfcc
-rwxr-xr-x 1 root root 17963 Mar 27 2018 memleak-bpfcc
-rwxr-xr-x 1 root root 2262 Mar 27 2018 mdflush-bpfcc
-rwxr-xr-x 1 root root 3429 Mar 27 2018 llcstat-bpfcc
-rwxr-xr-x 1 root root 3295 Mar 27 2018 killsnoop-bpfcc
-rwxr-xr-x 1 root root 61 Mar 27 2018 javathreads-bpfcc
-rwxr-xr-x 1 root root 58 Mar 27 2018 javastat-bpfcc
-rwxr-xr-x 1 root root 60 Mar 27 2018 javaobjnew-bpfcc
-rwxr-xr-x 1 root root 56 Mar 27 2018 javagc-bpfcc
-rwxr-xr-x 1 root root 58 Mar 27 2018 javaflow-bpfcc
-rwxr-xr-x 1 root root 59 Mar 27 2018 javacalls-bpfcc
-rwxr-xr-x 1 root root 5154 Mar 27 2018 hardirqs-bpfcc
-rwxr-xr-x 1 root root 3852 Mar 27 2018 gethostlatency-bpfcc
-rwxr-xr-x 1 root root 7124 Mar 27 2018 funcslower-bpfcc
-rwxr-xr-x 1 root root 7442 Mar 27 2018 funclatency-bpfcc
-rwxr-xr-x 1 root root 12448 Mar 27 2018 funccount-bpfcc
-rwxr-xr-x 1 root root 5847 Mar 27 2018 filetop-bpfcc
-rwxr-xr-x 1 root root 7235 Mar 27 2018 fileslower-bpfcc
-rwxr-xr-x 1 root root 3718 Mar 27 2018 filelife-bpfcc
-rwxr-xr-x 1 root root 9605 Mar 27 2018 ext4slower-bpfcc
-rwxr-xr-x 1 root root 5674 Mar 27 2018 ext4dist-bpfcc
-rwxr-xr-x 1 root root 5944 Mar 27 2018 execsnoop-bpfcc
-rwxr-xr-x 1 root root 20036 Mar 27 2018 deadlock_detector-bpfcc
-rwxr-xr-x 1 root root 3920 Mar 27 2018 dcstat-bpfcc
-rwxr-xr-x 1 root root 4009 Mar 27 2018 dcsnoop-bpfcc
-rwxr-xr-x 1 root root 3780 Mar 27 2018 dbstat-bpfcc
-rwxr-xr-x 1 root root 7130 Mar 27 2018 dbslower-bpfcc
-rwxr-xr-x 1 root root 12614 Mar 27 2018 cpuunclaimed-bpfcc
-rwxr-xr-x 1 root root 4975 Mar 27 2018 cpudist-bpfcc
-rwxr-xr-x 1 root root 57 Mar 27 2018 cobjnew-bpfcc
-rwxr-xr-x 1 root root 4142 Mar 27 2018 capable-bpfcc
-rwxr-xr-x 1 root root 6960 Mar 27 2018 cachetop-bpfcc
-rwxr-xr-x 1 root root 4932 Mar 27 2018 cachestat-bpfcc
-rwxr-xr-x 1 root root 9887 Mar 27 2018 btrfsslower-bpfcc
-rwxr-xr-x 1 root root 6214 Mar 27 2018 btrfsdist-bpfcc
-rwxr-xr-x 1 root root 2392 Mar 27 2018 bpflist-bpfcc
-rwxr-xr-x 1 root root 1721 Mar 27 2018 bitesize-bpfcc
-rwxr-xr-x 1 root root 6171 Mar 27 2018 biotop-bpfcc
-rwxr-xr-x 1 root root 4869 Mar 27 2018 biosnoop-bpfcc
-rwxr-xr-x 1 root root 4023 Mar 27 2018 biolatency-bpfcc
-rwxr-xr-x 1 root root 1567 Mar 27 2018 bashreadline-bpfcc
-rwxr-xr-x 1 root root 33534 Mar 27 2018 argdist-bpfcc
vagrant@ubuntu-bionic:~$
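For a quick test of one of them (they need root), opensnoop traces open() syscalls system-wide until you press Ctrl-C:
vagrant@ubuntu-bionic:~$ sudo opensnoop-bpfcc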