GCP – Basic Debian VM

Template for setting up a basic Debian VM with python virtualenv and pyenv, running python3 by default.

$ sudo aptitude update
$ sudo aptitude dist-upgrade

$ sudo apt-get install -y python3-pip
$ sudo apt-get install -y git
$ sudo adduser --home /home/USER --shell /bin/bash USER   # replace USER with your username
$ sudo usermod -a -G sudo USER

$ vim .bashrc

#
# Python configuration
#
# pyenv: https://github.com/yyuu/pyenv
# git clone https://github.com/yyuu/pyenv.git ~/.pyenv
# git clone https://github.com/yyuu/pyenv-virtualenvwrapper.git ~/.pyenv/plugins/pyenv-virtualenvwrapper
#
# virtualenvwrapper: http://virtualenvwrapper.readthedocs.org/en/latest/
#
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
if [ ! -d "$PYENV_ROOT" ]; then
    git clone https://github.com/pyenv/pyenv.git ~/.pyenv
fi
if [ ! -d "$PYENV_ROOT/plugins/pyenv-virtualenvwrapper" ]; then
    mkdir -p "$PYENV_ROOT/plugins"
    git clone https://github.com/yyuu/pyenv-virtualenvwrapper.git "$PYENV_ROOT/plugins/pyenv-virtualenvwrapper"
fi
if type "pyenv" &> /dev/null; then
    eval "$(pyenv init -)"
    # TODO: make the prompt work for python and ruby
    __pyversion () {
        if type "python" &> /dev/null; then
            pyenv_python_version=$(pyenv version | sed -e 's/ .*//')
            printf '%s' "$pyenv_python_version"
        fi
    }
    # Initialize virtualenvwrapper only when pip is available in the active pyenv
    if pyenv which pip &> /dev/null; then
        pyenv virtualenvwrapper
    fi
    # Prefix the prompt with the active pyenv version
    export PS1="py:\$(__pyversion)|$PS1"
fi
export PROJECT_HOME=~/git
export PYTHONDONTWRITEBYTECODE=1
# end python

$ bash   # start a new shell so the updated .bashrc takes effect

$ sudo apt-get install -y --no-install-recommends make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev libffi-dev liblzma-dev tcpdump tree

$ pyenv install 3.7.3
$ pyenv global 3.7.3
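
As a quick smoke test (a sketch, assuming a fresh shell so the .bashrc above has run and pyenv has initialized virtualenvwrapper; scratch-env is just an example name), create and discard a throwaway virtualenv:

$ python --version
Python 3.7.3
$ mkvirtualenv scratch-env
$ python -c 'import sys; print(sys.prefix)'
$ deactivate
$ rmvirtualenv scratch-env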

GCP Networking 101 – IP Forwarding

I had my shiny and tiny GCP network for EVE-NG to test vEOS. I built a new VM (vm2) to be my automation hub so I could test things like Ansible, NAPALM, Nornir, etc. But I couldn't ping the vEOS instances in EVE-NG (vm1) from vm2. Those instances were in a different network attached to vm1, so vm1 had to "route".

As usual, I missed one step when I created the EVE-NG VM. The official documentation doesn't mention anything about enabling routing in the VM. Since I am not used to cloud environments, I assumed that any plain Linux VM could forward traffic if configured to do so.

Surprise, surprise. In GCP (I'm not sure about other cloud providers) you need to enable "IP forwarding" when the VM is created, and you can't change it afterwards in any way.
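
For reference, this is roughly how it looks with the gcloud CLI (a sketch; the instance name and zone are made up, and --can-ip-forward can only be set at creation time):

$ gcloud compute instances create eve-ng-vm --zone europe-west1-b --can-ip-forward
# Verify the flag on an existing instance
$ gcloud compute instances describe eve-ng-vm --zone europe-west1-b --format='get(canIpForward)'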

After checking the second guide I followed, I realised that it did mention enabling forwarding, precisely to avoid the problem I was facing…

So I gave up and rebuilt both VMs from scratch…

But in the end, I have routing enabled in both VMs and I can ping the vEOS images.

And another annoying thing: I couldn't update the next hop of a static route defined in the VPC. I had to delete the route and create it again pointing to the new VM with the vEOS instances.
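
Something along these lines (a sketch; the route name, destination range, and instance name are hypothetical):

# There is no way to update the next hop in place: delete and recreate
$ gcloud compute routes delete route-to-veos --quiet
$ gcloud compute routes create route-to-veos --destination-range 10.0.99.0/24 --next-hop-instance eve-ng-vm --next-hop-instance-zone europe-west1-b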

And then there was dealing with the internal IPs…

Moving on, quite a frustrating day, but I learned several things about GCP networking.

Presigned URLs in S3


S3 is the Amazon service to store files in the cloud. It is reliable, very reliable: Amazon quotes eleven nines of durability, which means that if you store 10 million objects you can expect to lose a single one, on average, once every 10,000 years. Even other Amazon services use S3 internally to store their files. On the bad side, as it is one of the first services Amazon created, it can be a headache to fine-tune permissions across all its capabilities and evolutions, making it difficult to be sure that a file is not accessible to those who should not be allowed.

In S3 you can define what they call a bucket, which is like a directory in a filesystem. The name of the bucket must be unique, not only within your account but across the global namespace of all AWS accounts in the world. That means you have to be creative when picking a bucket name.

A bucket can be private or publicly accessible. On the public side, one of the special uses is serving static content as a web server, even HTML pages under your custom domain. But what if you want to allow users to download a file, for example an image, without letting them make it effectively public by sharing the link?

Today I've played with a very useful feature for that case: presigned URLs. They let you keep a bucket private while temporarily allowing access to a single file, via GET or even PUT/POST, for a limited amount of time. You'll need the AWS SDK for your favourite supported programming language, or the AWS CLI from the command line, to ask the AWS API for a temporarily authorized URL. Let's see how with an example from scratch, installing and using the AWS CLI in a Debian-based environment.

Make sure you have access to an AWS account (you already have one if you have an amazon.com account) and generate a pair of AWS Access Key and AWS Secret Access Key from the web console.

$> sudo apt install awscli
$> aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: eu-west-1
Default output format [None]:
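
Under the hood, aws configure just writes two plain-text INI files under ~/.aws, which is handy to know when you need to rotate credentials later (keys elided here):

$> cat ~/.aws/credentials
[default]
aws_access_key_id = ...
aws_secret_access_key = ...
$> cat ~/.aws/config
[default]
region = eu-west-1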

Create a local file called piticli with whatever content you prefer. Let's also create a new S3 bucket using the AWS CLI:

# Create a convenience environment variable with a kind of random bucket name
$> BN="s3://thomarite-blog-test-$RANDOM"
# Let's actually create the bucket
$> aws s3 mb $BN 
make_bucket: thomarite-blog-test-1337
# Let's see it exists
$> aws s3 ls
2020-04-16 23:01:27 thomarite-blog-test-1337
# Now let's upload piticli into the new bucket
$> aws s3 cp piticli $BN
2020-04-17 23:01:45          26 piticli

Now let's create a presigned URL for piticli and store it in the PRESIGNED_URL env var. As you can see, the temporary URL includes the bucket name, the file name, an AWS Access Key, a signature, and the expiration date (the Expires parameter):

# Store the URL into a env var for future use
$> PRESIGNED_URL=$(aws s3 presign $BN/piticli)
$> echo $PRESIGNED_URL
https://s3.eu-west-1.amazonaws.com/thomarite-blog-test-1337/piticli?AWSAccessKeyId=AKIAYSFFLHZCQSEPMZEF&Signature=x%2BWzELvYpzdVipOd67ez0z3Esws%3D&Expires=1587077637

That's a public URL, valid for 1 hour by default. You can set the expiration time in the aws s3 presign command via the --expires-in parameter, which takes the number of seconds until the URL expires.
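
For example, to generate a URL that expires after five minutes:

$> aws s3 presign $BN/piticli --expires-in 300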

Now you have a public URL accessible from any browser. Let's open it via curl:

$> curl -Ls $PRESIGNED_URL
piticli is now… sleeping
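
Once the expiration time passes, the same URL stops working and S3 answers with an XML error (roughly like this):

$> curl -Ls $PRESIGNED_URL
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Request has expired</Message>...</Error>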

And finally, to clean things up, let's remove all the files and the bucket in AWS:

$> aws s3 rb --force $BN
delete: s3://thomarite-blog-test-1337/piticli
remove_bucket: thomarite-blog-test-1337