
Backup to AWS S3 with multi-factor delete protection and encryption


Having secure automated backups means a lot to me. This blog post outlines a way to create encrypted backups and push them into an AWS S3 bucket protected by MFA and versioning, all with one command.

There are four parts to getting the backup big picture right:

Step 1 – secure your data at rest

If you don’t secure your data at rest, all it takes is physical access to get into everything. The first thing I do with a new machine is turn on whatever built-in encryption is available. macOS has FileVault. Ubuntu offers full disk encryption plus home folder encryption. With the speed advantages of SSDs I don’t notice a performance penalty with encryption turned on.
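
If you want to double-check that encryption is actually on, both systems make this easy to verify from a terminal. This is just a quick sanity check, assuming a stock macOS or Ubuntu install:

# macOS: reports whether FileVault is on or off
$ fdesetup status

# Ubuntu: look for a "crypt" device in the block device tree
$ lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT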

Step 2 – backup to an external hard drive

On Mac I use Time Machine with an encrypted external HDD as a local physical backup.

Step 3 – push to the cloud

In case of theft or loss of the local drives, my backups are encrypted and then pushed to the cloud. There are lots of cloud backup providers out there, but AWS S3 is an economical option if you want to do it yourself. The setup is much easier than it used to be!

As of November 2016, S3 storage costs about $0.03/GB per month for standard storage, and you can get it down even more by using Infrequent Access storage or Glacier. Since I’m not backing up that much, maybe 20GB, that works out to roughly 20GB × $0.03 ≈ $0.60 per month, a significant savings over services that charge $10/month. Most importantly I have more control over my data, which is priceless.

Step 4 – verify your backups

Periodically restore your backups from Steps 2 and 3. Until you have verified your backups, you don’t have backups, you just think you do.

Overview for getting AWS S3 going:

  1. Enable AWS MFA (multi-factor authentication) on the AWS root account.
  2. Create a bucket that is versioned, private, encrypted, and has a 1 year retention policy.
  3. Set up a sync job user in AWS IAM (identity and access management).
  4. Install AWS CLI tools, authenticate as the user created in step 3, and start syncing data!
  5. At this point you can either sync your files unencrypted and rely on S3’s server-side encryption (which requires some extra configuration of the IAM user and the s3 sync command, sketched just below), or you can make tar files, encrypt those with something like gpg (Linux, Mac), and push the encrypted archives to the cloud. This article explains the latter.
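
For completeness, here is roughly what the server-side encryption route from item 5 looks like. Treat it as a sketch: the --sse flag asks S3 to encrypt each object at rest, and BUCKET_NAME_HERE is a placeholder just like in the rest of this post.

# Ask S3 to encrypt each uploaded object at rest with AES-256 (SSE-S3)
$ aws s3 sync ~/folder s3://BUCKET_NAME_HERE/folder --delete --sse AES256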

With this setup, if files were deleted maliciously from S3, the root account can go into the bucket’s version history and restore them. That in turn requires the password PLUS the authenticator device, making it that much more secure.

ALERT / DANGER / DISCLAIMER: I’m not a security expert, I’m an applications developer. If you follow this guide, you are doing so at your own risk! I take zero responsibility for damages, lost data, security breaches, acts of god, and whatever else goes wrong on your end. Furthermore, I disclaim all types of warranties from the information provided in this blog post!

Detailed AWS Bucket Setup with Sample Commands:

1) Enable AWS MFA

AWS root logins can be set up with multi-factor authentication. When enabled, you log in with your password plus an authenticator code from a physical device (like your phone); the authenticator app is free. In theory this protects against malicious destruction of your data in case your root password is compromised. The downside is that if you lose the authenticator device, you have to email AWS and prove who you are to get it re-issued. ENABLE MFA AT YOUR OWN DISCRETION. Make sure to read the docs and understand how MFA works.

To enable MFA, login to your root AWS account, click on the account menu in the upper right and go to ‘Security Credentials’. You’ll need to install an authenticator app on your smart phone and scan a QR code. The instructions will walk you through it.


2) Create a new S3 bucket

Navigate to the ‘S3’ module, and create a new bucket.

  1. Pick a region on a different continent for maximum disaster recovery potential. I went with Ireland for price and geographic reasons; there are many options to choose from.
  2. Under Properties, Versioning, enable versioning on the bucket.
  3. If you want to automatically clean up old deleted files, you might set up a rule under Lifecycle, Add Rule, Whole Bucket, Action on previous version (a CLI sketch of these bucket steps follows below):
    1. Check Permanently Delete – 365 days after becoming a previous version
    2. Check “remove expired object delete marker”

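Everything above is done in the S3 console. If you prefer the command line, the same bucket setup looks roughly like this using the AWS CLI’s s3api commands. This is a sketch, with BUCKET_NAME_HERE and the Ireland region (eu-west-1) as placeholders:

# Create the bucket in eu-west-1 (outside us-east-1 you must pass a LocationConstraint)
$ aws s3api create-bucket --bucket BUCKET_NAME_HERE --region eu-west-1 \
    --create-bucket-configuration LocationConstraint=eu-west-1

# Turn on versioning so overwritten or deleted files are kept as previous versions
$ aws s3api put-bucket-versioning --bucket BUCKET_NAME_HERE \
    --versioning-configuration Status=Enabled

# Permanently delete previous versions after 365 days and clean up expired delete markers
$ aws s3api put-bucket-lifecycle-configuration --bucket BUCKET_NAME_HERE \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "expire-old-versions",
        "Status": "Enabled",
        "Filter": {"Prefix": ""},
        "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        "Expiration": {"ExpiredObjectDeleteMarker": true}
      }]
    }'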

3) Create a sync job user.

A new user with a very narrow permission set will be used to back up your data into the bucket you just created. The sync user can only list, read, write, and delete objects in this one bucket and nothing else. Importantly, the sync user is not allowed to delete the bucket itself!

Under AWS’s Identity and Access Management (IAM), add a new user ‘sync-user’. This user does have object delete access, but because the bucket is versioned the data is still there, just flagged as deleted. Make sure to save the access key and secret key it generates somewhere safe, like your KeePass file.

Give the new user the following custom inline policy. Click on the newly created user, go to the Permissions tab, expand Inline Policies, click Create User Policy, select Custom Policy. Name it something like ‘backup-policy’.


For the Policy Document, copy the following verbatim, except the bucket name. Replace BUCKET_NAME_HERE, which appears in two places, with the name of your bucket.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME_HERE"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME_HERE/*"
            ]
        }
    ]
}
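
If you would rather script this part as well, a rough CLI equivalent follows. It assumes you run it as an admin user, that the policy document above is saved locally as backup-policy.json, and that ‘sync-user’ and ‘backup-policy’ are just the names used in this post:

# Create the narrowly scoped backup user
$ aws iam create-user --user-name sync-user

# Attach the policy document above as an inline policy
$ aws iam put-user-policy --user-name sync-user --policy-name backup-policy \
    --policy-document file://backup-policy.json

# Generate the access key / secret key pair used with 'aws configure' below
$ aws iam create-access-key --user-name sync-user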

4) Get AWS CLI setup and start syncing!

Amazon provides a set of command line tools called the AWS CLI. One of the commands is s3 sync, which syncs files to an S3 bucket, including subdirectories. No more writing your own script or using a 3rd party library (finally).

  1. To install AWS CLI on Mac:
    sudo pip install awscli --ignore-installed six

    See the official AWS CLI install instructions page for more details.

  2. Configure AWS CLI: in a terminal, run the following:
    $ aws configure

    Paste in the keys from the user created in step 3 above.

To sync a folder to the bucket, run the following command in the terminal. Replace BUCKET_NAME_HERE with your bucket’s actual name.

$ aws s3 sync ~/folder s3://BUCKET_NAME_HERE/folder --delete

Read more on the aws s3 sync command here.
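
To spot-check what actually landed in the bucket, and to see the version history that makes the restore described earlier possible, something like the following works. Note that listing versions needs permissions the sync user deliberately doesn’t have, so run it from an account with broader access; the object key here is a placeholder:

# List what the sync job has uploaded
$ aws s3 ls s3://BUCKET_NAME_HERE/folder/ --recursive

# Show every version and delete marker for a given file
$ aws s3api list-object-versions --bucket BUCKET_NAME_HERE --prefix folder/somefile.txt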

5) A sample script that tars everything, encrypts, then copies to S3:

To get this working, you’ll need to install gpg and set up a public/private key pair with a very strong password. Make sure to back up that key. I have an article on GPG here.
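
If you don’t already have a key pair, generating and checking one looks roughly like this (the exact prompts vary by gpg version; the key name you choose here is what goes in the --recipient flag below):

# Generate a new public/private key pair; choose a very strong passphrase
$ gpg --gen-key

# Confirm the key exists and note the name/email to use with --recipient
$ gpg --list-keys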

#!/bin/bash
date=`date +%Y-%m-%d`
echo -------------  Backup starting at $(date): -------------------


echo "Copying files from my home folder, Desktop, etc into ~/Documents/ so they are part of the backup "
cp -r ~/.ssh ~/Documents/zzz_backup/home/ssh
cp -r ~/Desktop/ ~/Documents/zzz_backup/Desktop
cp ~/.bash_profile ~/Documents/zzz_backup/home/bash_profile
cp ~/.bash_profile ~/Documents/zzz_backup/home/bash_profile
cp ~/.bash_prompt ~/Documents/zzz_backup/home/bash_prompt
cp ~/.path ~/Documents/zzz_backup/home/path
cp ~/.gitconfig ~/Documents/zzz_backup/home/gitconfig
cp ~/.my.cnf ~/Documents/zzz_backup/home/my.cnf
cp ~/backups/make_backup.sh ~/Documents/zzz_backup/home/

echo "Clearing the way in case the backup already ran today"
rm -f ~/backups/*$date*


echo "Making archives by folder"
cd ~/

echo "    Documents..."
tar -zcvf ~/backups/$date-documents.tar.gz ~/Documents
# .. you may want to backup other files here, like your email, project files, etc


echo "GPGing the tar.gz files"
cd ~/backups
gpg --recipient {put your key name here} --encrypt $date-documents.tar.gz
# ... again, add more files here as needed

# NOTE: to decrypt run gpg --output filename.tar.gz --decrypt encryptedfile.tar.gz.gpg

echo "Removing the tar.gz files"
rm $date*.tar.gz

echo "Syncing to S3"
/usr/local/bin/aws s3 sync ~/backups s3://my-backups/backups --delete

echo Done!
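
To make this truly hands-off, you can schedule the script with cron. This assumes the script is saved as ~/backups/make_backup.sh, matching the cp line above:

# Edit the crontab with `crontab -e`, then run the backup every day at 01:30,
# appending output to a log file
30 1 * * * /bin/bash $HOME/backups/make_backup.sh >> $HOME/backups/backup.log 2>&1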
