Towards Continuous Integration
I've been using Ansible for deployment automation for a little while now and it's working really well for me. A Django website behind a load balancer doesn't take long to get complicated, and that's exactly where Ansible is really helpful.
To deploy you'll often need to pull the code from the repository on at least two EC2 instances, run migrations, run `npm run build` for webpack (or whatever it is), collect static assets, clear the CDN cache and so on. With Ansible playbooks you can build up a library of reusable 'roles' that perform these tasks, and with a few variables you can get a long way towards fairly effortless deployment to a range of environments, as well as standardising the provisioning of new servers if needed.
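For a flavour of what those roles look like, here's a sketch of a deploy role's task file. The modules (`git`, `django_manage`) are real Ansible modules, but the variable names and layout are hypothetical, not lifted from my repo:

```yaml
# roles/deploy/tasks/main.yml -- a hypothetical sketch of the steps above
- name: Pull the latest code
  git:
    repo: "{{ repo_url }}"
    dest: "{{ project_root }}"
    version: "{{ branch }}"

- name: Run database migrations
  django_manage:
    command: migrate
    app_path: "{{ project_root }}"

- name: Build frontend assets
  command: npm run build
  args:
    chdir: "{{ project_root }}"

- name: Collect static assets
  django_manage:
    command: collectstatic
    app_path: "{{ project_root }}"
```

Point `ansible-playbook` at an inventory of your EC2 instances and the same role runs on each of them; swapping the variables file is all it takes to target staging instead of production.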
So I wanted to combine this with some sort of Continuous Integration so I didn't have to worry about deploying at all - particularly to a non-mission-critical staging server. Wouldn't it be nice if I could approve a pull request from someone in the team, and have it deployed to a staging server a few minutes later without ever having to touch it?
My criteria for the setup were:
- use Ansible for deployment (we're very dependent on it)
- no need to intervene at all, which means no password prompts
- no passwords in the repo
I decided to go with Travis CI because of its seamless GitHub integration. Setup is super simple: you just need a `.travis.yml` file in the root of your project and away you go. Mine looks something like this:
```yaml
---
language: python
python:
  - 3.6.4
services: postgresql
before_install:
  - sudo apt-get install sshpass
install:
  - pip install -r requirements.txt
addons:
  ssh_known_hosts: xx.xxx.xx.xx
before_script:
  - psql -c "CREATE DATABASE my_db;" -U postgres
script:
  - pytest --ds=config.settings.travis
  - flake8
after_success:
  - bash deploy.sh
branches:
  only:
    - master
    - development
env:
  global:
    - ANSIBLE_HOST_KEY_CHECKING=False
    - secure: verysecurehash
notifications:
  slack:
    secure: verysecurehash
```
The first few lines are pretty obvious: we want a Python environment, using Python 3.6.4, with Postgres.
We need to install `sshpass` so that we can run ssh "using the mode referred to as 'keyboard-interactive' password authentication, but in non-interactive mode." Essentially, we need to be able to provide a password without using the keyboard (zero intervention). I'll come back to this.
The `ssh_known_hosts` entry is just a list of the IP addresses we deploy to, so we don't get the "The authenticity of host can't be established" prompt and have to manually approve each one.
We create a database for the tests.
Then it's business time. We run the test suite with `pytest`, using a special config file that I keep in the repository. This config file simply extends the `base.py` settings file and adds a test database config:
```python
# config/settings/travis.py
from .base import *  # noqa

DEBUG = True

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'my_db',
        'USER': 'postgres',
        'PASSWORD': '',
        'HOST': 'localhost',
        'PORT': '',
    }
}
```
Travis then runs `flake8` so that I know that any code I'm given to review is formatted correctly. If either `pytest` or `flake8` fails, the build breaks and I get a notification on Slack. If they both pass, there's a bash script - `deploy.sh` - in the project root which is run next and looks like this:
```bash
#!/bin/bash
cd "$TRAVIS_BUILD_DIR/ansible"

if [ "$TRAVIS_BRANCH" == "development" ]; then
  if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    ansible-playbook -i staging deploy-staging.yml --vault-id ../.vault_pass
  fi
fi

if [ "$TRAVIS_BRANCH" == "master" ]; then
  if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    ansible-playbook -i production deploy-production.yml --vault-id ../.vault_pass
  fi
fi
```
Essentially all this does is run my Ansible Playbook, deploying to staging if the branch is "development" and production if the branch is "master".
This is where things start to get funky though. As you'll remember, my criteria are that there are no passwords in the repository and that zero intervention is needed.
The `.vault_pass` file is in the project root and looks like this:

```python
#!/usr/bin/env python
import os

print(os.environ['VAULT_PASSWORD'])
```
All it does is output the environment variable `VAULT_PASSWORD`. This environment variable is set in the `env` directive of the `.travis.yml` file, but it's encrypted using Travis' encryption keys: a command line tool (written in Ruby) encrypts your data and provides you with a secure hash, which I've represented above with 'verysecurehash'.
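For what it's worth, generating that hash is a one-liner with the Travis CLI - something like `travis encrypt VAULT_PASSWORD=your-real-vault-password --add env.global` (the value here is obviously a placeholder). It encrypts the variable against your repository's key and appends an entry along these lines:

```yaml
# Appended to .travis.yml by the Travis CLI (hypothetical invocation):
#   travis encrypt VAULT_PASSWORD=your-real-vault-password --add env.global
env:
  global:
    - secure: verysecurehash
```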
So now the build has the Ansible Vault password. Then in my `group_vars` for each environment I have the `ansible_become_pass`, `ansible_ssh_user` and `ansible_ssh_pass` variables set, with both passwords encrypted by Ansible Vault. This is also where `sshpass` comes in, because it allows Ansible to ssh into the server with that password and deploy your code.
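Those `group_vars` files end up looking roughly like this - a sketch only, with placeholder user names and ciphertexts. One way to produce the encrypted values is `ansible-vault encrypt_string --vault-id ../.vault_pass 'the-ssh-password' --name 'ansible_ssh_pass'`, pasting the output in:

```yaml
# group_vars/staging.yml -- a sketch; values are placeholders
ansible_ssh_user: deploy
ansible_ssh_pass: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...
ansible_become_pass: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...
```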
And there you have it: the code is fully tested, then deployed when a pull request is accepted, and I don't have to worry about constantly keeping servers up to date.
It's not perfect
There are a few issues with this approach that I'm working through:
- You need to add all of Travis' IP addresses to your AWS Security Group. Not a big deal but a bit annoying.
- You need to have your Ansible Playbooks in your repository. It's not the end of the world but it might be a bit nicer with more of a separation of concerns.
- If the deployment fails, the build doesn't, so you aren't notified by Travis if, for example, your webpack build fails mid-deployment.
- The setup assumes a static list of IP addresses to deploy to. That's fine for me, but with a bigger setup and horizontal scaling it might become an issue.
- Javascript tests aren't run... yet
- The deploy.sh file is a little buggy: it will deploy to both staging and production if I merge into master. Not a big deal, I just haven't got around to fixing it yet.