A Checkered Past
When our company was founded – long before the advent of GitHub or other cloud-based social coding hubs – we used to deploy our new code to the website by copying files over the network. And in ten years or so, not much changed.
We ‘upgraded’ the process to allow us to upload more than one file at a time, but we still had to choose each file from a list of thousands, and you can bet occasionally we made mistakes. When we added more servers, we added functionality to upload to each in parallel, but we still found we sometimes missed files, or uploaded dependencies in the wrong order. This nagged at us as a team – but we focused on other – more important – issues, such as scaling and new features.
We accepted this as the status quo until a few years ago, when we read about how Etsy deploy using their Deployinator application. We realised then that we could make our deployment process better – way better. We too could have a ‘one click’ deploy! This is the short story of how we built an application to do it.
Getting our ducks in a row with Mercurial
The first thing we decided to do was move from SVN to Mercurial. This was something we had in the pipeline anyway, as we wanted to leverage more of the DVCS features such as easy branching. Once we made that step, we created one golden rule: anything pushed to master could go live at any time.
We knew the next step: we would purge all the existing code on our servers; all the code that we had uploaded but had no easy way to delete; all the code uploaded to the incorrect location. We would then clone each web server from the clean main Mercurial repository. But before we did that, we had to have an automatic and fast way to deploy a given changeset to each server. So we took a step back to think about how we were going to build a web application to allow us ‘click to deploy’.
Since, as an organisation, we are moving from ColdFusion to JRuby, we decided to build a Sinatra web application using some custom Mercurial command line wrapper classes to serve as our deployment application. We used Twitter Bootstrap to make it look pretty. But we needed a name for the application, and thus Instaploy was born – a web application that deploys instantly(ish).
Instaploy works as follows:
- We break down our applications into ‘stacks’.
- Each stack works from a different Mercurial repository, and Instaploy has a set of local repositories cloned from master so it has the big picture.
- Each stack has different pre-deploy and post-deploy hooks, or ‘house-keeping tasks’ as we called them.
- When someone wants to deploy their code they:
- Push to master (default branch)
- Log into Instaploy
- They then see the last change deployed, and any pending deployments; we just keep a list of completed deployments in a database table.
- They click to deploy their changeset and they see a summary of their changes (files changed etc.) and a button asking them if they want to ‘lock’ the stack.
- The application also determines from the code paths in the changeset whether any housekeeping tasks are required post-deploy, such as application reloads for important cached data etc. These promises – to be fulfilled post-deploy – are displayed to the user.
- If they choose to proceed the stack is locked, thus preventing anyone else from deploying until they are done.
- During this period, they are able to upload any dependencies their code might require, such as database changes etc.
- Once they have satisfied any dependencies they click the ‘Deploy Now’ button and, after about 30 seconds of waiting, their changesets are ‘pushed’ to all production machines in the cluster.
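The lock-then-deploy flow above can be sketched as a small Ruby class. This is only an illustration – the class, method names and in-memory log are hypothetical, not Instaploy’s actual code (the real application records completed deployments in a database table):

```ruby
# A minimal sketch of the lock-then-deploy flow, assuming hypothetical
# names throughout. The real Instaploy stores the deployment history
# in a database table rather than in memory.
class Stack
  DeployError = Class.new(StandardError)

  attr_reader :name, :locked_by, :deployments

  def initialize(name)
    @name = name
    @locked_by = nil
    @deployments = [] # stand-in for the deployments table
  end

  # Step 1: a deployer locks the stack so nobody else can deploy.
  def lock!(user)
    if @locked_by && @locked_by != user
      raise DeployError, "locked by #{@locked_by}"
    end
    @locked_by = user
  end

  # Step 2: once dependencies are satisfied, push the changeset out.
  def deploy!(user, changeset_id)
    raise DeployError, "#{user} does not hold the lock" unless @locked_by == user
    # In the real app this is where the changeset is pushed to every
    # production machine in the cluster.
    @deployments << { changeset: changeset_id, by: user, at: Time.now }
    @locked_by = nil # deploying releases the lock
    changeset_id
  end

  def last_deployed
    @deployments.last && @deployments.last[:changeset]
  end
end
```

Holding the lock for the whole dependency-upload window is what prevents two people from interleaving half-finished deployments on the same stack.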
Sounds simple, but it is complicated by the fact that we are working in a Windows environment. A common way to perform the actual deployment on *nix systems is to use an automation tool to SSH onto each machine and pull the new code in from the master repository. This doesn’t work for us, so we needed an alternative. We ended up using PsExec – a Windows tool from the SysInternals suite – to execute a remote process on each server. This process simply executes a batch script which:
- Opens `cmd.exe` to get a terminal session
- Changes to the webroot
- Runs `hg pull -r <changeset_id>` to get the deployed change into the repository on the server
- Runs `hg up -r <changeset_id>` to get the working copy reflecting the same change
- Then executes any other required task on the remote server, such as touching deployment descriptors etc.
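Conceptually, the fan-out looks something like the following sketch. The server names, script path and method names are hypothetical illustrations – only the shape of the PsExec invocation and the `hg` steps come from the process above:

```ruby
# Sketch: build the PsExec command line that runs the deploy batch
# script on each production server. SERVERS, DEPLOY_SCRIPT and the
# method names are hypothetical, not Instaploy's real configuration.
SERVERS = %w[web01 web02 web03]
DEPLOY_SCRIPT = 'C:\deploy\deploy.bat' # the script does: cd webroot; hg pull; hg up

def psexec_command(server, changeset_id)
  # PsExec starts a process on the remote machine; here the batch
  # script receives the changeset id as its first argument.
  "psexec \\\\#{server} cmd /c #{DEPLOY_SCRIPT} #{changeset_id}"
end

def deploy_to_all(changeset_id)
  # In the real app each command would be executed (in parallel) and
  # the exit codes checked before recording the deployment as done.
  SERVERS.map { |s| psexec_command(s, changeset_id) }
end
```

Pinning both `hg pull` and `hg up` to the same changeset ID is what makes the deploy deterministic: every server ends up with an identical working copy regardless of what else has landed in the repository since.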
We’re still honing the process, but we are deploying to ColdFusion and JRuby stacks at the moment, each of which requires different post-deploy tasks. Overall it’s made a massive difference to how the team works, and deployment is no longer the error-prone chore it used to be.
It’s pretty much all good
- We know exactly what code is actually on the live servers, thus negating the problem of long-forgotten code that might be a security risk.
- We know exactly who deployed it, when, and what was in it, simply by checking Mercurial for the deployed changeset ID.
- We can rollback to any previous revision at the click of a button.
- We can deploy faster and more accurately than ever before – a real benefit when reacting to emergencies at 3am.
So a big thanks to Etsy for showing us the way; it just shows the amazing benefit of sharing ideas like this with the broader development community. Long may it continue.