Delegating Nicely

Usually we use delegation to help separate responsibilities and avoid mashing all responsibility and logic into the one object. This is a great goal, and it's the first one SOLID lists: the Single Responsibility Principle. For those who might not have seen a concrete example of delegation, let me run through a recent one I worked on.

At Zendesk we have a public API client Ruby gem for talking to the Zendesk API. Since we are building out the Voice parts of this API, we had to add a few resources, such as PhoneNumber, to the gem.

We decided it would be nice to add a voice qualifier, or namespace, to the gem so that we could group all the voice resources together. The main client class uses method_missing to work out which resource class to create, so we needed a way for that lookup to work whether you called client.tickets.first or client.voice.phone_numbers.first.

The solution was to use Ruby's SimpleDelegator class to proxy all method calls made on voice back onto self, something like this:


require 'delegate' # SimpleDelegator lives in Ruby's standard 'delegate' library

module ZendeskAPI
  class Delegator < SimpleDelegator; end

  class Client
    def voice
      Delegator.new(self)
    end
  end
end
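
In use, anything called on client.voice is proxied straight back onto the client, so it ends up in the same method_missing resource lookup as a direct call would. A quick usage sketch (the lookup itself belongs to the gem and isn't shown above):

client = ZendeskAPI::Client.new
client.tickets.first               # handled by the client's method_missing
client.voice.phone_numbers.first   # proxied back to the same client, same lookup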

That’s pretty much as simple a delegator as you can get; it works well out of the box and got the job done very effectively. Hopefully this will give others some inspiration on how to start delegating nicely.


Hacker News – In your Terminal

Over the Christmas period I found time to write a Ruby Gem called hacker_term which I’ve just published to RubyGems.


It allows you to see a list of the front-page HN stories, sort them by title, score and number of comments, select a particular story using the arrow keys, and launch it in your default browser.

You can find out more about the code – and the gems it uses under the hood – on the GitHub page.
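
If you fancy trying it, installing from RubyGems should be all you need (I'm assuming here that the executable shares the gem's name):

gem install hacker_term
hacker_term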

I’m sure this has been done to death before, but this incarnation was created to scratch a programming itch I'd had since reading about Etsy's mctop gem – an example of Etsy once again providing inspiration.


Vagrant

A classic problem

In my latest job I’ve been tasked with developing processes and tools for a large-ish development team (30+). I knew within a week of starting that there was one major problem we had to solve quickly: the ‘production mismatch’ issue.

Local development environments were, in almost all cases, different not only from the test and production environments but also from each other. It wasn’t very pretty, but it was eminently solvable.

One to rule them all

We needed to ensure that all developers ran environments that were the same as production in every practicable way. This meant:

  • The same version of CentOS
  • The same version of PHP
  • The same version of Apache
  • The same version of Memcached
  • The same version of MySQL
  • The same Apache modules
  • The same webroot
  • The same folder permissions
  • The same log directories
  • The same. The same. The same.

One issue we found with matching production was that often in production different functions reside on different machines, e.g. Memcached and MySQL were shared instances on different VMs from the Apache app servers. We accepted that and moved on; we couldn’t split those out for each developer.

Let's take some notes

First we had to audit the live servers, and even that exercise threw up some surprises. When you start asking around you realise that some VMs differ for all sorts of reasons. After agreeing with Ops how things should look in production, we settled on the software versions to install locally.

Next we had to make some decisions on whether we should install mod_ssl on localhost etc. and go to the bother of organising certs. For our first iteration we decided against that.

Using Vagrant

So how do we ensure that every developer (working on a Windows, Ubuntu or Mac machine) is able to run the same operating system? Enter Vagrant.

Vagrant – a VM configuration and management system based on VirtualBox – is a way to encapsulate an environment, provision it with the correct versions of all the required software, and then distribute it.

  1. Create environment
  2. Provision
  3. Distribute
  4. Provision
  5. Distribute

Etc.

Provisioning can be done using Puppet or Chef; both are tools to manage what software and versions should be installed on a particular environment, based on a manifest or recipe.

Or if you prefer, you can provision manually. This was actually something we had to deal with, as there were custom builds of certain packages from a private repository. It also allowed us to avoid learning Puppet or Chef to begin with, at the obvious cost of having to tweak and redistribute builds by hand. We are aware of the trade-off and we hope to bring in Puppet in phase two.
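
For reference, the Vagrantfile itself is tiny. A bare-bones sketch in the Vagrant 1.x syntax of the time looks something like this – the box name and port are placeholders, and the commented-out Puppet block is where the phase-two provisioning would slot in:

Vagrant::Config.run do |config|
  # Base box built to mirror the production CentOS image (placeholder name)
  config.vm.box = "centos-prod-base"

  # Expose the guest's Apache on the host for local browsing
  config.vm.forward_port 80, 8080

  # Phase two: swap the manual steps for a Puppet manifest
  # config.vm.provision :puppet do |puppet|
  #   puppet.manifests_path = "manifests"
  #   puppet.manifest_file  = "webserver.pp"
  # end
end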

It’s important that you have someone from Ops, or a DevOps-type person, to help provision the box. It can get tricky, and you need to get your hands dirty at times. Luckily we had Rafał.

Roll-out

We had new machines arriving for a good chunk of the developers, so we targeted that as the optimum time to roll out Vagrant.

It was a success – setup was reduced from days (literally!) to an hour or two. We hope to reduce that further by incorporating feedback from the first rollout. We now have parity with production and we can scale this as the team grows.

Vagrant is a great project – and I think a real ambassador for open source and how it can help a business in a very practical way. It’s also worth noting that Vagrant is worthwhile for any team size, or indeed for a single developer working on many projects.

Next step: let’s get that Puppet server set up!

 


Removing most – not all! – gems on Windows using PowerShell

We’ve started using bundle package to package all our gems prior to deployment. But before I push the code up to take care of this, I need to clean out all the globally installed gems on each production machine – while keeping the bundler gem, the rack gem and any torquebox gems.

There were a few ways to do this on *nix boxes, but this great tutorial got me most of the way using PowerShell. The only problem was that it removed all gems, and I wanted to keep some. So I adjusted that script to exclude any gems matching “bundler”, “rack” or “torquebox”.

So here it is for my own reference and because perhaps someone else will find it useful:

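# List installed gems, take just the name column, skip anything matching bundler/rack/torquebox, then force-uninstall the rest (all versions, including executables, ignoring dependencies)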
gem list | %{$_.split(' ')[0]} | where {($_ -notmatch "torque") -and ($_ -notmatch "bundler") -and ($_ -notmatch "rack")} | %{ gem uninstall -Iax $_ }

Simply adjust it as required and run it in a PowerShell console.


Using RSpec Mocks outside of tests

When I started mocking objects using RSpec Mocks, I never thought I’d end up using them outside of actual unit testing.

But this week, while working on a new internal gem to build out how we register documents, I found myself wanting to get some early acceptance testing from the end users. To do this I built a throw-away Sinatra application that took some user input. I then used this input to mock the return values of one of the methods in my actual class instance. This allowed my users to test all sorts of different scenarios, even ones I may have missed in my unit tests.

To use this yourself, you just need to include the following line to use RSpec Mocks standalone in your application:

require 'rspec/mocks/standalone'

Then just mock your instance as before:


#stub method . . .
obj = MyObject.new
obj.stub(:method_name => params[:key]) #user-entered values returned

#. . . and call
obj.method_name
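
Wired into the throw-away Sinatra app, the whole thing boiled down to something like the sketch below. The class, route and parameter names here are made up for illustration, and it assumes the RSpec 2-era stub syntax used above:

require 'sinatra'
require 'rspec/mocks/standalone'

# Hypothetical stand-in for the real class from our internal gem
class DocumentRegistration
  def reference
    "value normally produced by the real lookup"
  end
end

post '/simulate' do
  registration = DocumentRegistration.new
  # Return whatever the tester typed into the form instead of the real value
  registration.stub(:reference => params[:reference])
  "This scenario would register with reference: #{registration.reference}"
end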

You can find more details here.


How Etsy Helped us Deploy

A Checkered Past

When our company was founded – long before the advent of GitHub or other cloud-based social coding hubs – we used to deploy our new code to the website by copying files over the network. And in ten years or so, not much changed.

We ‘upgraded’ the process to allow us to upload more than one file at a time, but we still had to choose each file from a list of thousands, and you can bet we occasionally made mistakes. When we added more servers, we added functionality to upload to each in parallel, but we still found we sometimes missed files, or uploaded dependencies in the wrong order. This nagged at us as a team – but we focused on other, more important, issues, such as scaling and new features.

We accepted this as the status quo until a few years ago, when we read about how Etsy deploy using their Deployinator application. We realised then that we could make our deployment process better – way better. We too could have a ‘one click’ deploy! This is the short story of how we built an application to do it.

Getting our ducks in a row with Mercurial

The first thing we decided to do was move from SVN to Mercurial. This was something we had in the pipeline anyway, as we wanted to leverage more of the DVCS features, such as easy branching. Once we made that step, we created one golden rule: anything pushed to master could go live at any time.

We knew the next step: we would purge all the existing code on our servers; all the code that we had uploaded but had no easy way to delete; all the code uploaded to the incorrect location. We would then clone each web server from the clean main Mercurial repository. But before we did that, we had to have an automatic and fast way to deploy a given changeset to each server. So we took a step back to think about how we were going to build a web application to allow us to ‘click to deploy’.

Instaploy

Since, as an organisation, we are moving from ColdFusion to JRuby, we decided to build a Sinatra web application using some custom Mercurial command line wrapper classes to serve as our deployment application. We used Twitter Bootstrap to make it look pretty. But we needed a name for the application, and thus Instaploy was born – a web application that deploys instantly(ish).
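
Those wrapper classes are nothing exotic; conceptually they boil down to something like this (a sketch with made-up names, not Instaploy's actual code):

class HgRepo
  def initialize(path)
    @path = path
  end

  # Pull a specific changeset from the master repository
  def pull(changeset_id)
    hg "pull -r #{changeset_id}"
  end

  # Move the working copy to that changeset
  def update(changeset_id)
    hg "up -r #{changeset_id}"
  end

  private

  def hg(args)
    Dir.chdir(@path) { `hg #{args}` }
  end
end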

Instaploy works as follows:

  • We break down our applications into ‘stacks’.
  • Each stack works from a different Mercurial repository, and Instaploy has a set of local repositories cloned from master so it has the big picture.
  • Each stack has different pre-deploy and post-deploy hooks, or ‘house-keeping tasks’ as we called them.
  • When someone wants to deploy their code they:
    • Push to master (default branch)
    • Log into Instaploy
    • They then see the last change deployed, and any pending deployments; we just keep a list of completed deployments in a database table.
    • They click to deploy their changeset and they see a summary of their changes (files changed etc.) and a button asking them if they want to ‘lock’ the stack.
    • The application also determines, from the code paths in the changeset, whether any housekeeping tasks are required post-deploy, such as application reloads for important cached data. These promises – to be fulfilled post-deploy – are displayed to the user.
    • If they choose to proceed, the stack is locked, thus preventing anyone else from deploying until they are done.
    • During this period, they are able to upload any dependencies their code might require, such as database changes etc.
    • Once they have satisfied any dependencies, they click the ‘Deploy Now’ button and, after about 30 seconds of waiting, their changesets are ‘pushed’ to all production machines in the cluster.

PsExec

Sounds simple, but it is complicated by the fact that we are working in a Windows environment. On *nix systems the usual method is to use an automation tool to SSH onto each machine and pull the new code in from the master repository. This doesn’t work for us, so we needed an alternative. We ended up using PsExec – a Windows tool from the SysInternals suite – to execute a remote process on each server. That process simply runs a batch script (sketched after the list below) which:

  • Starts cmd.exe to get a terminal session
  • Changes to the webroot
  • Runs hg pull -r <changeset_id> to get the deployed change into the repository on the server.
  • Runs hg up -r <changeset_id> to get the working copy reflecting the same change.
  • Then executes any other required task on the remote server, such as touching deployment descriptors etc.
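
The remote kick-off itself is a single PsExec call per server, roughly along these lines (the server name and script path are illustrative):

psexec \\web01 cmd /c C:\deploy\deploy_changeset.bat <changeset_id>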

We’re still honing the process, but we are deploying to ColdFusion and JRuby stacks at the moment, each of which requires different post-deploy tasks. Overall it’s made a massive difference to how the team works, and deployment is no longer the error-prone chore it used to be.

It’s pretty much all good

  • We know exactly what code is actually on the live servers, thus negating the problem of long-forgotten code that might be a security risk.
  • We know exactly who deployed it, when, and what was in it, simply by checking Mercurial for the deployed changeset ID.
  • We can rollback to any previous revision at the click of a button.
  • We can deploy faster and more accurately than ever before – a real benefit when reacting to emergencies at 3am.

So a big thanks to Etsy for showing us the way; it just shows the amazing benefit of sharing ideas like this with the broader development community. Long may it continue.


Consider Mercurial (Yes, Mercurial, not Git)

Rationale

Mercurial is the third version control system I’ve used since I started programming professionally. The first was SourceSafe. The second was Subversion. And now we’ve migrated all our source code to Mercurial.

Often I’ll be asked: why did you not pick Git?

That’s a fair question. Everyone seems to love Git these days. But more precisely, I think everyone loves GitHub. I love GitHub too – I use it for all my personal projects – but in terms of how I manage code for my employer, things are a little different. I can’t justify moving 15 years’ worth of intellectual property to GitHub just yet. Having our entire code base in the cloud is a hard sell to management, and I have plenty of low-hanging fish to fry (mixed metaphors intended) without putting energy into that. Of course, we could have taken a look at GitHub Enterprise, but there was no budget for this move.

But there is another reason why we can’t use Git effectively: Windows. We are using Windows 7 for day-to-day work and that is unlikely to change in the medium term. (I actually don’t mind Windows 7. I own a Mac and I use it for my personal projects; I’ve dabbled with Ubuntu. But getting my team kitted out with *nix systems is another challenge I’ll put aside for now.)

The current status quo as regards using Git with Windows is – unsurprisingly – Git for Windows, which provides a *nix-style shell for working with Git from the command line. This isn’t bad, but I dislike the idea of adopting a new technology on a platform that isn’t even in the thoughts of the tool maintainers.

But we still wanted distributed version control. We were stuck with Windows and we didn’t have the lure of GitHub to look forward to. We still wanted that ‘branch-y’ type of development. For these reasons, we found ourselves looking ever closer at Mercurial.

Mercurial Works

Mercurial just works on Windows. The definitive Mercurial book (written by Bryan O’Sullivan – a fellow Irishman) is concise yet exhaustive. Better yet, if you want that warm fuzzy feeling a GUI gives you, look no further than TortoiseHg – a top-quality tool. That said, we trained the team up using Mercurial on the command line; I really wanted everyone to understand how their new tool worked and how it was different from SVN, and I’m happy we took that approach.

Lessons

Our Mercurial move had a steep learning curve, especially for those developers who had no exposure to Git or any other DVCS. We had to plan the move carefully to avoid a change-over meltdown, even for our small team of seven. There are broad similarities between Git and Hg, but the syntax can be confusingly different, especially when the SVN mental model is still stuck in place.

The first day saw a lot of abandoned branches, incorrect merges, and a general realisation that understanding the tools you use every day is going to be paramount in this brave new world.

It also quickly became apparent that the ‘branch-y’ model we were adopting could get very complicated once you are working on something more complex than a micro-blog. Branches needed to be shared, and they needed to be merged. Working copies on developer and test machines had to represent particular branches, and all of this had to be managed. I think it is fair to say that some amount of complexity has been created, and that is the cost of the amazing flexibility we now have.

Our Workflow

Here are some bullet points describing our Mercurial workflow (a command-line sketch follows the list):

  • We work from a central repo called ‘master’. All dev repos are cloned from this.
  • Developers create named feature branches (not bookmarks) for everything except the simplest changes. These branch from the default branch.
  • Developers prefix named branches with their initials, and then close all branches once they’re finished with them to avoid cluttering up the global branch list.
  • Where shared work is required, a developer can push branches they wish to share to ‘master’, and other devs can then pull those branches.
  • We have many local websites. We generally have a Site X cloned from ‘master’ for general dev. We then have Sites A to C – each of which is cloned from Site X – for branch testing. Using this approach, a developer can create a feature branch on Site X, make their changes and commit them. They can then pull the changes from Site X into Site A and update the working copy to that branch. They then tell the user to test away on Site A while they get back to Site X and branch their next feature. And on and on.
  • Once testing is complete, the developer can create a code review from the diff between the feature branch and the default branch (having of course merged default into the feature branch first). Once complete, we can simply merge our feature branch into default and push to ‘master’ so everyone else gets our tested and reviewed changes.
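
In day-to-day terms, a single feature cycle looks roughly like this (the initials and branch name are made up):

hg pull && hg update default
hg branch ab_fancy-feature           # named branch, prefixed with the developer's initials
hg commit -m "Build the fancy feature"
hg push --new-branch                 # share it via 'master' if a test site or another dev needs it

hg merge default                     # bring the branch up to date before review
hg commit -m "Merge default into ab_fancy-feature"

hg commit --close-branch -m "Close ab_fancy-feature"
hg update default
hg merge ab_fancy-feature            # reviewed and tested: fold it back into default
hg commit -m "Merge ab_fancy-feature into default"
hg push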

hg ci -m "Wrapping up."

So that is how we use Mercurial in a nutshell. It is complex; way more complex than SVN. You need to know how Mercurial actually works to get the best out of it, whereas previously you could get away with not really engaging with SVN at all, simply because it was so simple.

But Mercurial is very flexible. And while not bullet-proof, with a decent dollop of inter-developer communication we’re making it work well for us. So if you need a DVCS, and especially if you are on Windows – consider Mercurial.
