This post was written by Tom Hallam, one of Mubaloo’s web developers. Image credit: Docker.
As part of the web service team here at Mubaloo, we’ve noticed the concept of “containerising” websites and services getting a lot of attention lately. So we thought we’d explain what it’s all about and how we’re using it to make our internal and client projects more secure and portable.
We’ll start with the concept of virtualisation. Looking back at typical IT infrastructure five years ago, most apps and websites were hosted on physical, dedicated servers in a datacenter somewhere, usually owned by a single organisation. They were costly, and companies were stuck with the operating system the provider had chosen to install on the machine.
Nowadays, most providers run a number of high-powered physical servers designed to host many virtual servers within them (otherwise known as the cloud). These host machines run software called a “hypervisor”, which essentially tricks each “guest” into thinking it’s running on its own server, when in reality it’s only taking up a portion of the host’s resources.
This is a massive advantage to developers and other stakeholders: virtual machines are generally far cheaper than a whole server, can usually be scaled to meet demand relatively easily, and can run any number of “guest” operating systems, including Ubuntu, CentOS or Windows Server, to name but a few. This is what we mean by “server virtualisation”.
But virtualisation isn’t always enough. In an agency environment we run several back-end systems for many different clients, and there are a few things that we need to be absolutely sure of:
- These back-end systems cannot interact with each other in any way (unless of course there is a valid business case for such interaction)
- These back-end systems must be secure and expose a minimal attack surface area to the outside world
- These back-end systems should be repeatable. Once we define how an app should work and the services it requires (databases, push notification servers, caches), we need to be able to repeat that entire set-up across different servers or development machines, so we don’t end up with situations where something works on one machine but not on another
- These back-end systems should only go live once thoroughly tested. This one is obvious really. We need to be completely satisfied that our tests for the app code pass before the code goes into a production environment
We can satisfy those requirements with app containerisation. In the last year the technique has really taken off and become a major talking point within the industry.
Here at Mubaloo we’ve created a container hosting system based on the popular container system Docker. We use another open source project called Dokku to manage the provisioning of apps from within our development toolkit. Dokku uses Buildpacks, collections of small scripts, to figure out what kind of app is being provisioned and package it into a format that can be run easily within a container.
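To give a flavour of how that detection works, here is a minimal standalone sketch of a buildpack-style “detect” step. Real buildpacks split this across scripts such as `bin/detect`; the function name and file checks below are purely illustrative, not Dokku’s actual code:

```shell
# Illustrative sketch of a buildpack-style "detect" step: look for a
# telltale file in the pushed code to decide what kind of app it is.
detect_app() {
  if   [ -f "$1/package.json" ];     then echo "Node.js"
  elif [ -f "$1/requirements.txt" ]; then echo "Python"
  elif [ -f "$1/composer.json" ];    then echo "PHP"
  else echo "unknown"; return 1
  fi
}

app_dir=$(mktemp -d)        # stand-in for a freshly pushed app
touch "$app_dir/package.json"
detect_app "$app_dir"       # prints "Node.js"
```

Once the app type is known, the matching buildpack takes over and installs the right runtime and dependencies.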
Our process is this:
- Reach a point within the development process where we’re happy to deploy the app
- Ensure our tests cover the app
- Tag our build and move the code into the production branch
- Push our code to the server using “git push”
- The container server receives the code and figures out what we’ve just deployed
- The container server installs all of our dependencies automatically
- Then it runs our tests. If they fail, the code will never make it into production
- A brand new, totally self-contained app container is created, a proxy web server is pointed at the container, and we’re told what URL we can access it from.
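The tag-and-push half of that process can be sketched end-to-end in a few commands. A local bare repository stands in for the container server here; in a real set-up the “dokku” remote would point at the server itself, so the remote name and paths below are assumptions:

```shell
set -e
work=$(mktemp -d)

# A bare repository standing in for the container server's git endpoint.
git init -q --bare "$work/server.git"

# Our app repository, with the bare repo added as the "dokku" remote.
git init -q "$work/app"
cd "$work/app"
git remote add dokku "$work/server.git"

echo "web: node index.js" > Procfile   # tells the buildpack how to run the app
git add Procfile
git -c user.name=dev -c user.email=dev@example.com commit -qm "release build"

git tag v1.0.0                   # tag the build we're happy with
git checkout -qb production      # move the code into the production branch
git push -q dokku production     # on a real server, this push kicks off the build
```

On the server side, a git hook receives that push and hands the code to the buildpack, which is where the dependency installation and test run happen.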
The main benefit of taking this approach is that, instead of having to upload our files through FTP (and risk overwriting someone else’s changes), we have a completely versioned copy of the app code, which warns us if we’re about to overwrite or change functionality without realising.
The other advantage is that, thanks to Docker, the app environment is completely self-contained. It exposes only one port, the web server port, so attackers can’t target vulnerabilities in other services that would otherwise be listening on open ports. In addition, the “image” that is created can be downloaded and used anywhere.
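As a sketch of what such a self-contained image might look like, here is a hypothetical Dockerfile for a small Node.js app. The base image, port number and start command are all assumptions for illustration; the point is that the app declares a single web port, and nothing else inside the container ever gets published to the outside world:

```dockerfile
# Hypothetical image definition: everything the app needs is baked in,
# and only the web server port is declared.
FROM node:0.10
COPY . /app
WORKDIR /app
RUN npm install
EXPOSE 5000                 # the single web server port
CMD ["node", "index.js"]
```

Databases, caches and other supporting services run in their own containers and are reachable only by the app, never directly from outside.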
If a developer or someone else wanted to test our code locally but had no idea how to set up and configure a web server, we could easily provide this image, and the system (along with a package called Vagrant) would set itself up for them. This is particularly useful when working with clients’ internal development teams.
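For that local-testing scenario, a minimal Vagrantfile sketch might look like this, using Vagrant’s built-in Docker provisioner. The box name, image name and ports are placeholders, not our actual images:

```ruby
# Hypothetical Vagrantfile: brings up a VM, installs Docker, pulls the
# app image and runs it, so a tester needs nothing but "vagrant up".
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"              # placeholder base box
  config.vm.network "forwarded_port", guest: 80, host: 8080

  config.vm.provision "docker" do |d|
    d.pull_images "example/client-app"           # placeholder image name
    d.run "example/client-app", args: "-p 80:5000"
  end
end
```

From the tester’s point of view, the app then just appears at http://localhost:8080 with no manual server configuration at all.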
Another strength is that, if configured correctly, this system can be repeated to create, say, ten copies of the same application container. These copies can then be “load balanced” across multiple datacenters, perhaps even in completely different countries, to ensure responsiveness and reliability.
The theory behind load balancing is simple: a centralised “proxy” server takes each request for a resource and decides which copy of the application to send it to. This means no single container gets overloaded with requests when the service gets busy.
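A minimal sketch of such a proxy, as an nginx configuration fragment, might look like this. The upstream addresses are placeholders for the app containers; by default nginx simply rotates requests between them round-robin:

```nginx
# Hypothetical load-balancing proxy: each incoming request is handed to
# one of several identical app containers.
upstream app_containers {
    server 10.0.0.2:5000;   # container copy 1
    server 10.0.0.3:5000;   # container copy 2, perhaps in another datacenter
}

server {
    listen 80;
    location / {
        proxy_pass http://app_containers;
    }
}
```

If one container fails or is taken down for a new deployment, the proxy can simply stop sending it traffic, which is a large part of how the reliability gain is realised.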
Thanks for reading; hopefully you learnt something about what containerisation is and why it’s massively beneficial for teams such as ours. If you’d like to discuss this article at all, please don’t hesitate to contact me through Mubaloo or using Twitter @tomhallam.