Best Practices: Deploying Node.js Applications

Considerations regarding the performance, scale and availability of applications built using Node.js sometimes fall by the wayside. Here are some rules we stick by to ensure that our applications consistently deliver.

At Mubaloo we have fully adopted JavaScript into our development stack, ranging from slick, responsive wrapped web applications built using Angular.js or Backbone, all the way through to our best-in-class, bespoke API and CMS offerings, based around Node.js.

Using JavaScript throughout our web operations is something that, until recently, wasn’t possible. Across the industry it’s very hard to find another stack like JavaScript, especially one that has such a vibrant community of contributors and such a low barrier to entry for new programmers. What this means in practice is that we can write code for the “front end” of an application that looks very similar (sometimes it’s identical!) to the code we write on the “back end” – something that, even five years ago, generally meant two completely separate programming languages, each with their own methods and gotchas.

We love Node.js, and we’re very pleased to offer it to our clients as our solution of choice. However, because Node is relatively new and sometimes viewed as a shiny toy, I regularly notice that considerations regarding the performance, scale and availability of applications built using it sometimes fall by the wayside.

Here are some rules we stick by to ensure that our applications consistently deliver.

Please note: This article assumes an understanding of Node.js and developing web applications using it.

Don’t directly expose Node to web traffic

The problem:
It’s very easy to fall into this trap – you see a package like Express and think “Awesome! Let’s get started” – you code away and end up with an application that does what you want. This is excellent and, to be honest, you’ve won a lot of the battle. However, you will lose the war if you upload your app to a server and have it listen on your HTTP port, because you’ve forgotten a very crucial thing: Node is not a web server.

As soon as any volume of traffic starts to hit your application, you’ll notice that things start to go wrong: connections are dropped, assets stop being served or, at the very worst, your server crashes. What you’re doing is attempting to have Node deal with all of the complicated things that a proven web server does really well. Why reinvent the wheel?

The solution:
Use a real web server! I’ve used both Apache and nginx for real-world Node applications, with my preference being the latter. What they do really well is proxy connections to your application (or a group of clustered applications – more on this later), freeing up your Node application to do what it does best – speak to databases, handle your business logic and generate responses.

If you’d like to learn more about setting up a web server to proxy to a Node instance, there are plenty of tutorials covering both Apache and nginx.
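For illustration, a minimal nginx reverse-proxy configuration might look like the sketch below. The upstream name, domain and port are assumptions for the example – adjust them to match your own setup:

```nginx
# Pass all incoming HTTP traffic to a Node app listening on port 3000.
upstream node_app {
    server 127.0.0.1:3000;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://node_app;
        # Forward the original host and client IP to the Node app.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

With this in place, nginx handles the raw HTTP traffic and simply hands clean requests off to your application.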


Which leads me on to my next point:

Don’t use Node to serve static assets

The problem:
You set up a new Node app, then add a route or mount-point to serve your static assets like client-side JS, CSS, fonts or images.

This is not good, because again Node doesn’t do this efficiently. Say we ask for a reasonably large image file and serve it from an Express mount-point: this works when we’re developing locally, because it’s just us using it. No real load.

But let’s think of this from a flow perspective:

  • Request the file /catinabox.jpg
  • Express checks the file system to see if it exists
  • Express reads the file into memory using fs.createReadStream() or a similar variant
  • Express then has to figure out what type of file it is to set the appropriate response headers
  • It also needs to work out caching behaviour and ETag headers for the response
  • Express then writes the file back to the response stream

This is just for one request, for one image – and bear in mind this is memory and CPU time that your application could be using for important work like reading from a database or handling complicated logic. Why cripple your application for the sake of convenience?

The solution(s):
There are actually a couple of ways to combat this:

  • Configure your web server to serve static assets itself. There are lots of guides for using pattern matching for these assets, alongside an upstream proxy to your Node app
  • Don’t serve any assets locally unless you’re in development. In production, use a content delivery network (CDN) to disperse these assets across the world and have the network decide where to serve them from. This leads to a snappy, responsive application and next to no load on your application server.
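The first option can be sketched in nginx as below. The asset path, file extensions and cache lifetime are assumptions for the example:

```nginx
# Serve static assets straight from disk, bypassing Node entirely.
location ~* \.(js|css|png|jpg|jpeg|gif|svg|woff2?)$ {
    root /var/www/myapp/public;
    expires 30d;
    add_header Cache-Control "public";
}
```

Any request matching these extensions never touches your Node process at all.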

Don’t use ANY synchronous methods while serving requests

The problem:
If you come from a synchronous programming background it’s really easy to fall into the trap of using the *Sync methods like fs.readFileSync() for convenience. Node.js is an asynchronous environment and should be treated as one. Here’s why:

As soon as you start serving more than one request at a time (i.e. any real website), your code has to wait for any synchronous call to finish before the event loop can handle another request. This is a big problem. If, for example, you’re reading a file from the filesystem synchronously, no other request will be fulfilled until that read completes. Your beautiful, responsive application becomes a lumbering beast.

I’ve seen people fall into this trap and it’s so easy to fix!

The solution:
Just replace any *Sync method with its asynchronous version. For example, var fileContent = fs.readFileSync(path); becomes fs.readFile(path, function(err, fileContent) { … });

Your application will be responsive even if it has to deal with a massive file! Excellent.

Don’t expect Node to recover after an error

The problem:
If, like me, you came from a largely Apache/PHP background, this one will seem familiar. You write a PHP app and upload it to the server; any changes are generally instantly visible, with no configuration or starting and stopping of services. And if there’s an error in one part of the application, only that part of the application breaks.

Not so in Node.js – if something breaks – the whole service goes down!

But why?
Node doesn’t really react to errors very well – much like when you mess something up in your client-side JavaScript, nothing after the error gets executed. You just get an error in your console and a dent in your pride.

Node follows the same model, but in a server environment no one is there to refresh the page or respond to the issue. It just sits there, broken.

This is a huge problem for highly-available services like the ones we deploy. We can’t have that. Luckily there are some solutions. There are always solutions.

The solution(s):

  • Use a process supervisor like forever or supervisor. What these do is watch your Node application and, if they detect that an instance of the app is in an error state, they’ll restart it instantly, resulting in minimal downtime
  • Use Node’s built in cluster module to ‘spawn’ multiple versions of the same app that listen on the same socket. The ‘cluster master’ handles creating new instances (generally the rule of thumb is one instance for each physical CPU core on the server machine) and also takes care of restarting failed instances.

Read the Cluster documentation in the Node API for more information about how to get the latter solution to work: https://nodejs.org/api/cluster.html

Introduce instrumentation and logging straight away

The problem:
So we’ve installed a solution to keep our application up and serving requests – but how do we find out about errors? Why did it crash? How do we see what’s slowing things down?

The solution:
These questions can generally be answered by instrumentation and logging. In our applications we have an instrumentation class that times operations to show where the bottlenecks are, as well as providing in-depth logging that can be accessed through our logs interface.

I cannot stress enough how important it is to bake this into your application right from the get go. With this in place, you can rest easy knowing what’s causing issues. When building a component, ask yourself: “Will I need to know about this later? Can I decipher the response as successful or not? Is this going to be a bottleneck?”. Implement instrumentation from day one and you will know!
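As a minimal, hypothetical sketch of the idea (assuming an Express-style app; requestTimer and its log format are illustrative, not our actual instrumentation class):

```javascript
// A tiny request-timing middleware: logs the method, URL,
// status code and duration of every request it handles.
function requestTimer(req, res, next) {
  const start = Date.now();
  // 'finish' fires once the response has been fully sent.
  res.on('finish', function () {
    const ms = Date.now() - start;
    console.log(req.method + ' ' + req.url + ' ' +
      res.statusCode + ' ' + ms + 'ms');
  });
  next();
}

module.exports = requestTimer;
```

Mounted early with app.use(requestTimer), it gives you a per-request record you can later feed into whatever log store or alerting you choose.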

Alongside the reporting, we built a basic alerting system into our module that sends an email when certain constraints are met, or when there are requests that we deem failures. It could easily be extended using Twilio to send text messages or even call us! I hope to open source this for your own applications in the near future.

We utilise our own solution alongside a service such as New Relic, which allows us to get instant alerts when things go wrong. It also allows us to monitor our server performance, which tells us whether portions of our application are taking up too much memory. I’d recommend signing up for a trial; there’s a Node plugin that’s very easy to install.

We’re also evaluating Strongloop at the moment, which has its own monitoring solution that we hope to use in conjunction with our own software. It’s looking very promising – check it out!

Thanks for reading – hopefully you learnt something new today. Next time I’ll be talking about integrating these practices alongside a continuous deployment methodology. Follow me @tomhallam for more.

