
LookFar Labs, 29 October 2015

Using Docker-Compose Auto-Scaling to Scale Node.js Instances on a Single Machine

How Startups Can Stay Vertical

We work with a lot of startups at LookFar, and though their concerns vary, almost all of them share one trait: they may need to scale very quickly and very suddenly. Docker, or more specifically Docker-Compose, provides a solution with its scale command.

> docker-compose scale myservice=10

Running on a production instance of Odoo (it's Python, but the principle is the same)

This would spin up 10 instances of myservice. Awesome! They'll maintain their links to the other services, and docker-compose handles the naming. It's always easier to scale vertically before scaling horizontally, and given Node.js's single-threaded nature, growing vertically like this lets us scale a Node.js-driven service much higher on a single machine very easily. Better still, we won't be introducing a service-discovery layer, which would make the infrastructure complexity skyrocket.
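As a concrete sketch, a compose file for this pattern might look like the following (v1 format, as was current at the time; the service names, build paths, and port are illustrative, not from the post):

```yaml
# Minimal sketch of a docker-compose.yml for this pattern.
# "webservice" and port 8080 are hypothetical names.
webservice:
  build: ./webservice
  expose:
    - "8080"

nginx:
  build: ./nginx
  ports:
    - "80:80"
  links:
    - webservice
```

With this in place, `docker-compose scale webservice=10` runs ten containers of the `webservice` service, each reachable from the `nginx` container via its links.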

We frequently see teams designing for horizontal growth while still at a very early stage. There's some value in taking this route, but it's almost always more cost-effective to scale vertically until you run out of system resources. The drop in infrastructure complexity lets businesses redirect the man-hours they would have spent on infrastructure maintenance toward delivering more features and selling more product.

Services like ZooKeeper are critical to scaling, but scaling too early creates a ton of premature overhead

It's best to stay on a single machine (or at least a single machine per service, if a team is offloading database hosting or CPU-intensive tasks to another box) for as long as possible. EC2 offers a single instance with up to 40 vCPUs and 160GB of memory for less than $2k/mo on-demand. Sticking the whole workload on one machine removes the need to think about service-discovery layers, auto-provisioning, and auto-scaling, giving startups more time to devote to the many, many pressing tasks faced by growing businesses.

There's only one problem with our docker-compose vertical-scaling solution: we want Nginx to use all of these instances as HTTP upstreams. To do that, we'll have to rewrite the nginx.conf file after we scale. The easiest way is to have the Docker entry point run a script that detects the running, linked web service instances and injects their IPs into the conf file.
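For reference, what the script needs to produce is a plain upstream block. The sketch below shows the shape of the result; the upstream name, container IPs, and port are illustrative:

```nginx
# Illustrative nginx.conf fragment after scaling to three instances.
# Docker assigns the actual container IPs at runtime.
upstream webservice_upstream {
    server 172.17.0.2:8080;
    server 172.17.0.3:8080;
    server 172.17.0.4:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://webservice_upstream;
    }
}
```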

> docker-compose scale webservice=10

> docker-compose rm nginx

> docker-compose up

Just like this

Enter Bash!

The solution: spin up the service instances, then use a bash init script inside our Nginx container to detect the pattern docker-compose uses to name instances and inject the IPs and ports into the nginx.conf file before Nginx starts.
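One way to wire that up (the image tag and script name here are assumptions, not from the post) is to make the script the Nginx image's entry point:

```dockerfile
# Sketch: run the init script before nginx. The hypothetical
# inject-upstreams.sh rewrites the conf, then execs nginx in the
# foreground.
FROM nginx:1.9
COPY inject-upstreams.sh /inject-upstreams.sh
RUN chmod +x /inject-upstreams.sh
ENTRYPOINT ["/inject-upstreams.sh"]
```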

Docker-Compose creates services with the format {Host Machine Name}_{Service Name}_{Instance Number}; ecs14356_nodeservice_3, for example, would be the third instance of nodeservice running on the VM with hostname ecs14356. (Strictly speaking, the prefix is the Compose project name, which defaults to the name of the directory holding the compose file.) To get this parsing right, we do have to pass the host machine's hostname into the Nginx container via an environment variable (HOSTHOST in our case) from the docker-compose file.
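The detection step can be sketched as a small bash function. This is a hedged sketch of the idea, not the post's actual script: it assumes docker-compose has written /etc/hosts entries of the form `<ip> <prefix>_webservice_<n>`, with the prefix (the host machine's hostname, per the post) passed in via $HOSTHOST. The service name "webservice" and the port are illustrative.

```shell
#!/bin/bash
# Emit one nginx "server" line per linked webservice instance, by
# matching the docker-compose naming pattern {prefix}_webservice_{n}
# against the name column of an /etc/hosts-style file.
generate_upstreams() {
  local hosts_file="$1" prefix="$2" port="$3"
  awk -v pat="^${prefix}_webservice_[0-9]+$" -v port="$port" \
      '$2 ~ pat { print "    server " $1 ":" port ";" }' "$hosts_file"
}

# In a real entry point, something like this would wrap the output in an
# upstream block, write it where nginx.conf can include it, then start
# nginx in the foreground:
#   { echo "upstream webservice_upstream {";
#     generate_upstreams /etc/hosts "$HOSTHOST" 8080;
#     echo "}"; } > /etc/nginx/conf.d/upstreams.conf
#   exec nginx -g 'daemon off;'
```

The function stays pure (read a file, print lines), which makes it easy to test outside a container before baking it into the image.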

And there you have it! To see the process start-to-finish, watch this console demo, or just see the three files in all their glory on GitHub.

Written by

Sean Moore
