Friday, 16 October 2015

Microscaling-in-a-Box

We’ve just launched our Microscaling-in-a-Box tool and open sourced the code.

You can try it out at https://app.force12.io: just log in and run a few quick Docker commands. It should take less than 5 minutes if you already have Docker installed.

What is Microscaling?

Microscaling is our term for scaling containers in real time in response to current demand. We use it to distinguish this from traditional autoscaling, where capacity is added or removed in units of virtual machines.

Real-time scaling is possible because containers can start in seconds or less, whereas starting a virtual machine and joining it to a cluster takes minutes. That delay makes traditional autoscaling difficult and forces workarounds like scaling up quickly but scaling down slowly.

Microscaling-in-a-Box architecture


Our demo lets you experiment with microscaling using Docker on your local machine. We use the Docker Remote API as a simple single-node scheduler.
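To give a feel for what that involves, here's a minimal Go sketch (not the real Force12 agent code) that asks the Remote API to start an existing container by POSTing to the Docker daemon over its unix socket:

    package main

    import (
        "context"
        "fmt"
        "net"
        "net/http"
        "os"
    )

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: start <container-id>")
            os.Exit(1)
        }

        // Route every request to the Docker daemon's unix socket.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
                    return net.Dial("unix", "/var/run/docker.sock")
                },
            },
        }

        // The hostname "docker" is a placeholder; the dialer above ignores it.
        resp, err := client.Post("http://docker/containers/"+os.Args[1]+"/start", "application/json", nil)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer resp.Body.Close()
        fmt.Println("start", os.Args[1], "->", resp.Status)
    }

Stopping a container is the same pattern against the /containers/{id}/stop endpoint, so scaling a service up or down reduces to a handful of HTTP calls.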

The demo has 3 types of containers.

  • Force12 Agent - drives the scheduler, creates simulated randomized demand and reports the status of the demo to our API.
  • Priority 1 - a demonstration high priority app (e.g. a customer facing API).
  • Priority 2 - a demonstration low priority app (e.g. a worker process that can be interrupted).

For demo purposes, the Priority 1 and Priority 2 tasks both simply sleep.

Our tool lets you control the demo by configuring both the number of containers and parameters for the random demand.
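To make that concrete, here's a rough Go sketch of the kind of loop the demand simulation runs. The parameter and field names are illustrative, not the real Force12 configuration:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // DemoConfig mirrors the kind of parameters the demo exposes; the field
    // names are illustrative, not the real Force12 configuration.
    type DemoConfig struct {
        TotalContainers int           // fixed pool shared between the two priorities
        MaxDemand       int           // upper bound for the random priority 1 demand
        Interval        time.Duration // how often demand changes
    }

    func main() {
        cfg := DemoConfig{TotalContainers: 9, MaxDemand: 9, Interval: 5 * time.Second}
        for i := 0; i < 5; i++ {
            // Simulated demand for the high priority service.
            p1 := rand.Intn(cfg.MaxDemand + 1)
            if p1 > cfg.TotalContainers {
                p1 = cfg.TotalContainers
            }
            // The low priority service soaks up whatever capacity is left.
            p2 := cfg.TotalContainers - p1
            fmt.Printf("priority1=%d priority2=%d\n", p1, p2)
            time.Sleep(cfg.Interval)
        }
    }

In the real demo the two counts are then handed to the scheduler, which starts or stops containers to match.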

Microscaling micro-services

In a real-world implementation there will be more than two container types, each with its own relative priority. Each container type will be linked to real metrics, such as requests per second at a load balancer or the length of a message queue.
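As a sketch of what that linkage might look like, here's a toy Go rule that maps a queue length to a desired container count. The figure of one container per 50 queued messages is invented purely for illustration:

    package main

    import "fmt"

    // containersFor sketches how a real metric, such as the length of a
    // message queue, could be translated into a desired container count.
    // The rule of one container per 50 queued messages is made up.
    func containersFor(queueLen, min, max int) int {
        want := queueLen/50 + 1
        if want < min {
            want = min
        }
        if want > max {
            want = max
        }
        return want
    }

    func main() {
        for _, q := range []int{0, 80, 400} {
            fmt.Printf("queue=%d -> containers=%d\n", q, containersFor(q, 1, 5))
        }
    }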

Microscaling works well with a microservices approach. When an application is split into multiple services, some of them perform higher-priority, more time-critical work than others, and different services get busy at different times depending on the business tasks they handle.

At Force12 we’re focused on scaling containers rather than scheduling them. However, containers still need to run on a cluster, so a container scheduler is needed to provide functionality like fault tolerance by distributing containers across hosts.

There are many container schedulers out there and more being developed, so we’ve built demos of microscaling that integrate with the EC2 Container Service scheduler and the Marathon scheduler for Apache Mesos. We plan to support more, such as Kubernetes and Nomad, which was recently released by HashiCorp.

You can read more about microscaling and our Marathon integration in this interview we did with Daniel Bryant from InfoQ.

Now here are some of the technical details for the demo.

Force12 Agent

Our Force12 Agent is written in Go and packaged as a Docker image. We’re using Alpine Linux as the base image, a minimal Linux distribution that is only 5 MB in size. The agent is compiled as a static Go binary and added to the image, which keeps the final image down to 28 MB.
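The pattern is to compile the binary outside the image (for example with CGO_ENABLED=0 go build, so it doesn't depend on glibc) and copy it onto an Alpine base. A minimal sketch of such a Dockerfile (not our actual one, and the Alpine tag is just an example) looks like this:

    FROM alpine:3.2
    # Statically linked force12 binary, built on the host and copied in
    COPY force12 /force12
    ENTRYPOINT ["/force12"]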

We really like the Alpine approach: a minimal Linux distribution with a good package manager for installing extra packages as required. It’s far better than basing images on traditional Linux distributions like Debian or CentOS, which produces huge images of 600 MB or more, most of which is code that will never be used.

The Docker Remote API we’re using for basic scheduling is the same API the Docker client uses. We access it by mounting the host’s Docker socket inside the Force12 container, which means the demo app containers it creates are siblings of the Force12 container.
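Mounting the socket is just a volume mount when the agent container is started (the image name below is a placeholder, not necessarily the published one):

    docker run -d -v /var/run/docker.sock:/var/run/docker.sock force12io/force12-agent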

Originally we implemented the demo using DIND (Docker in Docker), with Docker Compose linking the DIND container to our Force12 container. In that setup our demo app containers were children of the DIND container. It worked, but DIND is mainly designed for testing Docker itself, so using it this way isn’t recommended and can lead to data corruption.

See this blog post from Jérôme Petazzoni on why not to use DIND.

Priority 1 & Priority 2 containers

The demo containers are also based on the Alpine Linux image. They simply run a bash script with an infinite loop, so the containers keep running until the Force12 agent stops them.
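The script is nothing more exotic than an endless loop along these lines (a sketch rather than the exact script):

    while true; do sleep 1; done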

app.force12.io

Server-side, we built a Ruby on Rails application that receives data from the Force12 agent, displays the demo visualization, and handles user login and signup.

Again we’re using Alpine Linux to keep our images small: the Rails app image is under 300 MB, which is pretty good given that it needs to include the Ruby interpreter and all the Rails gems. This blog post on minimal Ruby images with Alpine was very helpful in setting it up.

What’s Next

Now that we’ve open sourced the Force12 Agent, we have follow-up releases planned for the integrations we’ve already built for demos with the EC2 Container Service and Marathon schedulers. There are also many more schedulers we’re planning to integrate with: Kubernetes, Nomad ….

If you follow us on Twitter (we’re @force12io) you’ll hear as soon as they are available. Or you can influence our roadmap by telling us which integrations you’d like us to prioritize next!