(Originally written in 2015)
It’s 2015, and if you haven’t Docker-ized your entire infrastructure by now, you’re doing it wrong! lol. (That is seriously a joke.)
Well, believe it or not, Docker is still really new, and not a lot of people have a good understanding of how it works. There can be a lot of moving parts and a learning curve, depending on your engineering and operational experience.
Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries — anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.
The Docker Ecosystem
There are three major parts to the Docker ecosystem:
- The Docker host, which builds and runs containers
- The Docker client, which instructs the host what to do (execute, run, daemonize, etc.)
- The Docker registry, for storing your built images
How do they work?
In the simplest summary possible, your Docker client instructs the Docker host to build an image to the specifications provided in a Dockerfile. Once this image is built, you can run it as a container, execute commands inside it, or push it to a registry.
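As a rough sketch of that build, run, and push cycle (the image name, port, and registry address below are made up for illustration; they are not from the original article):

# Build an image from the Dockerfile in the current directory
docker build -t myapp .

# Run it as a container, mapping a port from the host
docker run -d -p 8383:8383 myapp

# Tag and push the image to a registry so other hosts can pull it
docker tag myapp registry.example.com/myapp:1.0
docker push registry.example.com/myapp:1.0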
A registry is a data store for images. It stores each image as a set of layers, which makes incremental builds more manageable and decreases the size of pulls across the wire.
This means that, instead of waiting for your Ops team to rebuild an entire environment, you can pull a pre-built image, and only the layers that have changed since your last pull come across the wire. Once your application is made up of several of these containers, though, they need a way to find and talk to one another.
Luckily, instead of manually linking all these containers together, you can use a tool called Docker Compose.
Getting Started with Docker Compose
Docker Compose allows you to orchestrate all of the moving parts of your app from a single YAML configuration file. This means that, instead of running one node with ElasticSearch, one node with Redis, and one node with a Rails app, you can run all of these services in a contained, organized fashion on the same node. Yes, there are times when these should be set up in HA configurations or spread across nodes for fault tolerance, but that is outside the scope of this article.
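If Compose isn't installed yet, one common way to get it (an aside, not from the original article) is via pip, assuming Python and pip are already on the host:

# Install docker-compose (it is also distributed as a standalone binary)
pip install docker-compose

# Confirm it is on the PATH
docker-compose --version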
For example, let's use ElasticSearch, a Go app, and an nginx proxy. The nginx proxy will receive the inbound HTTP/S connections and proxy them to the Go app, which uses ElasticSearch as its backend datastore.
Building an Example
Let's start with the following docker-compose.yml:

# Runs an NGINX proxy
proxy:
  build: "proxy/"
  ports:
    - "80:80"
    - "443:443"
  volumes_from:
    - web
  links:
    - web
  labels:
    - "name=nginx-proxy"

# Runs the webapp
web:
  build: "web/"
  ports:
    - "8383:8383"
  volumes:
    - /tmp:/tmp
  links:
    - elasticsearch
  labels:
    - "name=webapp"

# Runs an elasticsearch instance for storing data
elasticsearch:
  image: "elasticsearch:latest"
  volumes:
    - /var/data/es:/usr/share/elasticsearch/data
  ports:
    - "9200:9200"
    - "9300:9300"
For the sake of brevity, assume web/ and proxy/ each contain their own Dockerfile, customized to your needs, each with its own magic. The pieces of configuration to notice, however, are the links and volumes_from directives.
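As a rough sketch of how the links directive gets used inside proxy/ (the file name and exact nginx setup are assumptions, not part of the original example), the link makes the web container reachable by its service name, so the proxy can forward traffic to it by hostname:

# proxy/nginx.conf (sketch) - "web" resolves because of the link in docker-compose.yml
server {
    listen 80;

    location / {
        # Forward inbound requests to the Go app listening on 8383
        proxy_pass http://web:8383;
    }
}

Similarly, volumes_from: web gives the proxy container access to whatever the web container mounts (here, /tmp), which is handy if, say, the app drops a unix socket or static assets there for the proxy to pick up.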
With all the configuration in place, you can run a single command to build everything defined in the docker-compose.yml file, with volumes and links in place:

docker-compose -f docker-compose.yml build
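docker-compose build only builds the images; the usual next step (not spelled out above) is to create and start the containers with docker-compose up:

# Create and start the proxy, web, and elasticsearch containers in the background
docker-compose up -d

# List the running containers and tail their logs
docker-compose ps
docker-compose logs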
Mike Mackintosh InfoSec Journals: These are old articles I’ve written but never posted because they were either incomplete or I was too critical of them. Thought it would be fun to share them.