Welcome Docker

I joke sometimes that I am “too old for this stuff” when it comes to technology. It’s a half truth, though. I remember the days when deploying prod changes meant connecting to the remote server in Dreamweaver and saving your file, or compiling the site code into an RPM/DEB package that would be pushed to an internal repository and then pulled via Chef/Puppet, and so on.

Docker has truly simplified the deployment process, but I still compare containers versus microkernels to Blu-ray versus HD DVD: it was a landslide because the prevailing technology had corporate backing. In practice, this means that when I deploy my code, it goes into a container, gets pushed to a registry, and is then orchestrated into production using something like ECS or Kubernetes. Making changes is faster, and I have more confidence, since my rollback is to take image:n-2, tag it as latest, and re-push.

Streamlining the Build Process

Most of my projects are React + Go, which is extremely common these days. The one thing I will say since getting an M1 MacBook Air, though, is that native extensions can throw a wrench into your build process. For example, if you build a native Node module, or something with CGO, on one architecture but your runtime environment uses another, you will get architecture mismatches or build failures. Building your React and Go code inside the same container image it will run in for production is, in my opinion, the simplest and most consistent environment you can achieve.

Below you’ll find my normal boilerplate Dockerfile. To understand why it works, though, we first need to understand the directory structure: the React app is stored in-line with the project under the web/ directory.

Within main.go we use the embed package to include the bundled JavaScript using something like //go:embed web/build. Due to the project structure, we reference the web and build subfolders explicitly in the directive.
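In context, main.go looks something like this. This is a sketch assuming the layout above; the port and handler wiring are my own choices, and the embed directive will only compile once web/build/ actually exists:

```go
package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
)

// The production React bundle is embedded at compile time.
// go build fails if web/build/ does not exist.
//
//go:embed web/build
var content embed.FS

func main() {
	// Strip the web/build prefix so files are served from the site root.
	build, err := fs.Sub(content, "web/build")
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/", http.FileServer(http.FS(build)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```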

This means we expect the production React artifacts to exist within web/build/; that directory must exist for go build to succeed. Once the Go binary is built, it embeds the required files locally and serves them via http.ListenAndServe. This allows us to run one binary with no special HAProxy, Nginx, or Apache server configuration; it will just work.
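A sketch of that boilerplate Dockerfile, with stage names matching the breakdown that follows — the specific base-image versions, paths, and output names here are my assumptions, not a canonical recipe:

```dockerfile
# Stage 1: build the React bundle.
FROM node:18-alpine AS node-build
# make, g++, and python save headaches with native node modules on M1/arm64.
RUN apk add --no-cache make g++ python3
WORKDIR /app/web
COPY web/ .
RUN yarn install && yarn build

# Stage 2: build the Go binary, embedding the React artifacts.
FROM golang:1.21-alpine AS go-build
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
# Place the bundle where //go:embed web/build expects it.
COPY --from=node-build /app/web/build ./web/build
RUN go build -o /server .

# Stage 3: fresh runtime image with just the binary.
FROM alpine:latest
COPY --from=go-build /server /server
ENTRYPOINT ["/server"]
```

From the project root, something like docker build -t myapp . then produces a small runtime image you can tag and push as-is.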

Breaking It Down

We have two build containers above, one for Node.js and one for Go. Since the Go project has a build dependency on the built React bundles, we create a node-build image first. Installing make, g++, and python via apk saves headaches on M1 Macs due to compatibility and architecture issues.

Once the Alpine build image is ready, we copy the web/ working directory into it, and use yarn to install dependencies and build the project.

Once the Node.js build is complete, we move on to the Go side. We do the same thing, creating a go-build image. It uses go mod to download the dependencies, then copies the built assets from the node-build image back into the web/build directory, and runs go build.

Lastly, we use a fresh Alpine image, copy in the compiled Go binary, and set it as the entrypoint. This approach has solved a lot of headaches, let me boilerplate a lot of code, and removed a lot of uncertainty from the build process.

Your Turn!

What is your go-to Dockerfile for building and pushing to prod?
