Migrating Homelab Services to Docker
I’ve got a heap of VMs running in my homelab. I have a virtual host that can handle it, but I’m getting close to consuming all of its RAM and so I figured I should look at using Docker again to free up some resources.
Services like Sickbeard, Couchpotato, DNS, DHCP, and so on don’t really need a whole VM each (they use around a hundred MB of RAM apiece on average), so to address this I’m going to create one or two application servers that run these services in containers.
Building the App Server
I built a minimal CentOS 7 VM and called it app1.example.com. I gave it 4GB of RAM and 30GB of HDD, and added the hostname to DNS as well. I installed yum-cron for auto-updates, and cifs-utils to connect it to my media server:

```sh
sudo su
yum install -y yum-cron cifs-utils
```
By default SELinux will interfere with some of the containers, so for now I’ll just turn it off until I can figure out the necessary policies to keep it happy:

```sh
setenforce 0
```

(note: setenforce 0 only disables enforcement until the next reboot, at which point SELinux will start enforcing again)
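If you want the change to survive reboots until you’ve worked out the right policies, you can set SELinux to permissive mode in its config file instead:

```
# /etc/selinux/config
SELINUX=permissive
SELINUXTYPE=targeted
```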
Then I installed and started docker, straight from the standard CentOS repositories:

```sh
yum install docker -y
systemctl enable docker
systemctl start docker
```
Next I need to define somewhere to store the docker configuration and persistent data that each container will create. I’ve chosen /var/docker/:

```sh
mkdir -p /var/docker/data/
```
Now, to make things a little easier, I’ve used docker-compose to manage my containers. You could do this without compose, but I like the simplicity of the configuration this way. It’s easily installed with pip:

```sh
yum install epel-release -y && yum install python-pip -y
pip install docker-compose
```
We need to pick somewhere for the docker compose files too, so those’ll go into the same general area:

```sh
mkdir -p /var/docker/compose/
```
Connecting my media server
I’m going to have containers that need access to my photos and music and such, so I need a user and group defined that can access this share. I also need to store credentials to mount this network share, and in this instance I’m going to connect to it using cifs.

```sh
groupadd -g 5000 mediagroup
useradd -u 5000 -g 5000 mediaserver
```
I explicitly set the UID and GID of the mediaserver user and the mediagroup group to make them easier to reference in the container configuration, which you’ll see further down.
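To actually mount the share, the credentials go in a root-only file and the mount is defined in fstab. Something along these lines works (the server name, share name, and mount point here are placeholders for your own setup):

```
# /root/.smbcredentials (chmod 600)
username=mediaserver
password=secret

# /etc/fstab entry -- uid/gid match the user and group created above
//mediaserver.example.com/media  /mnt/media  cifs  credentials=/root/.smbcredentials,uid=5000,gid=5000  0  0
```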
Setting up the SickBeard Container
I’m going to start by containerizing SickBeard.
First we need to define where the persistent data for it will live. We defined an overall directory earlier, but we need to segregate it into its own area too for management:

```sh
mkdir -p /var/docker/data/sickbeard/
```
When I first tried this, I found a great docker hub image that seemed like it would do the job straight out of the box. Unfortunately one tiny piece of it didn’t work in my setup, so I copied what it did and tweaked it to define my own image instead.
First thing to do is create the dockerfile itself, as /var/docker/dockerfiles/sickbeard/dockerfile with the following contents:

```dockerfile
FROM debian:8
```
This is exactly the same as Dominique’s, but I changed the maintainer name as I’m managing it myself from now on.
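Beyond the base image, a dockerfile for this job typically installs SickBeard’s dependencies, fetches the application, and sets the startup script as the entrypoint. The details below are my assumptions rather than Dominique’s exact file:

```dockerfile
FROM debian:8

# SickBeard is a Python 2 application; Cheetah is its template engine
RUN apt-get update && \
    apt-get install -y python python-cheetah git && \
    rm -rf /var/lib/apt/lists/*

# Fetch SickBeard from the upstream repository (pin a release in practice)
RUN git clone https://github.com/midgetspy/Sick-Beard.git /sickbeard

# The startup script lives next to the dockerfile and is copied in
COPY sickbeard.sh /sickbeard.sh
RUN chmod +x /sickbeard.sh

EXPOSE 8081
CMD ["/sickbeard.sh"]
```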
Next is the sickbeard.sh file that is run in that dockerfile, which lives next to the dockerfile as /var/docker/dockerfiles/sickbeard/sickbeard.sh.
This is the script I had to tweak. Dominique’s has a step where it tries to change the permissions of the /media directory. Because mine is part of a cifs share whose permissions are managed in the fstab file, that step would fail to complete. I know the permissions are already correct for the share, so I’ve omitted it from the script to make it work.
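For reference, a startup script along these lines does the job, with the /media permissions step left out as described. The paths and flags here are my assumptions:

```sh
#!/bin/bash
# Launch SickBeard in the foreground so docker can track the process.
# Note: no chown of /media here -- the cifs mount's permissions are
# already managed in fstab on the host.
exec python /sickbeard/SickBeard.py \
    --nolaunch \
    --datadir /config \
    --port 8081
```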
Lastly, because we’re using docker-compose we need to define the sickbeard service in the compose yaml file. So, in /var/docker/compose/docker-compose.yml:

```yaml
version: '2'
```
The context: configuration lets docker-compose know to build from the dockerfile in that directory instead of pulling an image from the hub.
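Filled out, a sickbeard service entry along these lines matches the setup described. The volume paths (and the /mnt/media mount point in particular) are my assumptions:

```yaml
version: '2'
services:
  sickbeard:
    build:
      context: /var/docker/dockerfiles/sickbeard/
    ports:
      - "8081:8081"
    volumes:
      - /var/docker/data/sickbeard:/config
      - /mnt/media:/media
    restart: always
```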
Now, you can start the container and see if it builds and runs:

```sh
cd /var/docker/compose/
docker-compose up
```
Running docker-compose up with no parameters will start the containers and attach you to their output, and by being in the same directory as the yaml file it’ll automatically be used. This is useful for the first run to make sure they start ok. To run them in the background, kill the attached run with ctrl+c and instead run:
```sh
docker-compose up -d
```
And that’s it! You should be able to connect to http://app1.example.com:8081 to view the SickBeard interface.
Setting up the CouchPotato Container
We now have a rough pattern defined for creating containers, so let’s make a CouchPotato container as well in the same way. First, setting up where it’ll live:

```sh
mkdir -p /var/docker/data/couchpotato/
```
Then the dockerfile:

```dockerfile
FROM debian:8
```
Then couchpotato.sh, which follows the same pattern as the sickbeard script.
Then finally adjusting the docker-compose.yml file to include couchpotato:

```yaml
version: '2'
```
Run docker-compose up again to make sure it builds ok, and then run docker-compose up -d to run it in the background.
Setting up a DHCP Container
Creating a DHCP server is just as easy. Setting it up where the config will live:
```sh
mkdir -p /var/docker/data/dhcp/
```
Creating the DHCP config in /var/docker/data/dhcp/dhcpd.conf:

```
authoritative;
```
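Beyond the authoritative directive, a minimal dhcpd.conf declares lease times and a subnet with a lease range. Something like this, with the addresses as placeholders for your own network:

```
authoritative;

default-lease-time 86400;
max-lease-time 86400;

subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.2;
}
```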
The updated yaml file:

```yaml
version: '2'
```
This time I’m just using a docker hub image, as the DHCP server is a lot simpler to define than the other images.
You might’ve also noticed the network_mode: host directive. This ensures that the DHCP server uses the application server’s network adapters directly, instead of the isolated network stack that docker would normally create, which it needs in order to see broadcast DHCP requests from the rest of the network.
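A dhcp entry under the services: key would look roughly like this. The image name here is an assumption; any dhcpd image from the hub that reads its config from a mounted volume will do:

```yaml
  dhcp:
    image: networkboot/dhcpd    # assumed hub image
    network_mode: host
    volumes:
      - /var/docker/data/dhcp:/data
    restart: always
```

Note that with network_mode: host there’s no ports: section, since the container binds directly to the host’s interfaces.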
Let’s do one more.
Setting up a Pi-hole Container
Pi-hole is a great DNS service you can run to block ads and telemetry services across your whole network.
This is probably the easiest container to configure, as I personally don’t care about storing any configuration for the service at all. I’m happy to blow the container away and rebuild it whenever, so this is all I had to add to the yaml file to get it going:
```yaml
version: '2'
```
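A pi-hole entry under the services: key along these lines is enough. The image name, port mappings, and environment variables are my assumptions; Pi-hole serves DNS on 53/tcp and 53/udp and its admin interface on port 80:

```yaml
  pihole:
    image: pihole/pihole:latest   # assumed hub image
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80"                 # admin interface on an assumed spare port
    environment:
      TZ: "Australia/Sydney"      # assumed timezone
    restart: always
```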
Easy-peasy.
Backing up your Data
This is the great feature of containers. Because we now have all of these services isolated from their data, I actually don’t care about the applications any more and our data just lives in a single directory.
So I can run:

```sh
# Stop all the containers:
cd /var/docker/compose/ && docker-compose stop

# Bundle the configuration and data into a tarball:
tar -czvf docker-backup.tar.gz /var/docker/
```
and I’ve got every container’s configuration and data nicely bundled up in a single tarball.
If I have to rebuild these containers, I just need to:

- Set up another new app server (there’s really no need to back up the current one).
- Install docker and docker-compose.
- Restore the /var/docker/ directory.
- Run cd /var/docker/compose && docker-compose up -d and be off running with the exact same applications running as before.
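On a fresh CentOS 7 box, the restore steps above can be sketched as follows (the tarball name is a placeholder, and I’m assuming it was created with absolute paths so it extracts back into /var/docker/):

```sh
# Install docker and docker-compose
yum install -y docker epel-release
yum install -y python-pip
pip install docker-compose
systemctl enable docker && systemctl start docker

# Restore the /var/docker/ directory from the backup tarball
tar -xzvf docker-backup.tar.gz -C /

# Rebuild and start everything in the background
cd /var/docker/compose && docker-compose up -d
```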
Containers are great, but I’ve found some applications don’t really like running in those contexts yet (or they do but I haven’t yet figured out the right configuration for them). For these small, basic services I can group them together into these dedicated app servers, and leave the bigger, legacy applications running in their own VMs for now.
Also, I built my app server with CentOS 7, but this would really work on any OS that supports docker. You’d just need to adjust the installation steps to suit the OS of your choice.