I’ve got a heap of VMs running in my homelab. I have a virtual host that can handle it, but I’m getting close to consuming all of its RAM and so I figured I should look at using Docker again to free up some resources.
Services like Sickbeard, Couchpotato, DNS, DHCP, and so on don’t really need a whole VM devoted to each of them (they use around a hundred MB of RAM apiece on average), so to address this I’m going to create one or two application servers that run these services in containers.
I built a minimal CentOS 7 VM, and called it app1.example.com. I gave it 4GB of RAM and 30GB of HDD, and I added this hostname to DNS as well. I installed
yum-cron for auto-updates, and
cifs-utils to connect it to my media server:
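That install amounts to something like the following (enabling the yum-cron service afterwards is my assumption about the setup):

```shell
yum install yum-cron cifs-utils -y
# Make sure the auto-update service is actually running:
systemctl enable yum-cron
systemctl start yum-cron
```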
By default SELinux will interfere with some of the containers, so for now I’ll just turn it off until I can figure out the necessary policies to keep it happy:
setenforce 0 (this only disables enforcement temporarily; SELinux will start enforcing again on the next reboot)
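A sketch of that step; the config-file edit for keeping it off across reboots is an assumption about the usual approach:

```shell
# Put SELinux into permissive mode immediately (lost on reboot):
setenforce 0
# To keep it permissive across reboots, change the config file too:
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```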
Then I installed and started docker, straight from the standard CentOS repositories:
yum install docker -y
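The package alone doesn’t start the daemon, so presumably it also needs enabling and starting:

```shell
systemctl enable docker
systemctl start docker
```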
Next I need to define somewhere to store the docker configuration and persistent data that each container will create. I’ve chosen
mkdir -p /var/docker/data/
Now, to make things a little easier, I’ve used docker-compose to manage my containers. You could do this without compose but I like the simplicity of the configuration this way. It’s easily installed with
yum install epel-release -y && yum install python-pip -y
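With pip available, compose itself is then one more command (unpinned here, which just grabs the latest version):

```shell
pip install docker-compose
```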
We need to pick somewhere for the docker compose files too, so those’ll go into the same general area:
mkdir -p /var/docker/compose/
I’m going to have containers that need access to my photos and music and such, so I need a user and group defined that can access this share. I also need to store credentials to mount this network share, and in this instance I’m going to connect to it using cifs:
useradd -u 5000 mediaserver
I explicitly set the UID and GID of the mediaserver user and mediagroup group to make them easier to reference in the container configuration, which you’ll see further down.
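A sketch of the rest of that setup; the group name and GID follow the pattern above, but the server name, share name, credential file location, and password are all assumptions for illustration:

```shell
# GID matching the UID used for the mediaserver user:
groupadd -g 5000 mediagroup
usermod -a -G mediagroup mediaserver

# Store the share credentials somewhere only root can read:
cat > /root/.smbcredentials <<'EOF'
username=mediaserver
password=changeme
EOF
chmod 600 /root/.smbcredentials

# Mount the share via fstab so the permissions are fixed to our UID/GID:
mkdir -p /media
echo '//media.example.com/share /media cifs credentials=/root/.smbcredentials,uid=5000,gid=5000 0 0' >> /etc/fstab
mount /media
```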
The first service I’m going to “containerize” is Sickbeard.
First we need to define where the persistent data for it will live. We defined an overall directory earlier, but we need to segregate it into its own area for easier management:
mkdir -p /var/docker/data/sickbeard/
When I first tried this, I found a great docker hub image that seemed like it would do the job straight out of the box. Unfortunately one tiny piece of it didn’t work in my setup, so I copied what it did and tweaked it to define my own image instead.
First thing to do is create the dockerfile itself, as
/var/docker/dockerfiles/sickbeard/dockerfile with the following contents:
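As a rough sketch, a dockerfile for Sick-Beard along these lines (the base image, repository URL, and maintainer address are assumptions here) might look like:

```dockerfile
FROM centos:7
MAINTAINER admin@example.com

# Sick-Beard is a python 2 app; pull it straight from git
RUN yum install -y git python && yum clean all \
 && git clone https://github.com/midgetspy/Sick-Beard.git /sickbeard

# Startup script lives alongside this dockerfile
COPY sickbeard.sh /sickbeard.sh
RUN chmod +x /sickbeard.sh

EXPOSE 8081
CMD ["/sickbeard.sh"]
```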
This is exactly the same as Dominique’s, but I changed the maintainer name as I’m managing it myself from now on.
Next is the sickbeard.sh file that is run by that dockerfile, which lives right alongside it:
This is the script I had to tweak. Dominique’s has a step where it tries to change the permissions of the /media directory. Because mine is part of a cifs share and the permissions are managed in the
fstab file it would fail to complete. I know the permissions are correct already for the share, so I’ve omitted it from the script to make it work.
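A sketch of what the tweaked script might look like with that permissions step dropped (the paths and flags here are assumptions):

```shell
#!/bin/bash
# The original script chown'd /media at this point; that's omitted here
# because the cifs mount's permissions are already managed via fstab
# on the host.
exec python /sickbeard/SickBeard.py --nolaunch --datadir=/data
```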
Lastly, because we’re using docker-compose we need to define the sickbeard service in the compose yaml file. The context: setting in the service’s build configuration lets docker-compose know to look in that directory for the dockerfile instead of using an image from the hub.
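A sketch of that service definition, with paths matching the directories created earlier (the compose file format version is an assumption):

```yaml
version: '2'
services:
  sickbeard:
    build:
      context: /var/docker/dockerfiles/sickbeard
      dockerfile: dockerfile
    volumes:
      - /var/docker/data/sickbeard:/data
      - /media:/media
    ports:
      - "8081:8081"
    restart: always
```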
Now, you can start the container and see if it builds and runs:
docker-compose up with no parameters will start the containers and attach you to their output, and because you’re in the same directory as the yaml file it’ll be picked up automatically. This is useful for the first run to make sure everything starts ok. To run them in the background, kill the running containers with ctrl+c and instead run:
docker-compose up -d
And that’s it! You should be able to connect to http://app1.example.com:8081 to view the SickBeard interface.
We now have a rough pattern defined for creating containers, so let’s make a CouchPotato container as well in the same way. First setting it up where it’ll live:
mkdir -p /var/docker/data/couchpotato/
Then the dockerfile:
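Again the same shape as the sickbeard one; a sketch, where the repository URL and CouchPotato’s default port of 5050 are assumptions:

```dockerfile
FROM centos:7
MAINTAINER admin@example.com

RUN yum install -y git python && yum clean all \
 && git clone https://github.com/CouchPotato/CouchPotatoServer.git /couchpotato

EXPOSE 5050
CMD ["python", "/couchpotato/CouchPotato.py", "--data_dir", "/data"]
```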
Then finally adjusting the
docker-compose.yml file to include couchpotato:
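The new service slots in alongside sickbeard under services: — a sketch, with the dockerfile path assumed to follow the same layout:

```yaml
  couchpotato:
    build:
      context: /var/docker/dockerfiles/couchpotato
      dockerfile: dockerfile
    volumes:
      - /var/docker/data/couchpotato:/data
      - /media:/media
    ports:
      - "5050:5050"
    restart: always
```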
docker-compose up again to make sure it builds ok and then run
docker-compose up -d to run it in the background.
Creating a DHCP server is just as easy. Setting it up where the config will live:
mkdir -p /var/docker/data/dhcp/
Creating the DHCP config inside the data directory we just made:
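A minimal dhcpd.conf sketch; the subnet, address ranges, and option values are assumptions for illustration:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.2;
}
```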
The updated yaml file:
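Something along these lines; the particular image (networkboot/dhcpd, which reads its config from /data) and the interface name are assumptions:

```yaml
  dhcp:
    image: networkboot/dhcpd
    command: eth0                     # interface to listen on (assumed name)
    volumes:
      - /var/docker/data/dhcp:/data
    network_mode: host
    restart: always
```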
This time I’m just using a docker hub image, as the DHCP server is a lot simpler to define than the other images.
You might’ve also noticed the
network_mode: host directive. This ensures that the DHCP server is using the application server’s network adapters instead of an isolated network stack that docker would normally create.
Let’s do one more.
Pi-hole is a great DNS service you can run to block ads and telemetry services across your whole network.
This is probably the easiest container to configure, as I personally don’t care to store any configuration for the service at all. I’m happy to blow the container away and rebuild it whenever, so this is all I had to add to the yaml file to get it going:
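A sketch using the pihole/pihole image from the hub; the environment values and the host-networking choice (so DNS sits on port 53 of the host directly) are assumptions:

```yaml
  pihole:
    image: pihole/pihole
    environment:
      - TZ=Australia/Sydney           # assumed timezone
      - WEBPASSWORD=changeme
    network_mode: host
    restart: always
```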
This is the great thing about containers: because all of these services are now isolated from their data, I don’t have to care about the applications themselves any more, and the data just lives in a single directory.
So I can run:
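Something like the following, where the archive name and destination are assumptions:

```shell
# Stop all the containers:
cd /var/docker/compose && docker-compose stop
# Bundle every container's config and persistent data into one tarball:
tar -czvf /root/app1-docker-backup.tar.gz /var/docker
```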
and I’ve got every container’s configuration and data nicely bundled up in a single tarball.
If I have to rebuild these containers, I just need to:
- Set up another new app server (there’s really no need to back up the current one).
- Restore the backup into /var/docker/.
- Run cd /var/docker/compose && docker-compose up -d and be off and running with the exact same applications as before.
Containers are great, but I’ve found some applications don’t really like running in those contexts yet (or they do but I haven’t yet figured out the right configuration for them). For these small, basic services I can group them together into these dedicated app servers, and leave the bigger, legacy applications running in their own VMs for now.
Also, I built my app server with CentOS 7, but this really would work on any OS that supports docker. You’d just need to adjust the installation steps to suit the OS of your choice.