Tim's Blog

Information, Technology, Security, and other stuff.

Migrating Homelab Services to Docker

Published 2016-09-26

I've got a heap of VMs running in my homelab. I have a virtual host that can handle it, but I'm getting close to consuming all of its RAM, so I figured I should look at using Docker again to free up some resources.

Services like Sickbeard, Couchpotato, DNS, DHCP, and so on don't really each need a whole VM devoted to them (on average they each use around a hundred MB of RAM), so to address this I'm going to create one or two application servers with these services just running in containers.

Building the App Server

I built a minimal CentOS 7 VM and called it app1.example.com. I gave it 4GB of RAM and a 30GB disk, and added the hostname to DNS as well. I installed yum-cron for automatic updates, and cifs-utils to connect it to my media server:

sudo su
yum update -y
yum install yum-cron cifs-utils -y
sed -i 's/apply_updates = no/apply_updates = yes/g' /etc/yum/yum-cron.conf
systemctl start yum-cron && systemctl enable yum-cron

By default SELinux will interfere with some of the containers, so for now I'll just turn it off until I can figure out the necessary policies to keep it happy:

setenforce 0

(note: setenforce 0 only lasts until the next reboot, at which point SELinux will start enforcing again)
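If you'd rather have it stay permissive across reboots while you work out the policies, you can also change SELinux's config file (a sketch, assuming the default /etc/selinux/config location):

```shell
# Make SELinux permissive across reboots; it still logs denials,
# which is handy when writing the eventual policies.
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce 0   # apply immediately for the current boot
```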

Then I installed and started Docker, straight from the standard CentOS repositories:

yum install docker -y
systemctl start docker && systemctl enable docker

Next I need somewhere to store the Docker configuration and the persistent data that each container will create. I've chosen /var/docker/:

mkdir -p /var/docker/data/
mkdir -p /var/docker/compose/
mkdir -p /var/docker/dockerfiles/

Now, to make things a little easier, I'm using docker-compose to manage my containers. You could do this without Compose, but I like the simplicity of the configuration this way. It's easily installed with pip:

yum install epel-release -y && yum install python-pip -y
pip install --upgrade pip
pip install -U docker-compose
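It's worth sanity-checking both installs before going further, with something like:

```shell
# Confirm the docker daemon is up and responding
docker version

# Confirm docker-compose landed on the PATH
docker-compose --version
```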

Connecting my media server

Some containers are going to need access to my photos, music, and such, so I need a user and group defined that can access this share. I also need to store credentials to mount the network share; in this instance I'm connecting to it over CIFS.

groupadd -g 6000 mediagroup
useradd -u 5000 -g 6000 mediaserver

mkdir -p /mnt/mediaserver

touch /root/smb-credentials
echo "username=CIFS_Service
password=N0t4u2SeE" >> /root/smb-credentials
chmod 0600 /root/smb-credentials
echo '//MediaServer/Share /mnt/mediaserver  cifs  uid=5000,gid=6000,rw,credentials=/root/smb-credentials,file_mode=0775,dir_mode=0775 0 0' >> /etc/fstab
mount -a
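To confirm the share actually mounted with the ownership we asked for, a quick check (assuming the mount point above):

```shell
# Should show the cifs mount with the uid=5000,gid=6000 options
mount | grep /mnt/mediaserver

# Files should appear owned by uid 5000 / gid 6000
ls -ln /mnt/mediaserver | head
```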

I explicitly set the UID and GID of mediaserver and mediagroup to make them easier to reference in the container configuration, as you'll see further down.

Setting up the SickBeard Container

I'm going to start by containerizing Sickbeard.

First we need to define where its persistent data will live. We defined an overall directory earlier, but each service gets its own area under it for easier management:

mkdir -p /var/docker/data/sickbeard/
mkdir -p /var/docker/dockerfiles/sickbeard/
chown -R 5000:6000 /var/docker/data/sickbeard/

When I first tried this, I found a great Docker Hub image (Dominique's) that seemed like it would do the job straight out of the box. Unfortunately one tiny piece of it didn't work in my setup, so I copied what it did and tweaked it to define my own image instead.

First thing to do is create the dockerfile itself, as /var/docker/dockerfiles/sickbeard/dockerfile with the following contents:

FROM debian:8
MAINTAINER Timothy Quinn

RUN groupadd -r -g 666 sickbeard \
  && useradd -r -u 666 -g 666 -d /sickbeard sickbeard

ADD sickbeard.sh /sickbeard.sh
RUN chmod 755 /sickbeard.sh

RUN export VERSION=master \
  && apt-get -q update \
  && apt-get install -qy curl ca-certificates python-cheetah python-openssl \
  && curl -o /tmp/sickbeard.tar.gz https://codeload.github.com/midgetspy/Sick-Beard/tar.gz/${VERSION} \
  && tar xzf /tmp/sickbeard.tar.gz \
  && mv Sick-Beard-* sickbeard \
  && chown -R sickbeard: sickbeard \
  && apt-get -y remove curl \
  && apt-get -y autoremove \
  && apt-get -y clean \
  && rm -rf /var/lib/apt/lists/* \
  && rm -rf /tmp/*

VOLUME ["/datadir", "/media"]

EXPOSE 8081

WORKDIR /sickbeard

CMD ["/sickbeard.sh"]

This is exactly the same as Dominique's, but I changed the maintainer name as I'm managing it myself from now on.

Next is the sickbeard.sh file that the dockerfile runs; it lives alongside the dockerfile as /var/docker/dockerfiles/sickbeard/sickbeard.sh:

#!/bin/bash
set -e

USER="sickbeard"

echo "SickBeard settings"
echo "=================="
echo
echo "  User:     ${USER}"
echo "  UID:      ${SICKBEARD_UID:=666}"
echo "  GID:      ${SICKBEARD_GID:=666}"
echo
echo "  Config:   ${CONFIG:=/datadir/config.ini}"
echo

printf "Updating UID / GID... "
[[ $(id -u ${USER}) == ${SICKBEARD_UID} ]] || usermod -o -u ${SICKBEARD_UID} ${USER}
[[ $(id -g ${USER}) == ${SICKBEARD_GID} ]] || groupmod -o -g ${SICKBEARD_GID} ${USER}
echo "[DONE]"

printf "Set permissions..."
touch ${CONFIG}
chown -R ${USER}: /sickbeard
echo "[DONE]"

echo "Starting SickBeard..."
exec su -pc "./SickBeard.py --nolaunch --datadir=$(dirname ${CONFIG}) --config=${CONFIG}" ${USER}

This is the script I had to tweak. Dominique's version has a step that tries to change the permissions of the /media directory. Because mine is part of a CIFS share whose permissions are managed in the fstab entry, that step would fail. I know the permissions on the share are already correct, so I've omitted that step to make the script complete.

Lastly, because we're using docker-compose we need to define the sickbeard service in the compose yaml file. So, in /var/docker/compose/docker-compose.yml:

version: '2'
services:
    sickbeard:
        build:
            context: /var/docker/dockerfiles/sickbeard/
        container_name: sickbeard
        volumes:
            - /var/docker/data/sickbeard/:/datadir
            - /mnt/mediaserver/:/media
        ports:
            - 8081:8081
        environment:
            - SICKBEARD_UID=5000
            - SICKBEARD_GID=6000
        restart: always

The context: setting tells docker-compose to build from the dockerfile in that directory instead of pulling an image from Docker Hub.
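You can also build the image on its own before bringing anything up, which makes build errors easier to spot:

```shell
cd /var/docker/compose/
# Build (or rebuild) just the sickbeard service's image
docker-compose build sickbeard
```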

Now you can start the container and see if it builds and runs:

cd /var/docker/compose/
docker-compose up

Running docker-compose up with no parameters starts the containers and attaches you to their output, and because you're in the same directory as the yaml file it's picked up automatically. This is useful for the first run, to make sure everything starts OK. To run the containers in the background instead, kill the attached ones with ctrl+c and run:

docker-compose up -d

And that's it! You should be able to connect to http://app1.example.com:8081 to view the SickBeard interface.

Setting up the CouchPotato Container

We now have a rough pattern for creating containers, so let's build a CouchPotato container the same way. First, set up where it'll live:

mkdir -p /var/docker/data/couchpotato/
mkdir -p /var/docker/dockerfiles/couchpotato/
chown -R 5000:6000 /var/docker/data/couchpotato/

Then the dockerfile:

FROM debian:8
MAINTAINER Timothy Quinn

RUN groupadd -r -g 666 couchpotato \
  && useradd -r -u 666 -g 666 -d /couchpotato couchpotato

ADD couchpotato.sh /couchpotato.sh
RUN chmod 755 /couchpotato.sh

RUN export VERSION=master \
  && apt-get -q update \
  && apt-get install -qy curl ca-certificates python-pip python-dev libz-dev libxml2-dev libxslt1-dev gcc \
  && curl -o /tmp/couchpotato.tar.gz https://codeload.github.com/CouchPotato/CouchPotatoServer/tar.gz/${VERSION} \
  && tar xzf /tmp/couchpotato.tar.gz \
  && mv CouchPotatoServer-* couchpotato \
  && chown -R couchpotato: couchpotato \
  && pip install cheetah lxml pyopenssl \
  && apt-get -y remove curl python-dev libz-dev libxml2-dev libxslt1-dev gcc \
  && apt-get -y autoremove \
  && apt-get -y clean \
  && rm -rf /var/lib/apt/lists/* \
  && rm -rf /tmp/*

VOLUME ["/datadir", "/media"]

EXPOSE 5050

WORKDIR /couchpotato

CMD ["/couchpotato.sh"]

Then couchpotato.sh:

#!/bin/bash
set -e

USER="couchpotato"

echo "CouchPotato Settings"
echo "===================="
echo
echo "  User:     ${USER}"
echo "  UID:      ${CP_UID:=666}"
echo "  GID:      ${CP_GID:=666}"
echo
echo "  Config:   ${CONFIG:=/datadir/config.ini}"
echo

printf "Updating UID / GID... "
[[ $(id -u ${USER}) == ${CP_UID} ]] || usermod -o -u ${CP_UID} ${USER}
[[ $(id -g ${USER}) == ${CP_GID} ]] || groupmod -o -g ${CP_GID} ${USER}
echo "[DONE]"

printf "Set permissions... "
touch ${CONFIG}
chown -R ${USER}: /couchpotato
chown -R ${USER}: /datadir $(dirname ${CONFIG})
echo "[DONE]"

echo "Starting CouchPotato..."
exec su -pc "./CouchPotato.py --data_dir=$(dirname ${CONFIG}) --config_file=${CONFIG}" ${USER}

Then finally adjusting the docker-compose.yml file to include couchpotato:

version: '2'
services:
    couchpotato:
        build:
            context: /var/docker/dockerfiles/couchpotato/
        container_name: couchpotato
        volumes:
            - /var/docker/data/couchpotato/:/datadir
            - /mnt/mediaserver/:/media
        ports:
            - 5050:5050
        environment:
            - CP_UID=5000
            - CP_GID=6000
        restart: always
    sickbeard:
        build:
            context: /var/docker/dockerfiles/sickbeard/
        container_name: sickbeard
        volumes:
            - /var/docker/data/sickbeard/:/datadir
            - /mnt/mediaserver/:/media
        ports:
            - 8081:8081
        environment:
            - SICKBEARD_UID=5000
            - SICKBEARD_GID=6000
        restart: always

Run docker-compose up again to make sure it builds OK, then docker-compose up -d to run it in the background.

Setting up a DHCP Container

Creating a DHCP server is just as easy. Setting it up where the config will live:

mkdir -p /var/docker/data/dhcp/

Creating the DHCP config in /var/docker/data/dhcp/dhcpd.conf:

authoritative;

log-facility local7;

subnet 192.168.20.0 netmask 255.255.255.0 {
  range 192.168.20.100 192.168.20.199;
  option domain-name-servers 192.168.20.2, 192.168.20.3;
  option domain-name "example.com";
  option routers 192.168.20.1;
  default-lease-time 600;
  max-lease-time 7200;
}
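Before wiring it into compose, you can get dhcpd to syntax-check the config by running it one-off inside the same image (a sketch; it assumes the networkboot/dhcpd image ships the ISC dhcpd binary, which is its whole purpose):

```shell
# -t tests the configuration and exits without serving any leases;
# -cf points dhcpd at our config file inside the mounted volume
docker run --rm --entrypoint dhcpd \
    -v /var/docker/data/dhcp:/data \
    networkboot/dhcpd -t -cf /data/dhcpd.conf
```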

The updated yaml file:

version: '2'
services:
    couchpotato:
        build:
            context: /var/docker/dockerfiles/couchpotato/
        container_name: couchpotato
        volumes:
            - /var/docker/data/couchpotato/:/datadir
            - /mnt/mediaserver/:/media
        ports:
            - 5050:5050
        environment:
            - CP_UID=5000
            - CP_GID=6000
        restart: always
    sickbeard:
        build:
            context: /var/docker/dockerfiles/sickbeard/
        container_name: sickbeard
        volumes:
            - /var/docker/data/sickbeard/:/datadir
            - /mnt/mediaserver/:/media
        ports:
            - 8081:8081
        environment:
            - SICKBEARD_UID=5000
            - SICKBEARD_GID=6000
        restart: always
    dhcp:
        image: networkboot/dhcpd
        container_name: dhcp
        volumes:
            - /var/docker/data/dhcp:/data
        network_mode: host
        restart: always

This time I'm just using a docker hub image, as the DHCP server is a lot simpler to define than the other images.

You might've also noticed the network_mode: host directive. This makes the DHCP server use the application server's network interfaces directly instead of the isolated network stack Docker would normally create, which DHCP needs in order to see broadcast traffic on the LAN.

Let's do one more.

Setting up a Pi-hole Container

Pi-hole is a great DNS service you can run to block ads and telemetry services across your whole network.

This is probably the easiest container to configure, as I personally don't care about storing any configuration for the service at all. I'm happy to blow the container away and rebuild it whenever, so this is all I had to add to the yaml file to get it going:

version: '2'
services:
    couchpotato:
        build:
            context: /var/docker/dockerfiles/couchpotato/
        container_name: couchpotato
        volumes:
            - /var/docker/data/couchpotato/:/datadir
            - /mnt/mediaserver/:/media
        ports:
            - 5050:5050
        environment:
            - CP_UID=5000
            - CP_GID=6000
        restart: always
    sickbeard:
        build:
            context: /var/docker/dockerfiles/sickbeard/
        container_name: sickbeard
        volumes:
            - /var/docker/data/sickbeard/:/datadir
            - /mnt/mediaserver/:/media
        ports:
            - 8081:8081
        environment:
            - SICKBEARD_UID=5000
            - SICKBEARD_GID=6000
        restart: always
    dhcp:
        image: networkboot/dhcpd
        container_name: dhcp
        volumes:
            - /var/docker/data/dhcp:/data
        network_mode: host
        restart: always
    pihole:
        image: diginc/pi-hole
        container_name: pihole
        network_mode: host
        cap_add:
            - NET_ADMIN
        environment:
            - DNS1=8.8.8.8
            - DNS2=8.8.4.4
            - ServerIP=192.168.20.6
        restart: always

Easy-peasy.
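To check it's actually answering, point a DNS query at the app server from another machine (assuming 192.168.20.6 is its address and you have dig handy):

```shell
# A normal domain should resolve as usual...
dig +short example.org @192.168.20.6

# ...while a domain on the blocklists should come back with the
# Pi-hole's own address (or 0.0.0.0, depending on the blocking mode)
dig +short doubleclick.net @192.168.20.6
```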

Backing up your Data

This is the great thing about containers. Because all of these services are now isolated from their data, I don't really care about the applications themselves any more; the data just lives in a single directory.

So I can run:

# Stop all the containers:
docker-compose stop

# Backup the data:
tar -cpzf /app1-data.tar.gz /var/docker/

# Start all the containers:
docker-compose up -d

and I've got every container's configuration and data nicely bundled up in a single tarball.

If I have to rebuild these containers, I just need to:

  1. Set up another new app server (there's really no need to back up the current one).
  2. Install docker and docker-compose.
  3. Restore the /var/docker/ directory.
  4. Run cd /var/docker/compose && docker-compose up -d and be off and running with exactly the same applications as before.
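Steps 3 and 4 look something like this on the fresh server (assuming the tarball was created with the backup command above, which stores paths relative to /):

```shell
# Restore the backed-up /var/docker/ tree onto the new server
tar -xpzf /app1-data.tar.gz -C /

# Rebuild the images and start everything exactly as before
cd /var/docker/compose/
docker-compose up -d
```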

Containers are great, but I've found some applications don't really like running in them yet (or they do, but I haven't figured out the right configuration for them). For these small, basic services I can group them together onto dedicated app servers, and leave the bigger, legacy applications running in their own VMs for now.

Also, I built my app server on CentOS 7, but this would really work on any OS that supports Docker. You'd just need to adjust the installation steps to suit the OS of your choice.