Docker minimal deployment setup

I found myself with a minimal setup: a fairly small VPS used as a test/staging server, sitting behind a VPN. For a project I needed to deploy frontend services, but rather than a Kubernetes setup I wanted the same basic features on a plain VPS where I wasn't even root.

A couple of notes beforehand:

Yes, this could be done far more gracefully, but as we all know, done is better than perfect. Note one: the deployment process is far from perfect and does fail from time to time; usually re-running the deployment step fixes it. Note two: this isn't the production environment, which has to be up all the time, so deleting and recreating services is good enough.

Requirements

  • automated deployment
  • deploy directly from GitLab
  • review deployments (non-master branch)
  • SSL

Not that bad, but Kubernetes would do all of this for you, and it has a really nice integration with GitLab.

So I had a choice to make on how to do it. I could run Rancher or a GitLab Runner on the server, but given the performance of the VPS and the requirements that seemed like a fair bit of overkill. Eventually, after some testing of course, I went with Traefik plus quick-and-dirty SSH to execute docker commands directly.

Traefik

The setup is a docker-compose.yml running Traefik, enabling SSL via Let's Encrypt and exposing the dashboard behind basic auth.

version: "3.3"

services:
  traefik:
    image: traefik:v2.4
    container_name: "traefik"
    networks:
      - traefik-proxy
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"

      # Do not expose containers unless explicitly told so
      - "--providers.docker.exposedbydefault=false"
      
      # Traefik will listen for incoming requests on port 80 (http)
      - "--entrypoints.http.address=:80"
      # Traefik will listen for incoming requests on port 443 (https)
      - "--entrypoints.https.address=:443"

      # tlschallenge
      # - "--certificatesresolvers.myresolver.acme.tlschallenge=true"

      # OR! httpchallenge
      - "--certificatesresolvers.myresolver.acme.httpchallenge=true"
      - "--certificatesresolvers.myresolver.acme.httpchallenge.entrypoint=http"
      
      # Store the certificates on a path under our volume
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.myresolver.acme.email=CHANGE_ME@EXAMPLE.COM"
    ports:
      - "80:80"
      - "443:443"
    restart: unless-stopped
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "./traefik/letsencrypt:/letsencrypt"
      - "./traefik/basic_auth:/basic_auth"
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
      - "traefik.enable=true"
      - "traefik.docker.network=traefik-proxy"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik.staging.EXAMPLE.COM`)"
      
      - "traefik.http.middlewares.traefik-auth.basicauth.users=admin:$$apr1$$003i2fSV$$hTJe8DY5Pq.XXXXXXXX."
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`traefik.staging.EXAMPLE.COM`)"
      - "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=myresolver"
      - "traefik.http.routers.traefik-secure.service=api@internal"

networks:
  traefik-proxy:
    external: true

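Before the first docker-compose up -d, a few things have to exist on the host: the shared traefik-proxy network, the ACME storage file (with strict permissions, or Traefik refuses to use it), and the basic auth hash for the dashboard. A one-time bootstrap sketch; the paths match the volume mounts above, and 's3cret' is a placeholder password:

```shell
# One-time host bootstrap (sketch; paths match the volume mounts above).
mkdir -p traefik/letsencrypt traefik/basic_auth
touch traefik/letsencrypt/acme.json
chmod 600 traefik/letsencrypt/acme.json   # Traefik refuses a world-readable acme.json

# The shared proxy network has to exist before any container attaches to it.
command -v docker > /dev/null && docker network create traefik-proxy 2> /dev/null || true

# Generate the dashboard basic auth hash; note every "$" must be doubled
# to "$$" inside docker-compose.yml ("s3cret" is a placeholder).
openssl passwd -apr1 's3cret' | sed -e 's/\$/$$/g'
```
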
Problems to solve

  • connecting to the VPN
  • connecting to the VPS
  • deployment itself

VPN

With a normal VPN this shouldn't be that big of a deal: you use an image that has a VPN client in it and that's that. Sadly that wasn't the case here; this VPN uses a static base code combined with an OTP that changes every 30 or 60 seconds.

Deployment image

Breaking it down: in order to deploy at all we first need to connect to the VPN, and to do that we need a password which we have to generate. This is where pyotp comes in; it can generate the OTP code.
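
Under the hood pyotp simply implements RFC 6238 (TOTP). The same code can be derived in plain shell, which also shows what the generated part of the password actually is. A sketch, assuming GNU coreutils base32, xxd and openssl are available:

```shell
# RFC 6238 TOTP in shell -- roughly what pyotp.TOTP(secret).now() computes.
totp() {
  local secret=$1 now=${2:-$(date +%s)} digits=${3:-6}
  local key counter mac offset bin
  key=$(printf '%s' "$secret" | base32 -d | xxd -p | tr -d '\n')  # base32 secret -> hex key
  counter=$(printf '%016X' $((now / 30)))                         # 8-byte big-endian time step
  mac=$(printf '%s' "$counter" | xxd -r -p \
        | openssl dgst -sha1 -mac HMAC -macopt "hexkey:$key" | awk '{print $NF}')
  offset=$((16#${mac:39:1} * 2))                                  # dynamic truncation (RFC 4226)
  bin=$(( (16#${mac:offset:8}) & 0x7fffffff ))
  printf '%0*d\n' "$digits" $((bin % 10**digits))
}

totp "JBSWY3DPEHPK3PXP"  # hypothetical base32 secret; prints the current 6-digit code
```

The deploy job later does the same via a pyotp one-liner and prefixes the static user code to the result.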

Dockerfile

Prepping the image. Since I use Ubuntu the most, I simply went with that; yes, I am aware this could probably be made smaller with Alpine, for example.

FROM ubuntu:20.04

# Install openvpn if not available.
RUN which openvpn || (apt-get update -y -qq && apt-get install -y -qq openvpn)

# Install ssh-agent if not available.
RUN which ssh-agent || (apt-get update -y -qq && apt-get install openssh-client -y -qq)

# Install ping if not available.
RUN which ping || (apt-get update -y -qq && apt-get install inetutils-ping -y -qq)

# Install Python 3 and pip, then pyotp for generating the OTP.
RUN apt-get install -y -qq python3 python3-distutils
ADD https://bootstrap.pypa.io/get-pip.py get-pip.py
RUN python3 get-pip.py
RUN pip install pyotp

build.sh

The script that is run at build time (of the project) to actually create the image. This is a heavily stripped-down version of GitLab's own build.sh, which is part of their auto-build image.

#!/bin/bash -e

# build stage script for Auto-DevOps

if ! docker info &>/dev/null; then
  if [ -z "$DOCKER_HOST" ] && [ "$KUBERNETES_PORT" ]; then
    export DOCKER_HOST='tcp://localhost:2375'
  fi
fi

if [[ -n "$CI_REGISTRY" && -n "$CI_REGISTRY_USER" ]]; then
  echo "Logging in to GitLab Container Registry with CI credentials..."
  echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
fi

image_previous="$CI_APPLICATION_REPOSITORY:$CI_COMMIT_BEFORE_SHA"
image_tagged="$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG"
# image_latest="$CI_APPLICATION_REPOSITORY:latest"

if [[ "$AUTO_DEVOPS_BUILD_IMAGE_CNB_ENABLED" != "false" && ! -f Dockerfile && -z "${DOCKERFILE_PATH}" ]]; then
  builder=${AUTO_DEVOPS_BUILD_IMAGE_CNB_BUILDER:-"heroku/buildpacks:18"}
  echo "Building Cloud Native Buildpack-based application with builder ${builder}..."
  buildpack_args=()
  if [[ -n "$BUILDPACK_URL" ]]; then
    buildpack_args=('--buildpack' "$BUILDPACK_URL")
  fi
  env_args=()
  if [[ -n "$AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES" ]]; then
    mapfile -t env_arg_names < <(echo "$AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES" | tr ',' "\n")
    for env_arg_name in "${env_arg_names[@]}"; do
      env_args+=('--env' "$env_arg_name")
    done
  fi
  pack build tmp-cnb-image \
    --builder "$builder" \
    "${env_args[@]}" \
    "${buildpack_args[@]}" \
    --env HTTP_PROXY \
    --env http_proxy \
    --env HTTPS_PROXY \
    --env https_proxy \
    --env FTP_PROXY \
    --env ftp_proxy \
    --env NO_PROXY \
    --env no_proxy

  cp /build/cnb.Dockerfile Dockerfile

  docker build \
    --build-arg source_image=tmp-cnb-image \
    --tag "$image_tagged" \
    .

  docker push "$image_tagged"
  exit 0
fi

if [[ -n "${DOCKERFILE_PATH}" ]]; then
  echo "Building Dockerfile-based application using '${DOCKERFILE_PATH}'..."
else
  export DOCKERFILE_PATH="Dockerfile"

  if [[ -f "${DOCKERFILE_PATH}" ]]; then
    echo "Building Dockerfile-based application..."
  else
    echo "Building Heroku-based application using gliderlabs/herokuish docker image..."
    erb -T - /build/Dockerfile.erb > "${DOCKERFILE_PATH}"
  fi
fi

if [[ ! -f "${DOCKERFILE_PATH}" ]]; then
  echo "Unable to find '${DOCKERFILE_PATH}'. Exiting..." >&2
  exit 1
fi

build_secret_args=''
if [[ -n "$AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES" ]]; then
  build_secret_file_path=/tmp/auto-devops-build-secrets
  "$(dirname "$0")"/export-build-secrets > "$build_secret_file_path"
  build_secret_args="--secret id=auto-devops-build-secrets,src=$build_secret_file_path"

  echo 'Activating Docker BuildKit to forward CI variables with --secret'
  export DOCKER_BUILDKIT=1
fi

echo "Attempting to pull a previously built image for use with --cache-from..."
docker image pull --quiet "$image_previous" || \
  echo "No previously cached image found. The docker build will proceed without using a cached image"

# shellcheck disable=SC2154 # missing variable warning for the lowercase variables
# shellcheck disable=SC2086 # double quoting for globbing warning for $build_secret_args and $AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS
docker build \
  --cache-from "$image_previous" \
  $build_secret_args \
  -f "$DOCKERFILE_PATH" \
  --build-arg BUILDPACK_URL="$BUILDPACK_URL" \
  --build-arg HTTP_PROXY="$HTTP_PROXY" \
  --build-arg http_proxy="$http_proxy" \
  --build-arg HTTPS_PROXY="$HTTPS_PROXY" \
  --build-arg https_proxy="$https_proxy" \
  --build-arg FTP_PROXY="$FTP_PROXY" \
  --build-arg ftp_proxy="$ftp_proxy" \
  --build-arg NO_PROXY="$NO_PROXY" \
  --build-arg no_proxy="$no_proxy" \
  $AUTO_DEVOPS_BUILD_IMAGE_EXTRA_ARGS \
  --tag "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG" \
  .

docker push "$image_tagged"

Since I only have a single project running on this server, I simply included this in a separate build directory within that project. Should the server run multiple projects, I would move this to a separate project and include/use it in each project's deployment.

build-deploy:
  stage: build
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-build-image:v0.4.0"
  variables:
    DOCKER_TLS_CERTDIR: ""
  services:
    - docker:19.03.12-dind
  script:
    - |
      export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}
      export CI_APPLICATION_TAG=deploy-image      
    - cd deploy
    - chmod +x build.sh
    - ./build.sh
  only:
    changes:
      - deploy/Dockerfile
      - deploy/build.sh

Deployment

Trying to stay as close as possible to the default GitLab setup, I override the .auto-deploy YAML entry. The full original can be found at Jobs/Build.gitlab-ci.yml.

To use it you include the following, usually simply at the end of .gitlab-ci.yml.

include:
  - template: Jobs/Build.gitlab-ci.yml

auto-deploy

Keep in mind, most of the variables in the YAML are set in the project's CI/CD variables.

Breaking this down, we use the previously described build image and make it:

  1. generate the OTP
  2. connect to the VPN
  3. ping the test server to check the connection
  4. prep the SSH folders and keys

.auto-deploy:
  stage: deploy
  image: registry.gitlab.com/MY_PROJECT/frontend:deploy-image
#  services:
#    - docker:dind
  before_script:
    - OVPN_OTP_PASS=$(python3 -c 'import pyotp; print(pyotp.TOTP("'$OVPN_PASS'").now())');
    - OVPN_OTP_PASS="$OVPN_USER_CODE$OVPN_OTP_PASS";

    ##
    ## VPN
    ## Content from Variables to files: https://stackoverflow.com/a/49418265/4396362
    ## Waiting for openvpn connect would be better than sleeping,
    ## the closest would be https://askubuntu.com/questions/28733/how-do-i-run-a-script-after-openvpn-has-connected-successfully
    ## Maybe this would work https://unix.stackexchange.com/questions/403202/create-bash-script-to-wait-and-then-run
    ##
    - cat <<< $OVPN_CONFIG > /etc/openvpn/client.conf # Move vpn config from gitlab variable to config file.
    - cat <<< $OVPN_USER > /etc/openvpn/pass.txt # Move vpn user from gitlab variable to pass file.
    - cat <<< $OVPN_OTP_PASS >> /etc/openvpn/pass.txt # Move vpn password from gitlab variable to pass file.
    - cat <<< "auth-user-pass /etc/openvpn/pass.txt" >> /etc/openvpn/client.conf # Tell vpn config to use password file.
    - cat <<< "log /etc/openvpn/client.log" >> /etc/openvpn/client.conf # Tell vpn config to use log file.
    - openvpn --config /etc/openvpn/client.conf --daemon # Start openvpn with the config as a daemon.
    - sleep 30s # Wait for some time so the vpn can connect before doing anything else.
    - cat /etc/openvpn/client.log # Print the vpn log.
    - ping -c 1 $TARGET_SERVER # Ping the server I want to deploy to. If not available this stops the deployment process.

    ##
    ## SSH
    ## Inspiration for gitlab from https://docs.gitlab.com/ee/ci/ssh_keys/
    ## Inspiration for new key from https://www.thomas-krenn.com/de/wiki/OpenSSH_Public_Key_Authentifizierung_unter_Ubuntu
    ##
    - eval $(ssh-agent -s) # Run ssh-agent.
    - mkdir -p ~/.ssh # Create ssh directory.
    - cat <<< $SSH_PRIVATE_KEY > ~/.ssh/id_rsa # Move ssh key from gitlab variable to file.
    - chmod 600 ~/.ssh/id_rsa  # Set permissions so only I am allowed to access my ssh key.
    - ssh-add # Add the key (no params -> default file name assumed).
    - cat <<< $SSH_KNOWN_HOSTS > ~/.ssh/known_hosts # Add the server's SSH key to known_hosts.

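As the comments in the VPN block note, waiting for the OpenVPN connection would be better than a fixed sleep 30s. A small polling helper is enough (a sketch; "Initialization Sequence Completed" is the line OpenVPN logs once the tunnel is up):

```shell
# Poll the OpenVPN client log instead of sleeping a fixed 30 seconds (sketch).
wait_for_vpn() {
  local log=$1 tries=${2:-30}
  local i
  for ((i = 0; i < tries; i++)); do
    # -s: no error if the log file does not exist yet
    grep -qs 'Initialization Sequence Completed' "$log" && return 0
    sleep 1
  done
  return 1
}

# usage: wait_for_vpn /etc/openvpn/client.log || { cat /etc/openvpn/client.log; exit 1; }
```
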
Deploying

Breaking it down:

  1. set variables
    1. service name, based on the project name and the commit ref slug; the slug is basically the branch or tag name, cleaned up to be URL-safe
  2. SSH into the target server
  3. pull the docker image
  4. check if the container already exists, if so stop and delete it
  5. create the new container

review:
  extends: .auto-deploy
  stage: review
  script:
    - |
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi      
    - |
      [[ "$TRACE" ]] && set -x
      export INGRESS_DOMAIN="$CI_COMMIT_REF_SLUG.staging.$INGRESS_BASE_DOMAIN"
      export SERVICE_NAME="$CI_PROJECT_NAME-$CI_COMMIT_REF_SLUG"

      ssh $SSH_USER@${TARGET_SERVER} << EOF
        source ~/.bashrc
        set -x

        docker pull ${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}:$CI_APPLICATION_TAG;
        if [ ! "\$(docker ps -q -f name=$SERVICE_NAME)" ]; then
          echo "Service $SERVICE_NAME not found, creating it";
        else
          docker stop $SERVICE_NAME
          if [ ! "\$(docker ps -aq -f status=exited -f name=$SERVICE_NAME)" ]; then
            echo "unable to stop service: $SERVICE_NAME";
            docker kill $SERVICE_NAME;
          fi
          docker rm $SERVICE_NAME
        fi
        docker run -d \
          --name $SERVICE_NAME \
          --network="traefik-proxy" \
          --env WEBPACK_WATCH=0 \
          --label "traefik.enable=true" \
          --label "traefik.docker.network=traefik-proxy" \
          --label "traefik.http.routers.$SERVICE_NAME.rule=Host(\\\`$INGRESS_DOMAIN\\\`)" \
          --label "traefik.http.routers.$SERVICE_NAME.entrypoints=http" \
          --label "traefik.http.routers.$SERVICE_NAME.middlewares=$SERVICE_NAME-https" \
          --label "traefik.http.routers.$SERVICE_NAME-https.rule=Host(\\\`$INGRESS_DOMAIN\\\`)" \
          --label "traefik.http.routers.$SERVICE_NAME-https.entrypoints=https" \
          --label "traefik.http.routers.$SERVICE_NAME-https.tls=true" \
          --label "traefik.http.routers.$SERVICE_NAME-https.tls.certresolver=myresolver" \
          --label "traefik.http.routers.$SERVICE_NAME-https.middlewares=$SERVICE_NAME-auth" \
          --label "traefik.http.middlewares.$SERVICE_NAME-https.redirectscheme.scheme=https" \
          --label "traefik.http.middlewares.$SERVICE_NAME-auth.basicauth.users=admin:\\\$apr1\\\$003i2fSV\\\$hTJe8DY5Pq.6XXXXXXXw." \
          --label "traefik.http.services.$SERVICE_NAME.loadbalancer.server.port=3004" \
          ${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}:$CI_APPLICATION_TAG;
      EOF
      echo "https://$INGRESS_DOMAIN" > environment_url.txt      
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: https://$CI_COMMIT_REF_SLUG.staging.$INGRESS_BASE_DOMAIN
    on_stop: stop_review
  artifacts:
    paths: [environment_url.txt]
    when: always
  rules:
    - if: '$CI_KUBERNETES_ACTIVE != null && $CI_KUBERNETES_ACTIVE != ""'
      when: never
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: never
    - if: '$REVIEW_DISABLED'
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'

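A note on the escaping inside the ssh << EOF block above: because the heredoc delimiter is unquoted, plain $VARS are expanded by the CI runner before the script is sent, while \$ defers expansion to the remote shell (the \\\` dance does the same for the backticks in the Traefik rules). A minimal demonstration, using a local bash as a stand-in for the remote shell:

```shell
# Unquoted heredoc: $NAME expands locally, \$NAME survives to the receiving shell.
NAME="local"
out=$(bash << EOF
NAME="remote"
echo "$NAME \$NAME"
EOF
)
echo "$out"  # -> local remote
```
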
Stopping a deployment

Much like the deployment step, but rather than creating a new container we simply stop and delete the existing one.

stop_review:
  extends: .auto-deploy
  stage: cleanup
  variables:
    GIT_STRATEGY: none
  script:
    - |
      if [[ -z "$CI_COMMIT_TAG" ]]; then
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
      else
        export CI_APPLICATION_REPOSITORY=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}
        export CI_APPLICATION_TAG=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
      fi      
    - |
      [[ "$TRACE" ]] && set -x
      # CI_COMMIT_REF_SLUG covers both branches and tags and matches the name used at deploy time.
      export SERVICE_NAME="$CI_PROJECT_NAME-$CI_COMMIT_REF_SLUG"

      ssh $SSH_USER@$TARGET_SERVER << EOF
        source ~/.bashrc
        set -x

        docker pull ${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}:$CI_APPLICATION_TAG;
        if [ ! "\$(docker ps -q -f name=$SERVICE_NAME)" ]; then
          echo "Service $SERVICE_NAME not found...";
        else
          docker stop $SERVICE_NAME
          if [ ! "\$(docker ps -aq -f status=exited -f name=$SERVICE_NAME)" ]; then
            echo "unable to stop service: $SERVICE_NAME";
            docker kill $SERVICE_NAME;
          fi
          docker rm $SERVICE_NAME
        fi
      EOF      
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  allow_failure: true
  rules:
    - if: '$CI_KUBERNETES_ACTIVE != null && $CI_KUBERNETES_ACTIVE != ""'
      when: never
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      when: never
    - if: '$REVIEW_DISABLED'
      when: never
    - if: '$CI_COMMIT_TAG || $CI_COMMIT_BRANCH'
      when: manual
