The fifth of a 12-part series on how to use the Twelve-Factor App methodology in practice. I will be explaining and implementing https://12factor.net/build-release-run based on my experience with pipelines.
Build, release, run: pipelines
While the Twelve-Factor article is great in theory, it misses a few components we should talk about before putting it into practice. In my opinion, build, release, and run is only part of the story; it needs to be expanded upon.
In the real world we call this a pipeline: a set of stages, each with its own jobs, which may depend on each other.
We have two options here; which one is the way to go depends on your runtime:
- you’re building an app or service to publish
- you’re building a service to run
The frame of mind is the same but the output is different: build, test, deploy. If you create an app to publish, you will not be running it yourself, and your publish/release step may be PyPI. On the other hand, if you're creating a service that you actually run, an API for example, the publish/release step may target a server, a Docker environment, Kubernetes, etc.
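To make the publish flavour concrete: this job is not part of the original pipeline, but a release-to-PyPI job in GitLab CI (the CI system we'll use for all examples) could look roughly like this, assuming TWINE_USERNAME and TWINE_PASSWORD are configured as CI variables:

release:pypi:
  image: python:3.6
  stage: deploy
  script:
    - pip install wheel twine
    - python setup.py sdist bdist_wheel   # build source and wheel distributions
    - twine upload dist/*                 # twine reads TWINE_USERNAME/TWINE_PASSWORD from the environment
  only:
    - tags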
For this example we'll be using a Python service to demonstrate the implementation. Let's say we created a simple Python API using Flask, and we will be focusing on a version-based release rather than continuous deployment, where we would automagically release the latest stable build.
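For reference, a minimal sketch of such a service; the app.py name and the route are made up for illustration:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # trivial endpoint so the pipeline below has something to build and run
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)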
Let's start with the stages we'll be focusing on: build, test, deploy (staging and production), and cleanup.
Build
Following Python packaging conventions, our setup.py may look something like this:
import setuptools

with open("README.md", "r") as fh:
    long_description = fh.read()

setuptools.setup(
    name="example-pkg-your-username",
    version="0.0.1",
    author="Example Author",
    author_email="author@example.com",
    description="A small example package",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/pypi/sampleproject",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires='>=3.6',
)
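Before handing this to CI it's worth checking that the package builds locally; these are standard setuptools/pip commands, nothing project-specific:

pip install -e .                    # editable install for local development
pip install wheel
python setup.py sdist bdist_wheel   # produces dist/*.tar.gz and dist/*.whl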
And our .gitlab-ci.yml will contain this:
build:
  image: python:3.6
  stage: build
  cache:
    paths:
      - .cache/pip
  script:
    - pip install -r requirements.txt
    - python setup.py build
Alternatively, if we want to package it into a Docker image and push it to a registry, the script above would move into the Dockerfile (a minimal sketch follows the stage below) and the build stage would look more like this:
build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    # $CI_REGISTRY_IMAGE already contains the registry host, so it is not prefixed with $CI_REGISTRY
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
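The Dockerfile itself is not shown in the original; a minimal sketch for our Flask service could look like this (the app.py entry point is the hypothetical one from earlier):

FROM python:3.6
WORKDIR /app
COPY requirements.txt .
# the pip install from the earlier build job moves in here
RUN pip install -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]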
Test
Building on the previous step, this one expands the pipeline and runs multiple jobs in the test stage. Since our app is made for Python >= 3.6, we test against multiple versions of Python to verify that this is actually true. In addition to the regular unittest run we use coverage as well, to get a sense of how much code is actually tested. This does NOT tell you anything about the quality of your unit tests, but it can give an indication of what is tested and what is not.
test:3.6:
  image: python:3.6
  stage: test
  cache:
    paths:
      - .cache/pip
  script:
    - pip install -r requirements.txt
    - pip install coverage
    - python setup.py test
    - coverage run -m unittest discover
    - coverage report
test:3.7:
  image: python:3.7
  stage: test
  cache:
    paths:
      - .cache/pip
  script:
    - pip install -r requirements.txt
    - pip install coverage
    - python setup.py test
    - coverage run -m unittest discover
    - coverage report
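The two jobs above are identical except for the image. Not part of the original, but GitLab's extends keyword could factor out the duplication; a sketch:

.test-template:
  stage: test
  cache:
    paths:
      - .cache/pip
  script:
    - pip install -r requirements.txt
    - pip install coverage
    - python setup.py test
    - coverage run -m unittest discover
    - coverage report

test:3.6:
  extends: .test-template
  image: python:3.6

test:3.7:
  extends: .test-template
  image: python:3.7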
Staging and Production
The way I usually set these things up is by "binding" staging to the master branch and production to specific version tags.
If we deploy to a PaaS like Heroku we can use the following as an example; dpl is a deployment tool, written in Ruby, that supports Heroku among other providers.
staging:
  stage: deploy
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=our-example-app-staging --api-key=$HEROKU_STAGING_API_KEY
  only:
    - master

production:
  stage: deploy
  when: manual
  script:
    - apt-get update -qy
    - apt-get install -y ruby-dev
    - gem install dpl
    - dpl --provider=heroku --app=our-example-app-tags --api-key=$HEROKU_PRODUCTION_API_KEY
  only:
    - tags
In the YAML you'll see $HEROKU_STAGING_API_KEY and $HEROKU_PRODUCTION_API_KEY being used; these can (and should) be provided via Settings -> CI/CD -> Variables. To prevent the keys from showing up in the build logs, the entries should be marked as masked.
when: manual allows us to push to production by hand: it gives us a button within the pipeline to trigger the job.
Docker and Kubernetes
The alternative to releasing to a PaaS may be Docker/Kubernetes; to do this we first need to build the images. We split this up into three parts: as before, git master maps to latest, release tags map to their tag, and everything else is tagged with the commit id. Each job below has either an except or an only to specify when it runs, and all three build jobs live in the newly created build-docker stage.
stages:
  - build
  - test
  - build-docker
  - deploy
  - cleanup   # used by the cleanup job further down

build-docker:
  image: docker:latest
  stage: build-docker
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    # tag the same build with both the branch slug and the short commit sha, then push both
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  except:
    - master
    - tags
build-docker:master:
  image: docker:latest
  stage: build-docker
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    # master builds are tagged both latest and with the short commit sha
    - docker build --pull -t "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  only:
    - master
build-docker:tags:
  image: docker:latest
  stage: build-docker
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    # release builds are tagged with the git tag itself
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  only:
    - tags
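As an aside, and not part of the original setup: newer GitLab releases offer rules as a more expressive alternative to only/except. The first build-docker job's condition could, for example, be written as:

build-docker:
  # ...image, services, before_script and script as above...
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
      when: never
    - if: '$CI_COMMIT_TAG'
      when: never
    - when: on_success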
staging:
  image: bitnami/kubectl:latest
  stage: deploy
  script:
    - kubectl config set-cluster k8s --server="https://kubernetes.example.svc"
    - kubectl config set-credentials gitlab --token="${USER_TOKEN}"
    # the context has to be created before it can be used
    - kubectl config set-context default --cluster=k8s --user=gitlab
    - kubectl config use-context default
    - |
      kubectl set image deployment my-app \
        web=${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA} \
        --namespace staging
  environment:
    name: staging
    url: https://staging.example.com
  only:
    refs:
      - master
    kubernetes: active
production:
  image: bitnami/kubectl:latest
  stage: deploy
  when: manual
  script:
    - kubectl config set-cluster k8s --server="https://kubernetes.example.svc"
    - kubectl config set-credentials gitlab --token="${USER_TOKEN}"
    - kubectl config set-context default --cluster=k8s --user=gitlab
    - kubectl config use-context default
    - |
      kubectl set image deployment my-app \
        web=${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG} \
        --namespace default
  environment:
    name: production
    url: https://www.example.com
  only:
    refs:
      - tags
    kubernetes: active
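One caveat, not addressed in the original: kubectl set image returns as soon as the deployment spec is updated, so the job can pass while the rollout itself fails. Appending a wait to the script (the my-app name matches the deployment above) ties the job result to the actual rollout:

    - kubectl rollout status deployment my-app --namespace default --timeout=120s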
Cleanup
The cleanup stage may look something like the job below (note the cleanup entry added to the stages list earlier); it's usually used to delete old versions or to stop staging and other dynamic environments.
cleanup:
  stage: cleanup
  variables:
    GIT_STRATEGY: none
  script:
    # a job needs a script; the actual teardown commands would go here
    - echo "Stopping environment ${CI_COMMIT_REF_NAME}"
  environment:
    name: ${CI_COMMIT_REF_NAME}
    action: stop
  when: manual
  allow_failure: true
  only:
    refs:
      - branches
    kubernetes: active
  except:
    - master
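Not in the original, but for the stop job to be wired up automatically, the deploying job should point at it via on_stop, and the environment names have to match. A hypothetical review-app job pairing with the cleanup job above:

review:
  stage: deploy
  script:
    - echo "deploy review app for ${CI_COMMIT_REF_NAME}"   # placeholder deploy command
  environment:
    name: ${CI_COMMIT_REF_NAME}
    url: https://${CI_COMMIT_REF_SLUG}.staging.example.com
    on_stop: cleanup
  only:
    - branches
  except:
    - master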