Achieving CI/CD with Kubernetes

By Ramit Surana

Hola, amigos (in English: Hello, friends)! Hope you are having a jolly good day. Continuous integration/delivery (CI/CD) is best described in Martin Fowler's terms. According to him, "Continuous integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly."

In this article, we will discuss and explore two amazing and interesting pieces of technology: Jenkins, a popular continuous integration/deployment tool, and Kubernetes, a popular orchestration engine for containers. As an added bonus, we will also discover fabric8, an awesome microservices platform built on both. Let's get started.

WARNING: Your machine may slow down or hang several times while performing the steps below, so choose a PC with a high-end configuration (plenty of CPU and RAM).

Methodology

There are many methodologies we can use to achieve CI/CD on our machine.

In this article, we focus on two of them: the Kubernetes plugin for Jenkins, and the fabric8 microservices platform.

Overview of Architecture

Before starting our work, let's take a moment to analyze the workflow required to start using Kubernetes containers with Jenkins. Kubernetes is an amazing orchestration engine for containers, developed by an amazing open source community. The fact that Google started Kubernetes gives it a big advantage in working with multiple open source container projects. By default, Docker is the container runtime that is supported and used most with Kubernetes.

The workflow is similar when using rkt containers, a.k.a. rktnetes.

Currently, there is no Jenkins plugin for rkt containers, but I assume the workflow will remain the same after its due implementation.

Kubernetes-Jenkins Plugin

Setting up Kubernetes on Host Machine

Setting up Kubernetes on your host machine is an easy task. If you wish to try it out on your local machine, I would recommend minikube. Here is a quick guide to get you started with setting up minikube on your local machine:

# Ensure kubectl is installed first; to download it, visit:
# http://kubernetes.io/docs/getting-started-guides/binary_release/

# For any driver prerequisites, visit:
# https://github.com/kubernetes/minikube/blob/master/DRIVERS.md

# Download and install minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.7.1/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
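
With the binary in place, you can bring up a local cluster and confirm that kubectl can talk to it. Here is a minimal sketch; the driver flag is only an example and depends on your setup:

# Start a local single-node cluster (add --vm-driver=virtualbox or another driver if needed)
$ minikube start

# Verify that kubectl is pointed at the minikube cluster
$ kubectl cluster-info
$ kubectl get nodes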

Some amazing work on using Jenkins with Kubernetes has been done by carlossg, who has built an awesome Kubernetes plugin for Jenkins. Using this plugin, you can easily start using Jenkins with Kubernetes directly. To give users an easier way to configure it, he has also built a Jenkins image that contains the Kubernetes plugin by default; it is available on Docker Hub. In the next steps, we will fetch this image from Docker Hub and create a volume at /var/jenkins_home for storing all your Jenkins data.

One Problem

Although we are doing everything as planned, we will still run into a problem. You will notice that whenever you remove your Jenkins container and start a new one, all your data is lost: the jobs you created, the plugins you installed, and so on. This is one of the common problems with containers. Let's discuss it in depth.

A Word About Data Containers

Data is a tricky concept when it comes to containers. Containers by themselves are not a good way of keeping data secure and available all the time; in the past, there have been many incidents where containers were seen to leak data. There are several ways to deal with this problem. One is to use Docker volumes, but I did not find plain volumes convenient for persistent storage. The approach I used instead is to create another container, called a data container, and use it as the place where data is stored, instead of depending on only one image. Here's a simple figure showing how we plan to use the data container to ensure the reliability of our data.

Here are the steps to start using the jenkins-kubernetes image:

# Pull down the jenkins-kubernetes image
$ docker pull csanchez/jenkins-kubernetes

# Create a data container named jenkins-k8s from the csanchez/jenkins-kubernetes image
$ docker create --name jenkins-k8s csanchez/jenkins-kubernetes

The above command creates a container called jenkins-k8s that will hold the Jenkins data; we will reuse it whenever we run the Jenkins container so that the volume persists.

# Run Jenkins using the /var/jenkins_home volume from the data container
$ docker run --volumes-from jenkins-k8s -p 8080:8080 -p 50000:50000 csanchez/jenkins-kubernetes
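
To convince yourself that the data really persists, you can remove the running Jenkins container and start a fresh one against the same data container; the jobs and plugins you configured should still be there. A quick sketch, where the container name jenkins-master is just an example:

# Run Jenkins under a name so it is easy to remove later (the name is arbitrary)
$ docker run -d --name jenkins-master --volumes-from jenkins-k8s -p 8080:8080 -p 50000:50000 csanchez/jenkins-kubernetes

# Remove the Jenkins container, keeping the jenkins-k8s data container
$ docker rm -f jenkins-master

# Start a new Jenkins container; it picks up /var/jenkins_home from jenkins-k8s again
$ docker run -d --name jenkins-master --volumes-from jenkins-k8s -p 8080:8080 -p 50000:50000 csanchez/jenkins-kubernetes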

Configuring Settings for Kubernetes Over Jenkins

This Jenkins image comes pre-configured with the Kubernetes plugin, so let's jump to the next step. In the Jenkins GUI, go to Manage Jenkins -> Configure System -> Cloud -> Add a new Cloud -> Kubernetes.
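
The exact values you enter here (Kubernetes URL, credentials, Jenkins URL) depend on your environment; with minikube you can look most of them up from the command line. For example:

# API server address to paste into the Kubernetes URL field
$ kubectl cluster-info

# IP of the minikube VM, useful for building a Jenkins URL that the slave pods can reach
$ minikube ip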

If you wish to use Jenkins slaves, you can use the jnlp-slave image on Docker Hub. This is a simple image that is used as the slave node (pod) template for you.
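
If you want the slave image cached before the first build, you can pull it ahead of time on the Docker daemon your cluster uses; jenkinsci/jnlp-slave is assumed here to be the image configured in the pod template:

# Pre-pull the JNLP slave image used by the Kubernetes pod template
$ docker pull jenkinsci/jnlp-slave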

To use the Jenkins slave at build time, open the configure settings of your job while creating it.

Put the name of the label you gave the Kubernetes pod template in the "Restrict where this project can be run" section, then save and apply the settings for your new job. When you build the job, you should see the slave node running. That's all, folks. You are ready to go, and you can now add more plugins as per your needs.

Fabric8

Fabric8 is an open source microservices platform based on Docker, Kubernetes and Jenkins, built by the folks at Red Hat. The purpose of the project is to make it easy to create microservices, build, test and deploy them via Continuous Delivery pipelines, and then run and manage them with Continuous Improvement and ChatOps.

Fabric8 automatically installs and configures the required components for you, including Jenkins and the fabric8 console.

To get started, first you need to install the command line tool for fabric8, i.e. gofabric8. Download it from https://github.com/fabric8io/gofabric8/releases, extract it, and copy the binary onto your PATH:

$ sudo cp gofabric8 /usr/local/bin/

You can check the installation by running `gofabric8` in your terminal. Now run the commands below. First, deploy fabric8:

$ gofabric8 deploy -y

Then generate the secrets:

$ gofabric8 secrets -y

Then check the status of the pods using kubectl:

$ kubectl get pods

It will take a while for all the container images to be pulled down and started, so you can go out and drink a coffee :) You can use kubectl describe pods to check whether something has failed.
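
For example, you can watch the pods come up and then inspect any pod that gets stuck; the pod name below is a placeholder, so use a name from the kubectl get pods output:

# Watch pod status update until everything is Running
$ kubectl get pods -w

# Inspect the events of a pod that is stuck (replace <pod-name> with a real name)
$ kubectl describe pod <pod-name>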

You can check the status of your pods by opening the Kubernetes dashboard in a browser:

http://192.168.99.100:30000

Similarly, you can open the fabric8 hawtio interface in your browser.
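
If you are not sure which URL to open, you can look up the node IP and the NodePort of the console service yourself. Here is a sketch that assumes the fabric8 console service is named fabric8:

# IP of the minikube VM
$ minikube ip

# NodePort assigned to the fabric8 console service (service name assumed to be "fabric8")
$ kubectl get svc fabric8

# Open http://<minikube-ip>:<node-port> in your browser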

From my analysis, here's a depiction of what happened when you ran the above commands; below is a simple workflow diagram for the same.

Achieving CI/CD

Easier said than done: setting up Jenkins and integrating Kubernetes is only part of the story, while achieving Continuous Delivery with that setup is a very different and more complex part. Here are some tips on plugins that could help you achieve Continuous Delivery with Jenkins:

  • Pipeline Plugin

This is a core plugin built by the Jenkins community. It lets you integrate almost any orchestration engine with your environment with very little complexity. I believe it was started because different communities were building different plugins for various engines and all of them had to depend on the Jenkins UI to do so. Using this plugin, users can now directly implement their project's entire build/test/deploy pipeline in a Jenkinsfile and store it alongside their code, treating the pipeline as another piece of code checked into source control.
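
As a small illustration of the idea, here is a sketch that creates a minimal scripted Jenkinsfile at the root of a project; the stage names and shell commands are placeholders for your own build and test steps:

# Write a minimal scripted pipeline into the repository root (contents are placeholders)
$ cat > Jenkinsfile <<'EOF'
node {
    stage('Checkout') {
        checkout scm
    }
    stage('Build') {
        sh 'make'
    }
    stage('Test') {
        sh 'make test'
    }
}
EOF

# Commit it alongside the code so the pipeline is versioned with the project
$ git add Jenkinsfile && git commit -m "Add Jenkinsfile"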

  • GitHub Plugin

These days, most companies use GitHub as an SCM tool. In that case, I would recommend this plugin. It helps you pull code from GitHub and analyze and test it on Jenkins. For authentication purposes, I would recommend you look at the GitHub OAuth Plugin.

  • Docker Plugin

For the Docker folks, this is one of the most suitable plugins; it helps you do almost everything with Docker, including using Docker containers as slaves. There are several other Docker plugins you can switch to depending on your needs and usage.

  • AWS CodePipeline

The AWS folks have introduced an awesome service called CodePipeline. This service helps you achieve continuous delivery with your AWS setup. Currently, the corresponding Jenkins plugin is under heavy development and might not be suitable for production environments. Also check out AWS CodeCommit.

  • OpenStack

For OpenStack users, this plugin is suitable for connecting Jenkins to your OpenStack environment and configuring its settings.

  • Google Cloud Platform

Deployment Manager is a service from Google Cloud Platform. Using Deployment Manager, you can create flexible, declarative templates that deploy various Cloud Platform services, such as Google Cloud Storage, Google Compute Engine and Google Cloud SQL, and leave it to Deployment Manager to manage the resources defined in your templates as deployments. The corresponding plugin is new, but I think it is worth a try if you want to automate and sort things out with Google Cloud Platform.
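
For instance, deployments are created from a config file with the gcloud command line tool; here is a sketch, where config.yaml is a hypothetical file describing your resources:

# Create a deployment from a (hypothetical) config file describing your resources
$ gcloud deployment-manager deployments create my-deployment --config config.yaml

# Check what was created
$ gcloud deployment-manager deployments describe my-deployment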

In the end, I hope you enjoyed reading this article. Please let me know your valuable thoughts in the comments section below. I have posted my slides for this blog post below. Hope you had a fun time experimenting, and have a lovely day.
