A few months ago, I gave a talk at Nexus User Conference 2018 on how to build a fully automated CI/CD platform on AWS using Terraform, Packer, and Ansible.
The session illustrated how concepts like infrastructure as code, immutable infrastructure, serverless, cluster discovery, etc. can be used to build a highly available and cost-effective pipeline.
The platform has a Jenkins cluster with a dedicated Jenkins master and workers inside an auto-scaling group. Each push event to the code repository will trigger the Jenkins master, which will schedule a new build on one of the available slave nodes.
The slave nodes will be responsible for running the unit and pre-integration tests, building the Docker image, pushing the image to a private registry, and deploying a container based on that image to the Docker Swarm cluster. If you missed my talk, you can watch it on YouTube below.
In this post, I will walk through how to deploy the Jenkins cluster on AWS using the latest automation tools.
The cluster will be deployed into a VPC with two public and two private subnets across two availability zones. The stack will consist of an auto-scaling group of Jenkins workers in private subnets, and a private instance for the Jenkins master sitting behind an elastic load balancer.
To add or remove Jenkins workers on-demand, the CPU utilization of the ASG will be used to trigger a scale out (CPU > 80%) or scale in (CPU < 20%) event.
To get started, we will create two AMIs (Amazon Machine Images) for our instances. To do so, we will use Packer, which allows you to bake your own machine images.
The first AMI will be used to create the Jenkins master instance. The AMI uses the Amazon Linux Image as a base image, and for the provisioning part, it uses a simple shell script:
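The original template isn't embedded in this republication, but a minimal Packer template along these lines would do the job (the region, instance type, and script name here are placeholder assumptions):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "instance_type": "t2.medium",
      "source_ami_filter": {
        "filters": {
          "name": "amzn-ami-hvm-*-x86_64-gp2",
          "virtualization-type": "hvm"
        },
        "owners": ["amazon"],
        "most_recent": true
      },
      "ssh_username": "ec2-user",
      "ami_name": "jenkins-master-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "script": "./setup.sh",
      "execute_command": "sudo -E -S sh '{{ .Path }}'"
    }
  ]
}
```

The `source_ami_filter` block always resolves to the most recent Amazon Linux base image, so rebuilding the AMI also picks up the latest patched base.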
The shell script will be used to install the necessary dependencies, packages and security patches.
It will install the latest stable version of Jenkins and configure its settings:

- Create a Jenkins admin user.
- Create SSH, GitHub, and Docker registry credentials.
- Install all needed plugins (Pipeline, Git plugin, Multi-branch Project, etc.).
- Disable the remote CLI, JNLP, and unnecessary protocols.
- Enable CSRF (Cross-Site Request Forgery) protection.
- Install the Telegraf agent for collecting resource and Docker metrics.
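My full provisioning script isn't reproduced here, but a condensed sketch of the Jenkins installation part could look like this (the repository URLs are the official Jenkins stable ones; the `config/` directory of Groovy scripts is an assumption):

```bash
#!/bin/bash
# Apply security patches and install dependencies
yum update -y
yum install -y java-1.8.0-openjdk git

# Install the latest stable Jenkins from the official repository
wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io.key
yum install -y jenkins

# Apply the configuration steps listed above (admin user, credentials,
# plugins, CSRF protection) through Groovy init scripts, which Jenkins
# executes automatically on startup
mkdir -p /var/lib/jenkins/init.groovy.d
cp config/*.groovy /var/lib/jenkins/init.groovy.d/

chkconfig jenkins on
service jenkins start
```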
The second AMI will be used to create the Jenkins workers. Similarly to the first AMI, it will use the Amazon Linux Image as a base image and a script to provision the instance.
A Jenkins worker requires the Java JDK environment and Git to be installed. In addition, the Docker community edition (building Docker images) and a data collector (monitoring) will be installed.
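A sketch of the worker provisioning script, assuming an Amazon Linux base image (the Telegraf installation step is abbreviated to a comment):

```bash
#!/bin/bash
# Apply security patches and install the Jenkins worker prerequisites
yum update -y
yum install -y java-1.8.0-openjdk git

# Docker Community Edition, used to build Docker images on the worker
yum install -y docker
usermod -aG docker ec2-user
chkconfig docker on
service docker start

# Telegraf data collector for monitoring: install from the InfluxData
# repository, then enable the Docker input plugin in its configuration
```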
Now that our Packer template files are defined, issue the following commands to start baking the AMIs.
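Assuming the two templates are saved as `jenkins-master.json` and `jenkins-worker.json` (the file names are my assumption), the commands are:

```shell
packer validate jenkins-master.json
packer build jenkins-master.json

packer validate jenkins-worker.json
packer build jenkins-worker.json
```

`packer validate` catches template syntax errors before you pay for a build instance.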
Packer will launch a temporary EC2 instance from the base image specified in the template file, and provision the instance with the given shell script. Finally, it will create an image from the instance. The following is an example of the output:
Sign in to the AWS Management Console, navigate to the "EC2 Dashboard," and click on "AMIs"; the two new AMIs should be listed as below:
Now that our AMIs are ready to use, let's deploy our Jenkins cluster to AWS. To achieve that, we will use an infrastructure-as-code tool called Terraform, which allows you to describe your entire infrastructure in template files.
I have divided each component of my infrastructure into a template file. The following template file is responsible for creating an EC2 instance from the Jenkins master's AMI built earlier:
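The original gist isn't reproduced here; a sketch in recent Terraform syntax might look like the following, where the subnet, security group, and variable names are placeholders:

```hcl
resource "aws_instance" "jenkins_master" {
  ami                    = data.aws_ami.jenkins_master.id
  instance_type          = var.jenkins_master_instance_type
  key_name               = var.key_name
  subnet_id              = aws_subnet.private_1.id
  vpc_security_group_ids = [aws_security_group.jenkins_master_sg.id]

  root_block_device {
    volume_type = "gp2"
    volume_size = 30
  }

  tags = {
    Name = "jenkins_master"
  }
}
```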
Another template file is used to look up each AMI built with Packer:
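For example, `data "aws_ami"` blocks can pick up the most recent images matching the name prefixes used by Packer (the prefixes are an assumption):

```hcl
data "aws_ami" "jenkins_master" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["jenkins-master-*"]
  }
}

data "aws_ami" "jenkins_worker" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "name"
    values = ["jenkins-worker-*"]
  }
}
```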
The Jenkins workers (aka slaves) will be inside an auto-scaling group of at least three instances. The instances will be created from a launch configuration based on the Jenkins slave's AMI.
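A sketch of the launch configuration and auto-scaling group; the subnet and security group references, the instance-type variable, and the user-data script path are placeholders:

```hcl
resource "aws_launch_configuration" "jenkins_workers" {
  name_prefix     = "jenkins-workers-"
  image_id        = data.aws_ami.jenkins_worker.id
  instance_type   = var.jenkins_worker_instance_type
  key_name        = var.key_name
  security_groups = [aws_security_group.jenkins_workers_sg.id]
  user_data       = file("scripts/join-cluster.sh")

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "jenkins_workers" {
  name                 = "jenkins_workers_asg"
  launch_configuration = aws_launch_configuration.jenkins_workers.name
  vpc_zone_identifier  = [aws_subnet.private_1.id, aws_subnet.private_2.id]
  min_size             = 3
  max_size             = 10

  tag {
    key                 = "Name"
    value               = "jenkins_worker"
    propagate_at_launch = true
  }
}
```

The `create_before_destroy` lifecycle rule lets Terraform roll out a new launch configuration without first tearing down the one the ASG still references.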
To leverage the power of automation, we will automatically force the worker instance to join the cluster (cluster discovery) using Jenkins RESTful API.
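A sketch of such a user-data script, with the Jenkins URL, admin credentials, and SSH credential ID as placeholders baked in at image-build time (the exact Groovy constructors depend on the installed Jenkins and SSH Build Agents plugin versions):

```bash
#!/bin/bash
# Retrieve this worker's private IP from the EC2 instance metadata service
WORKER_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)

# Generate a Groovy script that registers this instance as a new node;
# the shell expands ${WORKER_IP} before Jenkins ever sees the script
cat > /tmp/node.groovy <<EOF
import jenkins.model.Jenkins
import hudson.slaves.DumbSlave
import hudson.plugins.sshslaves.SSHLauncher

DumbSlave node = new DumbSlave(
        "worker-${WORKER_IP}",
        "/home/ec2-user",
        new SSHLauncher("${WORKER_IP}", 22, "jenkins-ssh-credentials"))
node.setNumExecutors(3)
Jenkins.instance.addNode(node)
EOF

# Execute the script on the master through the Jenkins CLI
java -jar /opt/jenkins-cli.jar -s "${JENKINS_URL}" \
     -auth "${JENKINS_USER}:${JENKINS_PASSWORD}" groovy = < /tmp/node.groovy
```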
At boot time, the user-data script above will be invoked: it retrieves the instance's private IP address from the instance metadata, then executes a Groovy script that makes the node join the cluster.
Moreover, to scale out and scale in instances on demand, I have defined two CloudWatch metric alarms based on the CPU utilization of the auto-scaling group.
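The scale-out pair might look like this in Terraform; the scale-in pair mirrors it with `LessThanThreshold`, a threshold of 20, and a scaling adjustment of -1 (the alarm names and cooldown are assumptions):

```hcl
resource "aws_autoscaling_policy" "scale_out" {
  name                   = "jenkins-workers-scale-out"
  scaling_adjustment     = 1
  adjustment_type        = "ChangeInCapacity"
  cooldown               = 300
  autoscaling_group_name = aws_autoscaling_group.jenkins_workers.name
}

resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "jenkins-workers-high-cpu"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 120
  statistic           = "Average"
  threshold           = 80
  alarm_actions       = [aws_autoscaling_policy.scale_out.arn]

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.jenkins_workers.name
  }
}
```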
Finally, an Elastic Load Balancer will be created in front of the Jenkins master's instance, and a new DNS record pointing to the ELB domain will be added to Route 53.
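A sketch of those two resources; the hosted zone ID, record name, and listener ports are placeholder assumptions (Jenkins listens on 8080 by default):

```hcl
resource "aws_elb" "jenkins_elb" {
  subnets         = [aws_subnet.public_1.id, aws_subnet.public_2.id]
  security_groups = [aws_security_group.elb_sg.id]
  instances       = [aws_instance.jenkins_master.id]

  listener {
    instance_port     = 8080
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "TCP:8080"
    interval            = 30
  }
}

resource "aws_route53_record" "jenkins" {
  zone_id = var.hosted_zone_id
  name    = "jenkins.example.com"
  type    = "A"

  alias {
    name                   = aws_elb.jenkins_elb.dns_name
    zone_id                = aws_elb.jenkins_elb.zone_id
    evaluate_target_health = true
  }
}
```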
Once the stack is defined, provision the infrastructure with the `terraform apply` command.
The command takes an additional parameter: a variables file with the AWS credentials and VPC settings.
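Assuming the variables file is named `variables.tfvars` (the name is my assumption), that looks like:

```shell
terraform init
terraform apply -var-file=variables.tfvars
```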
Terraform will display an execution plan (the list of resources it will create). Type yes to confirm, and the stack will be created in a few minutes:
Jump back to the EC2 dashboard; you will see that the EC2 instances have been created:
In the terminal session, under the Outputs section, the Jenkins URL will be displayed:
Point your favorite browser to that URL, and the Jenkins login screen will appear. Sign in using the credentials provided while baking the Jenkins master's AMI:
If you click on "Credentials" from the navigation pane, a set of credentials should be created out of the box:
The same goes for "Plugins": the needed plugins will have been installed out of the box:
Once the auto-scaling group has finished creating the EC2 instances, they will join the cluster automatically, as you can see in the following screenshot:
You should now be ready to create your own CI/CD pipeline.
You can take this further and build a dynamic dashboard in your favorite visualization tool like Grafana to monitor your cluster resource usage based on the metrics collected by the agent installed on each EC2 instance:
This article originally appeared on A Cloud Guru and is republished here with permission from the author.
About the Author Mohamed Labouardy:
Mohamed is a Senior Software Engineer/DevOps - 3x AWS Certified - Scrum Master Certified - #Containers #Serverless #Gopher #Alexa #NLP #DistributedSystems #Android - Blogger & writer at Medium, DZone, Hackernoon & A Cloud Guru - Open Source Contributor (DialogFlow, Jenkins, Docker, Nexus, Telegraf...). - Author of multiple Open Source projects (Komiser, Nexus CLI, Butler, Swaggymnia...). You can connect directly with him on Twitter @mlabouardy.