Concepts: The 4 Stages of the CI/CD Pipeline

Consider a use case in which an organization is building a product with multiple remote teams working on different micro-services. Each team has its own service roadmap and delivery plan. For the integration to lead to a robust delivery, a CI/CD pipeline is a must. With a CI/CD pipeline, whenever the development team makes a change in the code, the automation in the pipeline triggers compilation, building, and deployment to the various environments.

The CI/CD pipeline can be broken down into four stages of the application life cycle. Each stage plays a key role in CI/CD, and the tools available at each stage help achieve consistency, speed, and quality of delivery. The four stages are Source, Build, Testing, and Deployment.

1. Source

An organization stores its application code in a centralized repository system to support versioning, change tracking, collaboration, and auditing, as well as to ensure security and maintain control of the source code. A key element in this stage is automation of the version control, which we can consider the baseline unit of the CI phase. Automation involves monitoring the version control system for changes and triggering events, such as code compilation and testing.

Let’s say you are working on a Git repository along with other team members. Whenever a change is committed to the repository, a Git webhook notifies the Jenkins job that compiles the code and runs the unit tests. If compilation fails or a test case fails, an email is automatically sent to the entire developer group.
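
A minimal sketch of such a trigger, assuming a small Flask service receives the Git webhook and starts a hypothetical Jenkins job through Jenkins’ remote build API (the server URL, job name, and credentials are placeholders; in practice the SCM often calls Jenkins’ own webhook endpoint directly):

# Sketch: receive a Git push webhook and trigger a Jenkins build job.
# JENKINS_URL, JOB_NAME, and AUTH are hypothetical placeholders.
import requests
from flask import Flask, request

app = Flask(__name__)

JENKINS_URL = "https://jenkins.example.com"
JOB_NAME = "compile-and-unit-test"
AUTH = ("ci-bot", "jenkins-api-token")

@app.route("/git-webhook", methods=["POST"])
def on_push():
    payload = request.get_json(silent=True) or {}
    branch = payload.get("ref", "unknown")
    # Kick off the Jenkins job; Jenkins compiles the code and runs the unit tests.
    resp = requests.post(f"{JENKINS_URL}/job/{JOB_NAME}/build", auth=AUTH)
    resp.raise_for_status()
    return {"triggered": JOB_NAME, "branch": branch}, 202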

2. Build

This is the key stage of application development and, when completely automated, it allows the dev team to build and test their release multiple times a day. This stage includes compilation, packaging, and running the automated tests. Build automation utilities such as Make, Rake, Ant, Maven, and Gradle are used to generate the build artifacts.

The build artifacts can then be stored in an artifact repository and deployed to the environments. Artifact repository solutions such as JFrog Artifactory are used to store and manage the build artifacts. The main advantage of using an artifact repository is that it makes it possible to revert to a previous version of the build, if that’s ever necessary. Highly available cloud storage services such as AWS S3 can also be used to store and manage build artifacts. Teams running their build services on AWS should consider AWS CodeBuild. Jenkins, one of the most popular open-source tools, can be used to coordinate the build process.
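
As a rough sketch of the artifact step, the snippet below publishes a versioned build artifact to S3 using boto3; the bucket name and key layout are hypothetical:

# Sketch: store each build under a versioned key in S3 so any previous
# build can be retrieved and redeployed later. Bucket and paths are placeholders.
import boto3

def publish_artifact(local_path: str, version: str) -> str:
    s3 = boto3.client("s3")
    key = f"myapp/{version}/myapp.jar"
    s3.upload_file(local_path, "my-build-artifacts", key)
    return f"s3://my-build-artifacts/{key}"

# e.g. publish_artifact("target/myapp.jar", "1.4.2")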

3. Testing

Automated tests play an important role in any application development-deployment cycle. The automated tests required can be broken down into three separate categories:

  • Unit test: Developers subdivide an application into small units of code and test each unit in isolation. These tests should be part of the build process and can be automated with tools like JUnit (see the sketch after this list).
  • Integration test: In a world of micro-services and distributed applications, it is important to verify that separate components work together once the different modules of an application are integrated. This stage may involve testing of APIs, integration with a database, or other services. These tests are generally part of the deployment and release process.
  • Functional test: This is end-to-end testing of the application or product, generally performed on staging servers as part of the release process. It can be automated with tools like Selenium to run efficiently across different web browsers.
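
To make the unit-test category concrete, here is a minimal sketch in Python using pytest (the document names JUnit as the Java equivalent); discount() is a hypothetical function under test:

# Sketch: unit tests that the build stage could run automatically (pytest style).
# discount() is a hypothetical function under test.

def discount(price: float, percent: float) -> float:
    """Apply a percentage discount, never going below zero."""
    return max(price - price * percent / 100.0, 0.0)

def test_discount_applies_percentage():
    assert discount(100.0, 10.0) == 90.0

def test_discount_never_goes_negative():
    assert discount(10.0, 200.0) == 0.0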

To streamline testing, frameworks such as JMeter and Selenium can easily be integrated with Jenkins to automate functional testing as part of the end-to-end suite.
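
As an illustration, here is a minimal Selenium sketch of a functional test; the staging URL and element IDs refer to a hypothetical login page:

# Sketch: an automated end-to-end test driving a real browser with Selenium.
# The staging URL and element IDs are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_login_reaches_dashboard():
    driver = webdriver.Chrome()  # could equally be Firefox, a headless browser, etc.
    try:
        driver.get("https://staging.example.com/login")
        driver.find_element(By.ID, "username").send_keys("demo-user")
        driver.find_element(By.ID, "password").send_keys("demo-pass")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title
    finally:
        driver.quit()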

4. Deployment

Once the build is done and the automated tests for the release candidate have completed, the last stage of the pipeline is automated deployment of the new code to the next environment in the pipeline. In this stage, the tested code is deployed to a staging or production environment. If the new release passes all the tests at each stage, it can be moved to the production environment. There are various strategies for deploying to a production environment, such as blue-green deployment, canary deployment, and in-place deployment:

  • In a blue-green deployment, two production environments run in parallel. The new “green” version of the application or service is provisioned alongside the last stable “blue” version, which keeps running on separate infrastructure. Once the green environment is tested and ready, traffic is switched over and the blue environment can be de-provisioned (a traffic-shift sketch follows this list).
  • In a canary deployment, the new version is first deployed to a small subset of nodes and, once it has been validated on those nodes, it is rolled out to all the nodes.
  • In-place deployment deploys the code directly to all the live nodes and might incur downtime; however, with rolling updates, you can reduce or even eliminate the downtime.
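
As one way to picture blue-green and canary releases, the sketch below shifts traffic between a “blue” and a “green” target group behind an AWS Application Load Balancer using boto3. The listener and target group ARNs are placeholders; a small weight gives a canary, and a weight of 100 completes a blue-green cutover:

# Sketch: weighted traffic shifting between blue and green target groups
# on an AWS Application Load Balancer. All ARNs are placeholders.
import boto3

def shift_traffic(listener_arn, blue_tg_arn, green_tg_arn, green_weight):
    elbv2 = boto3.client("elbv2")
    elbv2.modify_listener(
        ListenerArn=listener_arn,
        DefaultActions=[{
            "Type": "forward",
            "ForwardConfig": {
                "TargetGroups": [
                    {"TargetGroupArn": blue_tg_arn, "Weight": 100 - green_weight},
                    {"TargetGroupArn": green_tg_arn, "Weight": green_weight},
                ]
            },
        }],
    )

# shift_traffic(listener, blue, green, 10)   # canary: 10% of traffic to green
# shift_traffic(listener, blue, green, 100)  # blue-green: full cutover to green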

In addition to the deployment itself, there are three key automation elements that need to be considered:

  • Cloud infrastructure provisioning: Infrastructure management tools like AWS CloudFormation, AWS OpsWorks Stacks, and Terraform help in creating templates and cloning the entire underlying application stack, including compute, storage, and network, from one environment to another in a matter of a few clicks or API calls (see the provisioning sketch after this list).
  • Configuration management: Automation using tools such as Chef, Puppet, and AWS OpsWorks can ensure configuration files are in the desired state, including OS-level configuration parameters (file permissions, environment variables, etc.). Over the years these tools have also evolved to automate the whole flow, including code flows and deployment as well as resource orchestration at the infrastructure level.
  • Containerization and orchestration: Gone are the days when people had to wait for server bootstrapping to deploy a new version of code. Containers such as Docker are used to package and scale specific services and include everything required to run them. This packaging supports the isolation required between staging and production and, together with orchestration tools such as Kubernetes, helps automate deployment while reducing the risk of moving code across the different environments.
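
As a sketch of the provisioning element, the snippet below clones an environment from a CloudFormation template with a single API call; the stack name, template URL, and parameters are placeholders for a hypothetical application stack:

# Sketch: provision an environment (compute, storage, network) from a
# CloudFormation template. Stack name, template URL, and parameters are hypothetical.
import boto3

def create_environment(env_name: str):
    cfn = boto3.client("cloudformation")
    stack_name = f"myapp-{env_name}"
    cfn.create_stack(
        StackName=stack_name,
        TemplateURL="https://s3.amazonaws.com/my-templates/myapp-stack.yaml",
        Parameters=[{"ParameterKey": "Environment", "ParameterValue": env_name}],
        Capabilities=["CAPABILITY_IAM"],  # only needed if the template creates IAM resources
    )
    # Block until the whole stack has been created.
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)

# e.g. create_environment("staging"), then create_environment("production")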

Now that we have covered all stages of the pipeline, let’s look at a real-life practical use case.

A Pipeline Use Case: 5 Steps

This use case walks through a build pipeline for deploying a new version of code. The CI and CD processes are implemented using tools such as Jenkins, an SCM tool (Git or SVN, for example), JFrog Artifactory, and AWS CodeDeploy.

The pipeline is triggered as soon as someone commits to the repository. Following are the steps in the pipeline:

  1. Commit. Once a developer commits the code, the Jenkins server polls the SCM tool (Git/SVN) to get the latest code.
  2. The new code triggers a new Jenkins job that runs unit tests. The automated unit tests are run using build tools such as Maven or Gradle. The results of these unit tests are monitored. If the unit tests fail, an email is sent to the developer who broke the code and the job exits.
  3. If the tests pass, the next step is to compile and package the code. Jenkins is used as the integration tool that packages the application into a pip-installable package (Python) or a WAR file (Java).
  4. Ansible is then used to create the infrastructure and deploy any required configuration packages. JFrog Artifactory is used to store the build packages, which serve as a single binary package for deployments across the different environments.
  5. Using CodeDeploy, the new release is deployed on the servers (as sketched below). The servers might require deregistration from the ELB while code deployment is taking place. The cloud infrastructure or configuration management tools help automate this, including installing packages and starting services such as the web/app server, the DB server, etc.
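
To make step 5 concrete, here is a minimal boto3 sketch that kicks off an AWS CodeDeploy deployment of a revision stored in S3; the application name, deployment group, bucket, and key are placeholders:

# Sketch: trigger a CodeDeploy deployment of a build bundle stored in S3.
# Application name, deployment group, bucket, and key are placeholders.
import boto3

def deploy_release(version: str) -> str:
    codedeploy = boto3.client("codedeploy")
    response = codedeploy.create_deployment(
        applicationName="myapp",
        deploymentGroupName="production",
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "my-build-artifacts",
                "key": f"myapp/{version}/bundle.zip",
                "bundleType": "zip",
            },
        },
        description=f"Release {version}",
    )
    return response["deploymentId"]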

Conclusion: Automate Everything

New innovations running on the cloud don’t require hefty amounts of resources, and every small startup can evolve into the next disruptive force in its market. This leads to a highly competitive landscape in almost every industry and directly creates the need for speed. Time is of the essence, and rapid software delivery is the key.

Whether pushing a commit to deploy a new version or reverting after a failure, automated processes save time, maintain quality, keep consistency, and allow R&D teams to speed up their delivery and keep it predictable.


Function-as-a-Service and Serverless Architectures

It has never been a better time to be a developer! Thanks to cloud computing, deploying our applications is much easier than it used to be. How we deploy our apps continues to evolve thanks to cloud hosting, Platform-as-a-Service (PaaS), and now Function-as-a-Service.

What is Function-as-a-Service (FaaS)?

FaaS is the concept of serverless computing via serverless architectures. Software developers can leverage it to deploy an individual “function”, action, or piece of business logic. Functions are expected to start within milliseconds, process an individual request, and then the process ends.

Principles of FaaS:

  • Complete abstraction of servers away from the developer
  • Billing based on consumption and executions, not server instance sizes
  • Services that are event-driven and instantaneously scalable

Timeline of moving to FaaS

At the most basic level, you could describe functions as a way to run some code when a “thing” happens. Here is a simple example from Azure Functions that shows how easy it is to process an HTTP request as a “Function”.

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // Get request body
    dynamic data = await req.Content.ReadAsAsync<object>();

    return req.CreateResponse(HttpStatusCode.OK, "Hello " + data.name);
}
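
Once deployed, sending an HTTP POST with a JSON body such as {"name": "Ada"} to this function would return the response “Hello Ada”.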

Benefits & Use Cases

Like most things, not every app is a good fit for FaaS.

We have been looking to use them at Stackify primarily for our very high volume transactions. We have some transactions that happen hundreds of times per second. We see a lot of value in isolating that logic to a function that we can scale.

  • Super high volume transactions – Isolate them and scale them
  • Dynamic or burstable workloads – If you only run something once a day or month, no need to pay for a server 24/7/365
  • Scheduled tasks – They are a perfect way to run a certain piece of code on a schedule

Function-as-a-Service Features

Types of Functions

There are a lot of potential uses for functions. Below is a simple list of some common scenarios. Support and implementation for them varies by provider.

  • Scheduled tasks or jobs
  • Process a web request
  • Process queue messages
  • Run manually

These functions could also be chained together. For example, a web request could write to a queue, which is then picked up by a different function.
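
As a sketch of that chaining pattern on AWS Lambda, the first handler below takes a web request and writes it to an SQS queue, and the second is triggered by that queue; the queue URL and message fields are placeholders:

# Sketch: two chained Lambda functions. web_handler is HTTP-triggered and
# enqueues work; queue_handler is triggered by the SQS queue it writes to.
# The queue URL and message fields are placeholders.
import json
import boto3

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"

def web_handler(event, context):
    # API Gateway proxy event: pass the request body straight to the queue.
    boto3.client("sqs").send_message(QueueUrl=QUEUE_URL, MessageBody=event["body"])
    return {"statusCode": 202, "body": "queued"}

def queue_handler(event, context):
    # SQS event: each record's body is one queued message.
    for record in event["Records"]:
        order = json.loads(record["body"])
        print("processing order", order.get("id"))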

FaaS Providers

AWS, Microsoft Azure, and Google Cloud all provide a solution. A lot of innovation is still going on in this area and things are rapidly improving and changing. Read our article on how AWS, Azure, and Google Cloud compare to determine which cloud best meets your needs.

  • AWS Lambda
  • Azure Functions
  • Google Cloud Functions
  • Iron.io
  • Webtask.io

Monitoring Challenges

One of the big challenges is monitoring your function apps. You still need to understand how often they are invoked, how long they take, and, potentially, why they are slow.

Since you don’t necessarily have a server or control the resources they are running on, you can’t install any monitoring software.

How we monitor these new types of apps is going to continue to evolve.

Comparing FaaS vs PaaS

Platform-as-a-Service greatly simplifies deploying applications. It allows us to deploy our app and the “cloud” worries about how to deploy the servers to run it. Most PaaS hosting options can even auto-scale the number of servers to handle workloads and save you money during times of low usage.

PaaS offerings like Azure App Services, AWS Elastic Beanstalk, and others, make it easy to deploy an entire application. They handle provisioning servers and deploying your application to the servers.

Function-as-a-Service (FaaS) provides the ability to deploy what is essentially a single function, or part of an application. FaaS is designed to be a potentially serverless architecture, although some providers, like Azure, also allow you to dedicate resources to a Function App.

When deployed as PaaS, an application is typically running on at least one server at all times. With FaaS, it may not be running at all until the function needs to be executed. It starts the function within a few milliseconds and then shuts it down.

Both provide the ability to easily deploy an application and scale it, without having to provision or configure servers.

 
