PHP Nginx Redirection – Point a Few URIs to the Old Domain and Redirect the Rest to the New Domain

Lately, we had our entire website revamped, and as part of the cleanup we set up a 301 redirect from our old URLs to the new ones, which live on a new domain. But here came a challenge: a few URIs we had provided to a third party still pointed to the old domain. So we had to write an exception rule in our nginx configuration file. After a lot of trial and error, we figured out a working solution. I would like to share a generic version of the configuration file.

We wrote exception rules for the two old URIs and successfully redirected the rest of the requests to the new domain.

server {
    listen 80;
    server_name old.domain.com;
    root /var/www/old.domain.com_root;
    index index.php index.html index.htm;
    #try_files $uri $uri/ /index.php?$query_string;

    location ~ \.php$ {
        expires off;  ## Do not cache dynamic content
        add_header Cache-Control "max-age=120, must-revalidate";
        gzip on;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # Exception: keep serving these URIs from the old domain
    location /olduri-1 {
        #rewrite ^/.* https://old.domain.com$request_uri permanent;
        try_files $uri $uri/ /index.php?$query_string;
    }

    location /olduri-2 {
        #rewrite ^/.* https://old.domain.com$request_uri permanent;
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Everything else goes to the new domain
    location / {
        return 301 https://new.domain.com$request_uri;
    }
}


Concepts – The 4 Stages of the CI/CD Pipeline

Consider a use case in which an organization is building a product which has multiple remote teams working on different micro-services. Each team has their own service roadmap and delivery plan. For the integration to lead to a robust delivery, a CI/CD pipeline is a must. With a CI/CD pipeline, whenever the development team makes a change in the code, the automation in the pipeline leads to automated compilation, building, and deployment on various environments.

The CI/CD pipeline can be broken down into four stages of an application life cycle. Each stage will play a key role in CI/CD. The available tools will help achieve consistency, speed, and quality of delivery. As shown below, the four stages are Source, Build, Testing, and Deployment.

1. Source

An organization stores application codes in a centralized repository system to support versioning, tracking changes, collaborating, and auditing, as well as to ensure security and maintain control of the source code. A key element in this stage is the automation of the version control. We can consider it the baseline unit of the CI phase. Automation involves monitoring the version control system for any changes and triggering events, such as code compilation and testing.

Let’s say you are working on a Git repository along with other team members. Whenever a change is committed to the repository, a Git webhook triggers the Jenkins job that compiles the code and runs the unit tests. If compilation or a test case fails, an email is automatically sent to the entire developer group.

2. Build

This is the key stage of application development, and when completely automated, it allows the dev team to test and build their release multiple times a day. This stage includes compilation, packaging, and running the automated tests. Build automation utilities such as Make, Rake, Ant, Maven, and Gradle are used to generate build artifacts.

The build artifacts can then be stored in an artifact repository and deployed to the environments. Artifact repository solutions such as JFrog Artifactory are used to store and manage the build artifacts. The main advantage of using an artifact repository is that it makes it possible to revert to a previous version of the build if that’s ever necessary. Highly available cloud storage services such as AWS S3 can also be used to store and manage build artifacts. If you are running your build services on AWS, you should also consider AWS CodeBuild. Jenkins, one of the most popular open-source tools, can be used to coordinate the build process.

3. Testing

Automated tests play an important role in any application development-deployment cycle. The automated tests required can be broken down into three separate categories:

  • Unit test: Developers subdivide an application into small units of code and perform testing on each. This test should be part of the build process and can be automated with tools like JUnit.
  • Integration test: In a world of micro-services and distributed applications, it is important that separate components work together when different modules of an application are integrated. This stage may involve testing of APIs, integration with a database, or other services. This test is generally part of the deployment and release process.
  • Functional test: This is end-to-end testing of the application or product, generally performed on staging servers as part of the release process. It can be automated with tools like Selenium to run efficiently across different web browsers.

To streamline testing, framework tools such as JMeter and Selenium can easily be integrated with Jenkins to automate functional testing as part of end-to-end testing.

4. Deployment

Once the build is done and the automated tests for the release candidate have completed, the last stage of the pipeline is automated deployment of the new code to the next environment in the pipeline. In this stage, the tested code is deployed to a staging or production environment. If the new release passes all the tests at each stage, it can be moved to the production environment. There are various strategies for deploying to a production environment, such as blue-green deployment, canary deployment, and in-place deployment:

  • In a blue-green deployment, there are multiple production environments running in parallel. The new “green” version of the application or service is provisioned in parallel with the last stable “blue” version running on separate infrastructure. Once the green version is tested and ready, the blue version can be de-provisioned.
  • In a canary deployment, the new version is deployed to a few nodes first and, after it is tested on those nodes, it is rolled out to all the nodes.
  • In-place deployment deploys the code directly to all the live nodes and might incur downtime; however, with rolling updates, you can reduce or remove downtime.

In addition, with the deployment, there are three key automation elements that need to be considered:

  • Cloud infrastructure provisioning: Infrastructure management tools like AWS CloudFormation, AWS OpsWorks Stacks, and Terraform help in creating templates and cloning the entire underlying application stack, including compute, storage, and network, from one environment to another in a matter of a few clicks or API calls.
  • Configuration management: Automation using tools such as Chef, Puppet, and AWS OpsWorks can ensure configuration files are in the desired state, including OS-level configuration parameters (file permissions, environment variables, etc.). Over the years these tools have also evolved to automate the whole flow, including code flows and deployment, as well as resource orchestration at the infrastructure level.
  • Containerization and orchestration: Gone are the days when people had to wait for server bootstrapping to deploy new versions of code. Containers such as Docker are used to package and scale specific services and include everything that is required to run them. The service packaging can support the isolation required between staging and production and, together with orchestration tools such as Kubernetes, can help automate deployment while reducing risks when moving code across the different environments.

Now that we have covered all stages of the pipeline, let’s looks at a real-life practical use case.

A Pipeline Use Case: 5 Steps

The diagram below represents a build pipeline for deploying a new version of code. In this use case, the processes of CI and CD are implemented using tools such as Jenkins, SCM tools (Git/SVN, for example), JFrog Artifactory, and AWS CodeDeploy.

The pipeline is triggered as soon as someone commits to the repository. Following are the steps in the pipeline:

  1. Commit. Once a developer commits the code, the Jenkins server polls the SCM tool (Git/SVN) to get the latest code.
  2. The new code triggers a new Jenkins job that runs the unit tests, using build tools such as Maven or Gradle, and the results are monitored. If the unit tests fail, an email is sent to the developer who broke the code and the job exits.
  3. If the tests pass, the next step is to compile the code. Jenkins is used as an integration tool that packages the application into a pip (Python) / WAR (Java) package.
  4. Ansible is then used to create the infrastructure and deploy any required configuration packages. JFrog Artifactory is used to store the build packages, which are used as a single binary package for deployments on the other environments.
  5. Using CodeDeploy, the new release is deployed to the servers. The servers might require deregistration from the ELB while code deployment is taking place. The cloud infrastructure or configuration management tools help automate this, including installing packages and starting services such as the web/app server, DB server, etc.

Conclusion: Automate Everything

New innovations running on the cloud don’t require hefty amounts of resources and every small startup can evolve to be the next disrupting force in its market. This leads to a highly competitive landscape in almost every industry and directly creates the need for speed. Time is of the essence and rapid software delivery is the key.

Whether pushing a commit to deploy a new version or reverting after a failure, automated processes save time, maintain quality, keep consistency, and allow R&D teams to speed up and predict their delivery.

k8s – Deployment Strategy

First, let us address the underlying question: what can Kubernetes offer compared to a basic development workflow? With Kubernetes, a developer can write some code, ship it, and get it working. It is also essential that the development environment be as similar as possible to production, because having two different environments will introduce bugs. In this blog, “Deploy code faster with Kubernetes”, we will walk you through a Kubernetes quick-start workflow built around:

  • Kubernetes
  • Docker
  • Envoy/Ambassador.

What is Kubernetes?

Kubernetes is an open-source container management tool: an orchestration tool that combines container deployment, scaling and descaling of containers, and load balancing. Note: Kubernetes is not a containerization platform; it is a multi-container management solution.

Why Use Kubernetes?

Businesses today may be using Docker, Rocket, or Linux containers to containerize their applications on a massive scale, running tens or hundreds of containers for load balancing traffic and ensuring high availability. As container management and orchestration tools, both Docker Swarm and Kubernetes are very popular. Although Docker Swarm is popular because it runs on top of Docker, when we have to choose between the two, Kubernetes is the undisputed market leader. This is partly because it is Google’s brainchild and partly because of its better functionality. Furthermore, one feature Kubernetes has that Docker Swarm is missing is auto-scaling of containers.

Kubernetes Features

Now that you know about Kubernetes, let’s look at its features:

1. Automatic Bin packing

Kubernetes automatically packages your application and schedules containers based on their requirements and the available resources, without sacrificing availability. To ensure full utilization and avoid wasting resources, Kubernetes balances critical and best-effort workloads.

2. Service Discovery & Load balancing

With Kubernetes, there is no need to worry about networking and communication: Kubernetes automatically assigns IP addresses to containers and a single DNS name to a set of containers, and can load-balance traffic across them inside the cluster.
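
To make this concrete, below is a minimal Service sketch; the name web, the label selector, and the ports are assumptions for illustration only.

# A Service giving a set of Pods (labelled app: web, assumed to exist)
# one stable cluster IP and DNS name, load-balancing traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80          # port exposed by the Service
    targetPort: 8080  # port the containers listen on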

3. Storage Orchestration

Kubernetes allows you to mount the storage system that you prefer. You can either go for local storage or go for public cloud storage. For example: GCP / AWS or a shared network storage system like NFS, iSCSI, etc.
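
For example, a minimal sketch of a PersistentVolumeClaim mounted into a Pod could look like the following; the claim name, size, and mount path are illustrative, and the cluster’s default StorageClass is assumed.

# A claim for 5Gi of storage, mounted into a Pod at /data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-with-storage
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim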

4. Self-Healing

Kubernetes automatically restarts containers when they fail and kills containers that do not respond to health checks. When a node dies, it is replaced and the failed containers are rescheduled onto other available nodes.

5. Secret & Configuration Management

With Kubernetes you can deploy and update secrets. You can also configure applications without exposing secrets in your stack configuration and without rebuilding your image.
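
A minimal sketch of that idea, with an assumed secret name, key, and value: the credential lives in the Secret object and is injected as an environment variable, so it never appears in the Pod spec or the image.

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: s3cr3t             # example value only
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD        # exposed to the application as an env var
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password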

6. Batch Execution

Kubernetes manages batch and CI workloads alongside long-running services, and can replace containers that fail, if desired.
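
As a rough sketch, this kind of run-to-completion workload is expressed as a Kubernetes Job; the name, image, and command below are illustrative.

apiVersion: batch/v1
kind: Job
metadata:
  name: report-job
spec:
  backoffLimit: 3              # replace failed Pods up to 3 times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: report
        image: busybox
        command: ["sh", "-c", "echo generating report && sleep 5"]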

7. Horizontal Scaling

Kubernetes allows you to scale containers up or down with a single CLI command; you can also do the scaling from the Kubernetes Dashboard.
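
As an illustration (assuming a Deployment named web already exists), scaling can be done imperatively from the CLI or declaratively with a HorizontalPodAutoscaler; a minimal sketch:

# Imperative scaling with one command:
#   kubectl scale deployment web --replicas=5
#
# Declarative autoscaling based on CPU utilization:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70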

8. Automatic Rollbacks & Roll-outs

Kubernetes progressively rolls out changes and updates to your application or its configuration, ensuring that not all instances are updated at the same time. If something goes wrong, Kubernetes will roll back the change for you.

These were some of the notable features of Kubernetes. Though Kubernetes can be complex to manage and configure, it seamlessly automates container lifecycle management. In the next few sections, we will look at ten Kubernetes best practices and how easily you can apply them to your cluster.

Kubernetes Best Practices

1. Disallow root user

All processes in a container run as the root user (uid 0), by default. To prevent the potential compromise of container hosts, it is important to specify a non-root and least-privileged user ID when building the container image and make sure that all application containers run as a non-root user.
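
A minimal securityContext sketch that enforces this might look like the following; the image and user ID are assumptions, and the same block also covers practices 2, 3, and 8 below (no privileged mode, no added capabilities, read-only root filesystem).

apiVersion: v1
kind: Pod
metadata:
  name: secure-app
spec:
  securityContext:
    runAsNonRoot: true         # refuse to start if the image would run as uid 0
    runAsUser: 1000
  containers:
  - name: app
    image: myorg/app:1.0       # hypothetical image
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]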

2. Disallow privileged containers

Privileged containers have unrestricted access to the host, and a process that is allowed to escalate privileges can gain privileges beyond those of its parent. Application containers should not be allowed to run in privileged mode, and privilege escalation should not be allowed.

3. Disallow adding new capabilities

Linux capabilities provide fine-grained permissions on top of the usual user model. Although capabilities can be added to grant kernel-level access for particular behaviors, it is advisable not to; the cluster should ensure that application pods do not add any new capabilities at runtime.

4. Disallow changes to kernel parameters

The sysctl interface allows kernel parameters to be modified at runtime, and in a Kubernetes pod these parameters can be specified as part of the pod configuration. Kernel parameter modifications can be used for exploits, so setting them should be restricted.

5. Disallow use of bind mounts (hostPath volumes)

Kubernetes pods can use host bind mounts (i.e. directories and volumes mounted on the host) in containers. Using host resources can enable access to shared data or allow privilege escalation, and it also couples application pods to a specific host, so the use of bind mounts should not be allowed for application pods.

6. Disallow access to the Docker socket bind mount

The Docker socket bind mount gives a container access to the Docker daemon on the node, which can be used to escalate privileges and to manage containers outside of the pod. For this reason, the Docker socket should not be mounted for application workloads.

7. Disallow host networking and host ports

Attaching a pod to the host’s network interface makes it share the host networking stack, which allows potential snooping of network traffic across pods and the host. Host networking and host ports should therefore not be allowed for application pods.

8. Keep the root filesystem read-only

A container should only need to write to mounted volumes, which can persist state even if the container exits. Keeping the root filesystem read-only is a good way to enforce an immutable infrastructure strategy, and it also prevents malicious binaries from being written into the running container.

9. Require pod resource requests and limits

Application workloads share cluster resources, so it is important to manage the resources assigned to each pod. Configuring requests and limits for CPU and memory on every pod is good practice.
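
A minimal sketch of per-container requests and limits, with purely illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:                # what the scheduler reserves for the Pod
        cpu: 250m
        memory: 256Mi
      limits:                  # hard ceiling enforced at runtime
        cpu: 500m
        memory: 512Mi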

10. Require liveness probe and readiness probe

Liveness and readiness probes help manage a pod’s life cycle during deployments, restarts, and upgrades. If these checks are not properly configured, pods may be terminated while initializing or may start receiving user requests before they are ready. 

Conclusion

One thing developers love about open-source technologies like Kubernetes is the potential for fast-paced innovation. It wasn’t until a few years ago that developers and IT operations folks had to readjust their practices to adapt to containers, and now they have to adopt container orchestration as well. Enterprises hoping to adopt Kubernetes need to hire professionals who can code, and who also know how to manage operations and understand application architecture, storage, and data workflows.

Deployment Strategies – Insights

There are a variety of techniques to deploy new applications to production, so choosing the right strategy is an important decision that should be weighed against the impact the change will have on consumers.

In this post, we are going to talk about the following strategies:

  • recreate: version A is terminated, then version B is rolled out
  • ramped (also known as rolling-update or incremental): version B is slowly rolled out, replacing version A
  • blue/green: version B is released alongside version A, then the traffic is switched to version B
  • canary: version B is released to a subset of users, then proceeds to a full rollout
  • a/b testing: version B is released to a subset of users under specific conditions
  • shadow: version B receives real-world traffic alongside version A and doesn’t impact the response

Let’s take a look at each strategy and see which strategy would fit best for a particular use case.

Recreate

The re-create strategy is a dummy deployment which consists of shutting down version A then deploying version B after version A is turned off. This technique implies downtime of the service that depends on both shutdown and boot duration of the application.
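
A minimal Kubernetes sketch of this strategy (names and image tags are illustrative) simply sets the Deployment strategy to Recreate, so all old Pods are terminated before the new ones are created:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: Recreate             # stop version A entirely, then start version B
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v2       # bumping this tag triggers the recreate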

Pro:

  • easy to setup
  • application state entirely renewed

Cons:

  • high impact on the user, expect downtime that depends on both shutdown and boot duration of the application

Ramped (also known as rolling-update or incremental)

The ramped deployment strategy consists of slowly rolling out a version of an application by replacing instances one after the other until all the instances are rolled out. It usually follows the following process: with a pool of version A behind a load balancer, one instance of version B is deployed. When the service is ready to accept traffic, the instance is added to the pool. Then, one instance of version A is removed from the pool and shutdown.

Depending on the system taking care of the ramped deployment, you can tweak the following parameters to increase the deployment time:

  • parallelism, max batch size: number of concurrent instances to rollout
  • max surge: how many instances to add in addition to the current amount
  • max unavailable: number of unavailable instances during the rolling update procedure
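
A minimal Kubernetes sketch of a ramped rollout using the maxSurge / maxUnavailable knobs described above (names and values are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1              # at most one extra instance during the rollout
      maxUnavailable: 0        # never drop below the desired replica count
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v2       # bumping this tag rolls instances one by one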

Pro:

  • easy to setup
  • version is slowly released across instances
  • convenient for stateful applications that can handle re-balancing of the data

Cons:

  • rollout/rollback can take time
  • supporting multiple APIs is hard
  • no control over traffic

Blue/Green

The blue/green deployment strategy differs from a ramped deployment in that version B (green) is deployed alongside version A (blue) with exactly the same number of instances. After testing that the new version meets all the requirements, the traffic is switched from version A to version B at the load balancer level.
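
In Kubernetes terms, a common sketch is to run blue and green as two separate Deployments (not shown) labelled version: blue and version: green, and to switch traffic by repointing the Service selector; the labels and ports below are assumptions.

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: green             # flip from "blue" to "green" to cut traffic over
  ports:
  - port: 80
    targetPort: 8080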

Pro:

  • instant rollout/rollback
  • avoids versioning issues, the entire application state is changed in one go

Cons:

  • expensive as it requires double the resources
  • proper test of the entire platform should be done before releasing to production
  • handling stateful applications can be hard

Canary

A canary deployment consists of gradually shifting production traffic from version A to version B. Usually the traffic is split based on weight. For example, 90% of the requests go to version A, 10% go to version B.

This technique is mostly used when the tests are lacking or not reliable or if there is little confidence about the stability of the new release on the platform.
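
A rough Kubernetes sketch of a weight-by-replica canary: two Deployments behind one Service, sized roughly 90/10 (a service mesh or ingress controller would give finer-grained weighting); all names and images are illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-stable
spec:
  replicas: 9                  # ~90% of the Pods serve version A
  selector:
    matchLabels:
      app: my-app
      track: stable
  template:
    metadata:
      labels:
        app: my-app
        track: stable
    spec:
      containers:
      - name: my-app
        image: my-app:v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-canary
spec:
  replicas: 1                  # ~10% of the Pods serve version B
  selector:
    matchLabels:
      app: my-app
      track: canary
  template:
    metadata:
      labels:
        app: my-app
        track: canary
    spec:
      containers:
      - name: my-app
        image: my-app:v2
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app                # matches both stable and canary Pods
  ports:
  - port: 80
    targetPort: 8080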

Pro:

  • version released for a subset of users
  • convenient for error rate and performance monitoring
  • fast rollback

Cons:

  • slow rollout

A/B testing

A/B testing deployments consist of routing a subset of users to a new functionality under specific conditions. It is usually a technique for making business decisions based on statistics rather than a deployment strategy, but it is related and can be implemented by adding extra functionality to a canary deployment, so we will briefly discuss it here.
This technique is widely used to test the conversion of a given feature and to only roll out the version that converts the most.

Below is a list of conditions that can be used to distribute traffic among the versions:

  • cookie
  • query parameters
  • Geo-localisation
  • technology support: browser version, screen size, operating system, etc.
  • language
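
As one possible sketch, assuming the NGINX ingress controller is in use, header-based routing can be expressed with its canary annotations; the host, header name, and service names below are assumptions. Requests carrying the header X-Beta: always go to the version-B service, everything else stays on version A (cookie- and weight-based splits use the analogous canary-by-cookie and canary-weight annotations).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-b
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Beta"
spec:
  rules:
  - host: my-app.example.com   # same host as the primary ingress for version A
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-v2
            port:
              number: 80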

Pro:

  • several versions run in parallel
  • full control over the traffic distribution

Cons:

  • requires an intelligent load balancer
  • hard to troubleshoot errors for a given session, distributed tracing becomes mandatory

Shadow

A shadow deployment consists of releasing version B alongside version A, forking version A’s incoming requests and sending them to version B as well, without impacting production traffic. This is particularly useful for testing production load on a new feature. A rollout of the application is triggered when stability and performance meet the requirements.

This technique is fairly complex to set up and has special requirements, especially with egress traffic. For example, given a shopping cart platform, if you wanted to shadow-test the payment service you could end up with customers paying twice for their order. In this case, you can solve it by creating a mocking service that replicates the response from the provider.
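
As a sketch of one way to implement shadowing, a service mesh such as Istio can mirror live traffic to the new version; the example below assumes a DestinationRule already defines subsets v1 and v2 for a my-app service, and the mirrored responses are discarded.

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - my-app
  http:
  - route:
    - destination:
        host: my-app
        subset: v1             # all real traffic is served by version A
      weight: 100
    mirror:
      host: my-app
      subset: v2               # version B receives a copy of each request
    mirrorPercentage:
      value: 100.0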

Pro:

  • performance testing of the application with production traffic
  • no impact on the user
  • no rollout until the stability and performance of the application meet the requirements.

Cons:

  • expensive as it requires double the resources
  • not a true user test and can be misleading
  • complex to setup
  • requires mocking service for certain cases

To sum up

There are multiple ways to deploy a new version of an application, and the right one really depends on your needs and budget. When releasing to development/staging environments, a recreate or ramped deployment is usually a good choice. When it comes to production, a ramped or blue/green deployment is usually a good fit, but proper testing of the new platform is necessary.
Blue/green and shadow strategies have more impact on the budget, as they require double the resource capacity. If the application lacks tests or if there is little confidence about the impact/stability of the software, then a canary, a/b testing, or shadow release can be used. If your business requires testing a new feature among a specific pool of users that can be filtered by parameters like geolocation, language, operating system, or browser features, then you may want to use the a/b testing technique.

Last but not least, a shadow release is complex and requires extra work to mock egress traffic, which is mandatory when calling external dependencies with mutable actions (email, bank, etc.). However, this technique can be useful when migrating to a new database technology, using shadow traffic to monitor system performance under load.

Below is a diagram to help you choose the right strategy:

Kubernetes deployment strategies

k8s – Liveness and Readiness Probes


Liveness vs Readiness Probes

On each node, the kubelet uses liveness and readiness probes to decide whether a container needs to be restarted and whether it is ready to receive traffic. In a Kubernetes pod spec, the probes are declared per container, for example:

readinessProbe:
  httpGet:
    path: /health/ready
    port: 3000
livenessProbe:
  httpGet:
    path: /health/alive
    port: 3000
 

Elixir

Forwarding

defmodule PlugForward do
  use Plug.Router

  plug(:match)
  plug(:dispatch)

  # Forward the liveness path to a dedicated Liveness plug
  forward(
    "/health/live",
    to: Liveness
  )
end

Mounting

defmodule PlainPlug do
  use Plug.Router

  # Mount the Liveness plug in front of the regular routes
  plug(Liveness)

  # regular paths defined here
end

Configuration

config :healthchex,
  liveness_path: "/health/live",
  liveness_response: "OK"

defmodule Healthchex.Probes.Liveness do
  import Plug.Conn

  @default_path Application.get_env(:healthchex, :liveness_path, "/health/live")
  @default_resp Application.get_env(:healthchex, :liveness_response, "OK")

  # Read the probe path and response from options, falling back to config
  def init(opts) do
    %{
      path: Keyword.get(opts, :path, @default_path),
      resp: Keyword.get(opts, :resp, @default_resp)
    }
  end

  def call(conn, _opts), do: conn
end

Response

defmodule Healthchex.Probes.Liveness do
  # …

  # Respond to the configured liveness path and halt the plug pipeline
  def call(%Plug.Conn{request_path: path} = conn, %{path: path, resp: resp}) do
    conn
    |> send_resp(200, resp)
    |> halt()
  end

  def call(conn, _opts), do: conn
end


The complete implementation is available in the KamilLelonek/healthchex repository, a set of Plugs to be used for Kubernetes health checks.

 


Liveness and Readiness Probes – The Theory

On each node of a Kubernetes cluster there is a Kubelet running which manages the pods on that particular node. It’s responsible for pulling images down to the node, reporting the node’s health, and restarting failed containers. But how does the Kubelet know when a container has failed?

Well, it can use the notion of probes to check on the status of a container. Specifically a liveness probe.

Liveness probes indicate whether a container is running. Meaning, has the application within the container started running, and is it still running? Even if you haven’t configured liveness probes for your containers, you’ve probably still seen them in action. When a container gets restarted, it’s generally because a liveness check is failing. This can happen if your container couldn’t start up, or if the application within the container crashed; the Kubelet restarts the container in those circumstances. In some circumstances, though, the application within the container is not working but hasn’t crashed. In that case, the container won’t be restarted unless you provide additional information in the form of a liveness probe.

A readiness probe indicates if the application running inside the container is “ready” to serve requests. As an example, assume you have an application that starts but needs to check on other services like a backend database before finishing its configuration. Or an application that needs to download some data before it’s ready to handle requests. A readiness probe tells the Kubelet that the application can now perform its function and that the Kubelet can start sending it traffic.

There are three different ways these probes can be checked.

  • ExecAction: Execute a command within the container
  • TCPSocketAction: TCP check against the container’s IP/port
  • HTTPGetAction: An HTTP Get request against the container’s IP/Port
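
Below is a minimal sketch showing how these mechanisms appear in a Pod spec; the image, paths, ports, and command are illustrative only.

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: app
    image: nginx
    livenessProbe:
      exec:                    # ExecAction: run a command inside the container
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      httpGet:                 # HTTPGetAction: HTTP GET against the container's port
        path: /healthz
        port: 80
      periodSeconds: 10
    # A TCPSocketAction variant would use, for example:
    #   tcpSocket:
    #     port: 80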

Let’s look at the two probes in the context of a container starting up. The diagram below shows several states of the same container over time, with a view into the containers to see what’s going on with the application in relation to the probes.

On the left side, the pod has just been deployed. A liveness probe performed as a TCPSocketAction finds that the pod is “alive”, even though the application is still doing work (loading data, etc.) and isn’t ready yet. As time moves on, the application finishes its startup routine and is now “ready” to serve incoming traffic.

Let’s take a look at this from a different perspective. Assume we already have a deployment in our cluster, consisting of a single replica displayed on the right side, behind our service. It’s likely that we’ll need to scale the app, or replace it with another version. Now that we know our app isn’t ready to handle traffic right away after being started, we can wait to have our service add the new app to the list of endpoints until the application is “ready”. This is an important thing to consider if your apps aren’t ready as soon as the container starts up; otherwise a request could be sent to the container before it’s able to handle it.

Liveness and Readiness Probes – In Action

First, we’ll look at what happens with a readiness check. For this example, I’ve got a very simple Apache container that displays a pretty elaborate website. I’ve created a YAML manifest to deploy the container, service, and ingress rule.

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: theithollow/hollowapp-blog:liveness
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
    readinessProbe:
      httpGet:
        path: /health
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
---
apiVersion: v1
kind: Service
metadata:
  name: liveness
spec:
  selector:
    app: liveness
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: liveness-ingress
  namespace: default
spec:
  rules:
  - host: liveness.theithollowlab.com
    http:
      paths:
      - backend:
          serviceName: liveness
          servicePort: 80

This manifest includes two probes:

  1. A liveness check doing an HTTP request against /
  2. A readiness check doing an HTTP request against /health

livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3
readinessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3

My container uses a script to start the HTTP daemon right away and then waits 60 seconds before creating a /health page. This simulates some work being done by the application before it’s ready for consumption. This is the entire website for reference.

And here is my container script.

/usr/sbin/httpd > /dev/null 2>&1 &         # Start HTTP daemon in the background
sleep 60                                   # Wait 60 seconds
echo HealthStatus > /var/www/html/health   # Create health status page
sleep 3600

Deploy the manifest through kubectl apply. Once deployed, I ran kubectl get pods with --watch to keep an eye on the deployment. Here’s what it looked like.

You’ll notice that the ready status showed 0/1 for about 60 seconds. Meaning that my container was not in a ready status for 60 seconds until the /health page became available through the startup script.

As a silly example, what if we modified our liveness probe to look for /health? Perhaps we have an application that sometimes stops working but doesn’t crash. Will the application ever start up? Here’s my new probe in the YAML manifest.

livenessProbe:      
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3

After deploying this, let’s run another --watch on the pods. Here we see that the pod keeps restarting, and I am never able to access the /health page because the container restarts before it’s ready.

We can see that the liveness probe is failing if we run kubectl describe on the pod.
