k8s – Deployment Strategy

First, let us address the underlying question: what can Kubernetes offer compared to a basic development workflow? With Kubernetes, a developer can write some code, ship it, and have it running. It is also essential that the development environment be as similar as possible to production, because having two different environments will introduce bugs. In this blog, “Deploy code faster with Kubernetes”, we will walk you through a Kubernetes quick-start workflow built around:

  • Kubernetes
  • Docker
  • Envoy/Ambassador

What is Kubernetes?

Kubernetes is an open-source container management tool: an orchestration tool that combines container deployment, scaling and descaling of containers, and load balancing. Note: Kubernetes is not a containerization platform; it is a multi-container management solution.

Why Use Kubernetes?

Businesses today may be using Docker, Rocket (rkt), or Linux containers to containerize their applications at massive scale, running tens or hundreds of containers to load-balance traffic and ensure high availability. As container management and orchestration tools, both Docker Swarm and Kubernetes are very popular. Although Docker Swarm is popular because it runs on top of Docker, Kubernetes is the undisputed market leader when you have to choose between the two. This is partly because it is Google’s brainchild and partly because of its better functionality. Furthermore, one feature Kubernetes has that Docker Swarm is missing is the auto-scaling of containers.

Kubernetes Features

Now that you know about Kubernetes, let’s look at its features:

1. Automatic Bin packing

Kubernetes automatically packages your application and schedules containers based on their requirements and the available resources, without sacrificing availability. To maximize utilization and avoid wasting resources, Kubernetes balances critical and best-effort workloads.

2. Service Discovery & Load balancing

With Kubernetes, there is no reason to stress over networking and communication: Kubernetes automatically assigns IP addresses to containers and gives a set of containers a single DNS name, load-balancing traffic across them inside the cluster.
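As a minimal sketch (all names here are illustrative, not from this post), a Service gives a set of pods a stable DNS name and load-balances traffic across them:

apiVersion: v1
kind: Service
metadata:
  name: web                 # resolvable as web.<namespace>.svc.cluster.local
spec:
  selector:
    app: web                # traffic is spread across pods labeled app=web
  ports:
  - port: 80                # port the Service exposes
    targetPort: 8080        # port the containers listen on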

3. Storage Orchestration

Kubernetes allows you to mount the storage system of your choice: local storage, public cloud storage (for example, GCP or AWS), or a shared network storage system like NFS or iSCSI.
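For instance, a pod can mount an NFS share directly; this is a hedged sketch, and the server and export path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   # where the share appears in the container
  volumes:
  - name: data
    nfs:
      server: nfs.example.com            # placeholder NFS server
      path: /exports/web                 # placeholder export path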

4. Self-Healing

Kubernetes automatically restarts containers when they fail and kills containers that do not respond to health checks. When a node dies, it is replaced and the failed containers are rescheduled onto other available nodes.

5. Secret & Configuration Management

With Kubernetes you can deploy and update secrets. You can also configure applications without exposing secrets in your stack configuration and without rebuilding your image.
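A hedged sketch (names and values are illustrative): a Secret is created once and injected as an environment variable, so it never appears in the image or the stack configuration:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: s3cr3t                # illustrative value only
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD           # injected at runtime, not baked into the image
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password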

6. Batch Execution

Kubernetes manages batch and CI workloads alongside long-running services and, when desired, replaces containers that fail.
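A hedged sketch of a batch workload as a Job (image and command are illustrative); failed containers are replaced and retried up to the backoff limit:

apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task
spec:
  backoffLimit: 4                 # retry failed containers up to 4 times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox
        command: ["sh", "-c", "echo processing batch && exit 0"]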

7. Horizontal Scaling

Kubernetes allows you to scale containers up and down with a single CLI command. You can also scale from the Kubernetes Dashboard.
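For example (the deployment name is illustrative), scaling from the CLI is one command:

kubectl scale deployment web --replicas=5   # scale up; use a lower number to scale down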

8. Automatic Rollbacks & Roll-outs

Kubernetes progressively rolls out changes and updates to your application or its configuration, ensuring that not all instances are updated at the same time. If something goes wrong, Kubernetes rolls the change back for you.
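As a hedged example (the deployment and image names are illustrative), roll-outs and rollbacks can be driven from kubectl:

kubectl set image deployment/web web=web:2.0   # trigger a progressive roll-out
kubectl rollout status deployment/web          # watch instances update a few at a time
kubectl rollout undo deployment/web            # roll back if something goes wrong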

These were some of the notable features of Kubernetes. Now let us move on to Kubernetes best practices that you can easily apply to your clusters. Though Kubernetes can be complex to manage and configure, it seamlessly automates container lifecycle management. The next sections walk through ten best practices and how to apply them to your cluster.

Kubernetes Best Practices

1. Disallow root user

All processes in a container run as the root user (uid 0), by default. To prevent the potential compromise of container hosts, it is important to specify a non-root and least-privileged user ID when building the container image and make sure that all application containers run as a non-root user.
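A hedged sketch of a pod-level securityContext enforcing this (the UID is illustrative):

spec:
  securityContext:
    runAsNonRoot: true    # refuse to start any container that would run as uid 0
    runAsUser: 10001      # illustrative non-root, least-privileged user ID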

2. Disallow privileged containers

Privileged containers have unrestricted access to the host. If settings are not defined properly, a process can also gain privileges from its parent. Application containers should not be allowed to execute in privileged mode, and privilege escalation should not be allowed.
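A hedged sketch at the container level:

spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: false                 # no unrestricted host access
      allowPrivilegeEscalation: false   # child processes cannot gain extra privileges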

3. Disallow adding new capabilities

Linux capabilities provide fine-grained permissions beyond the all-or-nothing root model. Although capabilities can be added to grant kernel-level access for particular behaviors, it is advisable not to; instead, make sure application pods cannot add new capabilities at runtime.
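A hedged sketch that drops everything by default:

spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      capabilities:
        drop: ["ALL"]   # start from zero; add back only what the app truly needs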

4. Disallow changes to kernel parameters

The sysctl interface allows kernel parameters to be modified at runtime, and in a Kubernetes pod these parameters can be specified as part of the pod’s configuration. Kernel parameter modifications can be used for exploits, so adding new parameters should be restricted.
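For reference, this is where sysctls are declared in a pod spec (a hedged sketch; a restrictive policy should reject unsafe parameters declared here):

spec:
  securityContext:
    sysctls:
    - name: net.ipv4.ip_local_port_range   # one of the few "safe" sysctls
      value: "1024 65535"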

5. Disallow use of bind mounts (hostPath volumes)

Kubernetes pods can use host bind mounts (i.e. volumes and directories mounted from the host) in containers. Using host resources can enable access to shared data or allow privilege escalation, and host volumes also couple application pods to a specific host, so the use of bind mounts should not be allowed for application pods.
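A hedged sketch of the pattern to disallow (the path is illustrative; the Docker socket discussed next is mounted the same way):

  volumes:
  - name: host-data
    hostPath:
      path: /var/lib/app-data   # ties the pod to this host and exposes host files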

6. Disallow access to the Docker socket bind mount

A bind mount of the Docker socket gives a container access to the Docker daemon on the node, which can be used to escalate privileges and manage containers outside the pod. For application workloads, mounting the Docker socket should therefore not be allowed.

7. Disallow use of host networks and ports

Using the container host’s network interface lets pods share the host’s networking stack, which allows potential snooping of network traffic across pods. Host networking and host ports should therefore not be allowed for application pods.
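A hedged sketch of a compliant pod spec:

spec:
  hostNetwork: false           # stay on the pod network, not the host's stack
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 8080      # no hostPort mapping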

8. Keep the root filesystem read-only

A container should only need to write to mounted volumes, which can persist state even if the container exits. Keeping the root filesystem read-only helps enforce an immutable-infrastructure strategy, and an immutable root filesystem also prevents malicious binaries from writing to the system.
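A hedged sketch with a writable scratch volume for anything the app must write:

spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      readOnlyRootFilesystem: true   # writes succeed only on mounted volumes
    volumeMounts:
    - name: tmp
      mountPath: /tmp                # writable scratch space, if needed
  volumes:
  - name: tmp
    emptyDir: {}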

9. Require pod resource requests and limits

Application workloads share cluster resources, so it is important to manage the resources assigned to each pod. Configuring requests and limits per pod, covering both CPU and memory, is good practice.
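A hedged sketch (the numbers are illustrative and workload-dependent):

spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 100m          # what the scheduler reserves for the pod
        memory: 128Mi
      limits:
        cpu: 500m          # ceiling before CPU throttling
        memory: 256Mi      # ceiling before an out-of-memory kill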

10. Require liveness probe and readiness probe

Liveness and readiness probes help manage a pod’s life cycle during deployments, restarts, and upgrades. If these checks are not properly configured, pods may be terminated while initializing or may start receiving user requests before they are ready. 

Conclusion

One thing developers love about open-source technologies like Kubernetes is the potential for fast-paced innovation. It wasn’t until a few years ago that developers and IT operations folks had to readjust their practices to adapt to containers, and now they have to adopt container orchestration as well. Enterprises hoping to adopt Kubernetes need to hire professionals who can code, as well as manage operations and understand application architecture, storage, and data workflows.


k8s – Liveness and Readiness Probes


Liveness vs Readiness Probes

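In a container spec, the two probes sit side by side, each with its own endpoint: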
readinessProbe:
  httpGet:
    path: /health/ready
    port: 3000
livenessProbe:
  httpGet:
    path: /health/alive
    port: 3000
 

Elixir
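The snippets below, taken from the KamilLelonek/healthchex examples linked later in this post, show how such endpoints can be served from an Elixir application using Plug.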

Forwarding
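One option is to forward a dedicated path to the probe plug: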

defmodule PlugForward do
  use Plug.Router

  plug(:match)
  plug(:dispatch)

  forward(
    "/health/live",
    to: Liveness
  )
end

Mounting
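Alternatively, the probe plug can be mounted directly in the router pipeline, where it sees every request: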

defmodule PlainPlug do
  use Plug.Router

  plug(Liveness)

  # regular paths defined here
end

Configuration
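The probe’s path and response can be given application-wide defaults and overridden per plug instance: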

config :healthchex,
  liveness_path: "/health/live",
  liveness_response: "OK"

defmodule Healthchex.Probes.Liveness do
  import Plug.Conn

  @default_path Application.get_env(:healthchex, :liveness_path, "/health/live")
  @default_resp Application.get_env(:healthchex, :liveness_response, "OK")

  def init(opts) do
    %{
      path: Keyword.get(opts, :path, @default_path),
      resp: Keyword.get(opts, :resp, @default_resp)
    }
  end

  def call(conn, _opts), do: conn
end

Response
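Finally, when the request path matches the configured probe path, the plug responds and halts the pipeline; any other request passes through untouched: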

defmodule Healthchex.Probes.Liveness do
  # …

  def call(%Plug.Conn{request_path: path} = conn, %{path: path, resp: resp}) do
    conn
    |> send_resp(200, resp)
    |> halt()
  end

  def call(conn, _opts), do: conn
end


The full source is available in the KamilLelonek/healthchex repository, a set of Plugs to be used for Kubernetes health-checks.


Liveness and Readiness Probes – The Theory

On each node of a Kubernetes cluster there is a Kubelet running which manages the pods on that particular node. It’s responsible for getting images pulled down to the node, reporting the node’s health, and restarting failed containers. But how does the Kubelet know if there is a failed container?

Well, it can use the notion of probes to check on the status of a container; specifically, a liveness probe.

Liveness probes indicate whether a container is running; meaning, has the application within the container started running, and is it still running? If you’ve configured liveness probes for your containers, you’ve probably seen them in action: when a container gets restarted, it’s generally because a liveness probe is failing. This can happen if your container couldn’t start up, or if the application within the container crashed, and the Kubelet restarts the container in those circumstances. In some circumstances, though, the application within the container is not working but hasn’t crashed. In that case, the container won’t restart unless you provide additional checks through a liveness probe.

A readiness probe indicates if the application running inside the container is “ready” to serve requests. As an example, assume you have an application that starts but needs to check on other services like a backend database before finishing its configuration. Or an application that needs to download some data before it’s ready to handle requests. A readiness probe tells the Kubelet that the application can now perform its function and that the Kubelet can start sending it traffic.

There are three different ways these probes can be checked (a sketch of the exec and TCP variants follows this list):

  • ExecAction: Execute a command within the container
  • TCPSocketAction: TCP check against the container’s IP/port
  • HTTPGetAction: An HTTP GET request against the container’s IP/port
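This post’s examples use HTTPGetAction; here is a hedged sketch of the other two variants (the command and port are illustrative):

livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]   # ExecAction: healthy while the command exits 0
readinessProbe:
  tcpSocket:
    port: 5432                         # TCPSocketAction: ready once the port accepts connections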

Let’s look at the two probes in the context of a container starting up. Picture several states of the same container over time, with a view into the containers to see what’s going on with the application in relation to the probes.

At first, the pod has just been deployed. A liveness probe, performed as a TCPSocketAction, found that the pod is “alive” even though the application is still doing work (loading data, etc.) and isn’t ready yet. As time moves on, the application finishes its startup routine and is now “ready” to serve incoming traffic.

Let’s take a look at this from a different perspective. Assume we have a deployment already in our cluster, consisting of a single replica that sits behind our service. It’s likely that we’ll need to scale the app, or replace it with another version. Since we know our app isn’t ready to handle traffic right away after being started, we can have our service wait to add the new app to its list of endpoints until the application is “ready”. This is an important thing to consider if your apps aren’t ready as soon as the container starts up; otherwise, a request could be sent to the container before it’s able to handle it.

Liveness and Readiness Probes – In Action

First, we’ll look at what happens with a readiness check. For this example, I’ve got a very simple Apache container that displays a pretty elaborate website. I’ve created a YAML manifest to deploy the container, service, and ingress rule.

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: theithollow/hollowapp-blog:liveness
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
    readinessProbe:
      httpGet:
        path: /health
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
---
apiVersion: v1
kind: Service
metadata:
  name: liveness
spec:
  selector:
    app: liveness
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: liveness-ingress
  namespace: default
spec:
  rules:
  - host: liveness.theithollowlab.com
    http:
      paths:
      - backend:
          serviceName: liveness
          servicePort: 80

This manifest includes two probes:

  1. Liveness check doing an HTTP request against “/”
  2. Readiness check doing an HTTP request against “/health”

livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3
readinessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3

My container uses a script to start the HTTP daemon right away and then waits 60 seconds before creating a /health page. This simulates some work being done by the application, during which the app isn’t ready for consumption.

And here is my container script.

/usr/sbin/httpd > /dev/null 2>&1 &        # start the HTTP daemon
sleep 60                                  # wait 60 seconds
echo HealthStatus > /var/www/html/health  # create the health status page
sleep 3600                                # keep the container alive

Deploy the manifest through kubectl apply. Once deployed, I ran a kubectl get pods --watch to keep an eye on the deployment.

You’ll notice that the ready status showed 0/1 for about 60 seconds, meaning that my container was not in a ready status until the /health page became available through the startup script.

As a silly example, what if we modified our liveness probe to look for /health as well? Perhaps we have an application that sometimes stops working but doesn’t crash. Will the application ever start up? Here’s my new probe in the YAML manifest.

livenessProbe:      
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3

After deploying this, let’s run another --watch on the pods. Here we see that the pod keeps restarting, and I am never able to access the /health page because the container restarts before it’s ready.

We can see that the liveness probe is failing if we run a describe on the pod.
