k8s – Deployment Strategy

First, let us address the underlying question: what can Kubernetes offer compared to a basic development workflow? With Kubernetes in place, a developer can write some code, ship it, and have it running. It is also essential that the development environment be as similar as possible to production, because maintaining two different environments inevitably introduces bugs. In this blog, “Deploy code faster with Kubernetes”, we will walk you through a Kubernetes quick-start workflow built around:

  • Kubernetes
  • Docker
  • Envoy/Ambassador

What is Kubernetes?

Kubernetes is an open-source container management tool: an orchestration tool that combines container deployment, scaling and descaling of containers, and load balancing. Note that Kubernetes is not a containerization platform; it is a multi-container management solution.

Why Use Kubernetes?

Businesses today may be using Docker, rkt, or plain Linux containers to containerize their applications at massive scale, running tens or hundreds of containers to load-balance traffic and ensure high availability. As container management and orchestration tools, both Docker Swarm and Kubernetes are very popular. Although Docker Swarm has the appeal of running directly on top of Docker, when we have to choose between the two, Kubernetes is the undisputed market leader. This is partly because it is Google’s brainchild and partly because of its better functionality. Furthermore, one feature Kubernetes has that Docker Swarm is missing is the auto-scaling of containers.

Kubernetes Features

Now that you know about Kubernetes, let’s look at its features:

1. Automatic Bin packing

Kubernetes automatically packages your application and schedules containers based on their resource requirements and the capacity available, without sacrificing availability. To maximize utilization and avoid wasting resources, Kubernetes balances critical and best-effort workloads.

2. Service Discovery & Load balancing

With Kubernetes, there is no need to stress over networking and communication: Kubernetes automatically assigns IP addresses to containers and gives a set of containers a single DNS name, which can load-balance traffic inside the cluster.
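As a minimal sketch (all names here are illustrative, not from the original post), a Service gives a set of pods a stable DNS name and load-balances across them:

apiVersion: v1
kind: Service
metadata:
  name: web                # resolvable in-cluster as web.default.svc.cluster.local
spec:
  selector:
    app: web               # routes to pods labeled app=web
  ports:
  - port: 80               # port exposed by the Service
    targetPort: 8080       # port the containers actually listen on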

3. Storage Orchestration

Kubernetes allows you to mount the storage system of your choice. You can either go for local storage, public cloud storage such as GCP or AWS, or a shared network storage system like NFS, iSCSI, etc.
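For example (a sketch; the claim name and size are hypothetical), a pod requests storage abstractly through a PersistentVolumeClaim, and the cluster binds it to whatever backend is configured:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim         # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi         # backing volume may be local disk, a cloud volume, NFS, etc.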

4. Self-Healing

Kubernetes automatically restarts containers when they fail and kills containers that do not respond to health checks. When a node dies, it is replaced and the failed containers are rescheduled onto other available nodes.

5. Secret & Configuration Management

With Kubernetes you can deploy and update secrets. You can also configure applications without exposing secrets in your stack configuration and without rebuilding your image.
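For instance (a hedged sketch with hypothetical names), a Secret is created once and injected as an environment variable, so the credential never lives in the image or the stack configuration:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials     # hypothetical secret name
type: Opaque
stringData:
  password: s3cr3t         # illustrative value only

A container can then reference it without exposing it:

env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: db-credentials
      key: password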

6. Batch Execution

Kubernetes manages batch and CI workloads alongside long-running services and, if desired, replaces containers that fail.

7. Horizontal Scaling

Kubernetes allows you to scale containers up and down with a single CLI command. You can also do scaling from the Kubernetes Dashboard.
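That single command is kubectl scale, e.g. kubectl scale deployment web --replicas=5 for a hypothetical deployment named web. Scaling can also be automated with a HorizontalPodAutoscaler; a sketch with illustrative names and thresholds:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                          # the deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add replicas when average CPU exceeds 80%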

8. Automatic Rollbacks & Roll-outs

Kubernetes progressively rolls out changes and updates to your application or its configuration, ensuring that not all instances are updated at the same time. If something goes wrong, Kubernetes rolls the change back for you.

These were some of the notable features of Kubernetes. Now let us move on to Kubernetes best practices that you can easily apply to your clusters. Though Kubernetes can be complex to manage and configure, it seamlessly automates container lifecycle management. In the next few sections, we will show you ten Kubernetes best practices and how easily you can apply them to your cluster.

Kubernetes Best Practices

1. Disallow root user

All processes in a container run as the root user (uid 0), by default. To prevent the potential compromise of container hosts, it is important to specify a non-root and least-privileged user ID when building the container image and make sure that all application containers run as a non-root user.
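A minimal sketch of enforcing this in a pod spec (the image name and uid are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  securityContext:
    runAsNonRoot: true       # the kubelet refuses to start containers running as uid 0
    runAsUser: 1000          # explicit non-root uid
  containers:
  - name: app
    image: example/app:1.0   # hypothetical image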

2. Disallow privileged containers

Privileged containers have unrestricted host access; as in other containers, the host’s uid 0 is mapped to the container’s uid 0, but here without the usual restrictions. If settings are not defined properly, a process can gain privileges from its parent. Application containers should not be allowed to execute in privileged mode, and privilege escalation should not be allowed.
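A per-container securityContext fragment that encodes both rules (a sketch, not a complete pod spec):

containers:
- name: app
  image: example/app:1.0              # hypothetical image
  securityContext:
    privileged: false                 # no unrestricted host access
    allowPrivilegeEscalation: false   # e.g. setuid binaries cannot gain privileges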

3. Disallow adding new capabilities

Linux capabilities provide fine-grained permissions beyond those of an ordinary process. Although you can add capabilities to a container to allow specific behaviors approaching kernel-level access, it is advisable not to. Cluster policy should ensure that application pods cannot add new capabilities at runtime.
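A conservative fragment (illustrative): drop everything, then re-add only what the application demonstrably needs:

securityContext:
  capabilities:
    drop: ["ALL"]                 # start from zero capabilities
    # add: ["NET_BIND_SERVICE"]   # hypothetical: re-add selectively only if required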

4. Disallow changes to kernel parameters

The sysctl interface allows kernel parameters to be modified at runtime, and in a Kubernetes pod these parameters can be specified as part of the pod’s configuration. Kernel parameter modifications can be used for exploits, so setting new parameters should be restricted.
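This is the pod-level setting such a policy restricts (the sysctl and value are illustrative; unsafe sysctls must additionally be enabled on the kubelet):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: net.core.somaxconn   # kernel parameter requested by the pod
      value: "1024"
  containers:
  - name: app
    image: example/app:1.0       # hypothetical image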

5. Disallow use of bind mounts (hostPath volumes)

Kubernetes pods can use host bind mounts (i.e., directories and volumes mounted from the host) in containers. Using host resources can enable access to shared data or allow privilege escalation, and it couples application pods to a specific host. The usage of bind mounts should therefore not be allowed for application pods.

6. Disallow the docker socket bind mount

Since the docker socket bind mount gives access to the Docker daemon on the node, it can be used to escalate privileges and to manage containers outside of Kubernetes. Mounting the docker socket should therefore not be allowed for application workloads.
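As a deliberately bad sketch, a single hostPath volume like the following violates both of the last two rules at once; this is the pattern to reject by policy or in review:

apiVersion: v1
kind: Pod
metadata:
  name: bad-idea
spec:
  containers:
  - name: app
    image: example/app:1.0             # hypothetical image
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock       # grants control of the node's Docker daemon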

7. Disallow host networking and host ports

In Kubernetes, pods that use the host’s network interface share the host’s networking stack, which allows potential snooping of network traffic across pods. Host networking and host ports should therefore not be allowed.

8. Keep the root filesystem read-only

A container should only need to write to mounted volumes, which can persist state even if the container exits. Running with a read-only root filesystem is a good way to put an immutable infrastructure strategy in place; it also prevents malicious binaries from being written into the running container.
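The corresponding fragment, with a writable scratch volume for the paths that genuinely need writes (an illustrative pattern):

containers:
- name: app
  image: example/app:1.0          # hypothetical image
  securityContext:
    readOnlyRootFilesystem: true
  volumeMounts:
  - name: tmp
    mountPath: /tmp               # writable scratch space
volumes:
- name: tmp
  emptyDir: {}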

9. Require pod resource requests and limits

Application workloads share cluster resources, so it is important to manage the resources assigned to each pod. Configuring requests and limits per pod, covering both CPU and memory, is good practice.
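A typical per-container fragment (the numbers are illustrative; size them from observed usage):

containers:
- name: app
  image: example/app:1.0   # hypothetical image
  resources:
    requests:
      cpu: 100m            # guaranteed share, used for scheduling
      memory: 128Mi
    limits:
      cpu: 500m            # throttled above this
      memory: 256Mi        # OOM-killed above this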

10. Require liveness probe and readiness probe

Liveness and readiness probes help manage a pod’s life cycle during deployments, restarts, and upgrades. If these checks are not properly configured, pods may be terminated while initializing or may start receiving user requests before they are ready. 

Conclusion

One thing developers love about open-source technologies like Kubernetes is the potential for fast-paced innovation. It wasn’t until a few years ago that developers and IT operations folks had to readjust their practices to adapt to containers, and now they have to adopt container orchestration as well. Enterprises hoping to adopt Kubernetes need to hire professionals who can code, as well as manage operations and understand application architecture, storage, and data workflows.


k8s – Liveness and Readiness Probes


Liveness vs Readiness Probes

In a pod spec, the kubelet is given separate liveness and readiness checks; for example:
readinessProbe:
  httpGet:
    path: /health/ready
    port: 3000
livenessProbe:
  httpGet:
    path: /health/alive
    port: 3000

Elixir

Forwarding

defmodule PlugForward do
  use Plug.Router

  plug(:match)
  plug(:dispatch)

  forward(
    "/health/live",
    to: Liveness
  )
end

Mounting

defmodule PlainPlug do
  use Plug.Router

  plug(Liveness)   # runs before the router matches, so /health/live is intercepted
  plug(:match)
  plug(:dispatch)

  # regular paths defined here
end

Configuration

config :healthchex,
  liveness_path: "/health/live",
  liveness_response: "OK"

defmodule Healthchex.Probes.Liveness do
  import Plug.Conn

  @default_path Application.get_env(:healthchex, :liveness_path, "/health/live")
  @default_resp Application.get_env(:healthchex, :liveness_response, "OK")

  def init(opts) do
    %{
      path: Keyword.get(opts, :path, @default_path),
      resp: Keyword.get(opts, :resp, @default_resp)
    }
  end

  def call(conn, _opts), do: conn
end

Response

defmodule Healthchex.Probes.Liveness do
  # …

  def call(%Plug.Conn{request_path: path} = conn, %{path: path, resp: resp}) do
    conn
    |> send_resp(200, resp)
    |> halt()
  end

  def call(conn, _opts), do: conn
end


The full source is available at KamilLelonek/healthchex, a set of Plugs to be used for Kubernetes health-checks.

 


Liveness and Readiness Probes – The Theory

On each node of a Kubernetes cluster there is a kubelet running which manages the pods on that particular node. It’s responsible for getting images pulled down to the node, reporting the node’s health, and restarting failed containers. But how does the kubelet know if there is a failed container?

Well, it can use the notion of probes to check on the status of a container. Specifically, a liveness probe.

Liveness probes indicate whether a container is running: has the application within the container started, and is it still running? Even if you haven’t configured liveness probes for your containers, you’ve probably seen them in action: when a container gets restarted, it’s generally because a liveness probe is failing. This can happen if your container couldn’t start up, or if the application within the container crashed; the kubelet restarts the container because the liveness probe is failing in those circumstances. In some circumstances, though, the application within the container is not working but hasn’t crashed. In that case, the container won’t restart unless you provide additional information in the form of a liveness probe.

A readiness probe indicates if the application running inside the container is “ready” to serve requests. As an example, assume you have an application that starts but needs to check on other services like a backend database before finishing its configuration. Or an application that needs to download some data before it’s ready to handle requests. A readiness probe tells the Kubelet that the application can now perform its function and that the Kubelet can start sending it traffic.

There are three different ways these probes can be checked:

  • ExecAction: execute a command within the container
  • TCPSocketAction: a TCP check against the container’s IP address on a specified port
  • HTTPGetAction: an HTTP GET request against the container’s IP address on a specified port and path
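The manifests later in this post use httpGet probes; for completeness, here is a hedged sketch of the other two action types (image, command, ports, and timings are all illustrative):

containers:
- name: app
  image: example/app:1.0                 # hypothetical image
  livenessProbe:
    exec:
      command: ["cat", "/tmp/healthy"]   # ExecAction: non-zero exit code means failure
    initialDelaySeconds: 5
    periodSeconds: 5
  readinessProbe:
    tcpSocket:
      port: 5432                         # TCPSocketAction: passes if the port accepts a connection
    initialDelaySeconds: 5
    periodSeconds: 10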

Let’s look at the two probes in the context of a container starting up. The diagram below shows several states of the same container over time. We have a view into the containers to see what’s going on with the application in relation to the probes.

On the left side, the pod has just been deployed. A liveness probe performed as a TCPSocketAction found that the pod is “alive”, even though the application is still doing work (loading data, etc.) and isn’t ready yet. As time moves on, the application finishes its startup routine and is now “ready” to serve incoming traffic.

Let’s take a look at this from a different perspective. Assume we already have a deployment in our cluster consisting of a single replica, displayed on the right side behind our service. It’s likely that we’ll need to scale the app or replace it with another version. Now that we know our app isn’t ready to handle traffic right away after being started, we can wait to have our service add the new pod to its list of endpoints until the application is “ready”. This is an important thing to consider if your apps aren’t ready as soon as the container starts up: otherwise, a request could be sent to the container before it’s able to handle it.

Liveness and Readiness Probes – In Action

First, we’ll look at what happens with a readiness check. For this example, I’ve got a very simple Apache container that displays a pretty elaborate website. I’ve created a YAML manifest to deploy the pod, service, and ingress rule.

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: liveness
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: theithollow/hollowapp-blog:liveness
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
    readinessProbe:
      httpGet:
        path: /health
        port: 80
      initialDelaySeconds: 3
      periodSeconds: 3
---
apiVersion: v1
kind: Service
metadata:
  name: liveness
spec:
  selector:
    app: liveness
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: liveness-ingress
  namespace: default
spec:
  rules:
  - host: liveness.theithollowlab.com
    http:
      paths:
      - backend:
          serviceName: liveness
          servicePort: 80

This manifest includes two probes:

  1. Liveness check doing an HTTP request against “/”
  2. Readiness check doing an HTTP request against /health

livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3
readinessProbe:
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3

My container uses a script to start the HTTP daemon right away and then wait 60 seconds before creating a /health page. This simulates the application doing some startup work before it is ready for consumption.

And here is my container script.

/usr/sbin/httpd > /dev/null 2>&1 &        # start the HTTP daemon in the background
sleep 60                                  # wait 60 seconds
echo HealthStatus > /var/www/html/health  # create the /health status page
sleep 3600                                # keep the container running

Deploy the manifest through kubectl apply. Once deployed, I ran kubectl get pods --watch to keep an eye on the deployment. Here’s what it looked like.

You’ll notice that the READY column showed 0/1 for about 60 seconds, meaning that my container was not in a ready status until the /health page became available through the startup script.

As a silly example, what if we modified our liveness probe to look for /health? Perhaps we have an application that sometimes stops working but doesn’t crash. Will the application ever start up? Here’s my new probe in the YAML manifest.

livenessProbe:      
  httpGet:
    path: /health
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3

After deploying this, let’s run another --watch on the pods. Here we see that the pod keeps restarting; I am never able to access the /health page because the container restarts before it’s ready.

We can see that the liveness probe is failing if we run kubectl describe on the pod.

k8s – Concepts & Components (from kubernetes.io)

Master Components

Master components provide the cluster’s control plane. They make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events (such as starting up a new pod when a replication controller’s ‘replicas’ field is unsatisfied).

Master components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all master components on the same machine and do not run user containers on this machine. See Building High-Availability Clusters for an example multi-master-VM setup.

kube-apiserver

Component on the master that exposes the Kubernetes API. It is the front-end for the Kubernetes control plane.

It is designed to scale horizontally – that is, it scales by deploying more instances. See Building High-Availability Clusters.

etcd

Consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data.

Always have a backup plan for etcd’s data for your Kubernetes cluster. For in-depth information on etcd, see etcd documentation.

kube-scheduler

Component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.

Factors taken into account for scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference and deadlines.

kube-controller-manager

Component on the master that runs controllers.

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

These controllers include:

  • Node Controller: Responsible for noticing and responding when nodes go down.
  • Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
  • Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods).
  • Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.

cloud-controller-manager

cloud-controller-manager runs controllers that interact with the underlying cloud providers. The cloud-controller-manager binary is an alpha feature introduced in Kubernetes release 1.6.

cloud-controller-manager runs cloud-provider-specific controller loops only. You must disable these controller loops in the kube-controller-manager. You can disable the controller loops by setting the --cloud-provider flag to external when starting the kube-controller-manager.

cloud-controller-manager allows the cloud vendor’s code and the Kubernetes core to evolve independently of each other. In prior releases, the core Kubernetes code was dependent upon cloud-provider-specific code for functionality. In future releases, code specific to cloud vendors should be maintained by the cloud vendors themselves and linked to cloud-controller-manager while running Kubernetes.

The following controllers have cloud provider dependencies:

  • Node Controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
  • Route Controller: For setting up routes in the underlying cloud infrastructure
  • Service Controller: For creating, updating and deleting cloud provider load balancers
  • Volume Controller: For creating, attaching, and mounting volumes, and interacting with the cloud provider to orchestrate volumes

Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

kubelet

An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.

The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.

kube-proxy

kube-proxy enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.

Container Runtime

The container runtime is the software that is responsible for running containers. Kubernetes supports two runtimes: Docker and rkt.

Addons

Addons are pods and services that implement cluster features. The pods may be managed by Deployments, ReplicationControllers, and so on. Namespaced addon objects are created in the kube-system namespace.

Selected addons are described below; for an extended list of available addons, please see Addons.

DNS

While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.

Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.

Containers started by Kubernetes automatically include this DNS server in their DNS searches.

Web UI (Dashboard)

Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.

Container Resource Monitoring

Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.

Cluster-level Logging

A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.

K8s – Installation & Configuration

Hello guys,

I know it is quite difficult to install Kubernetes in a proxy-restricted environment, so I decided to take the pain, install Kubernetes in my own proxy-restricted environment, and share my steps.

For both master and worker nodes:

vi .bashrc

# Set proxy
function setproxy() {
    export {http,https,ftp}_proxy="http://<proxy_ip>:<port>"
    export no_proxy="localhost,10.96.0.0/12,*.<company_domain_name>,<internal_ip>"
}

# Unset proxy
function unsetproxy() {
    unset {http,https,ftp}_proxy
}

# Check proxy
function checkproxy() {
    env | grep proxy
}

vi /etc/yum.conf

# yum honors a single proxy setting
proxy=http://<proxy_ip>:<port>

vi /etc/hosts

<ip1-master>  kubernetes-1

<ip2-worker>  kubernetes-2

<ip3-worker>  kubernetes-3

 

mkdir -p /etc/systemd/system/docker.service.d/

 

vi /etc/systemd/system/docker.service.d/http-proxy.conf

 

[Service]

Environment=HTTP_PROXY=http://<proxy_ip>:<port>/

Environment=HTTPS_PROXY=https://<proxy_ip>:<port>/

Environment=NO_PROXY=<ip1-master>,<ip2-worker>,<ip3-worker>
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

 

setenforce 0

 

yum install -y kubelet kubeadm kubectl

systemctl enable kubelet && systemctl start kubelet

 

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

 

systemctl daemon-reload

systemctl restart kubelet

 

export no_proxy="localhost,10.96.0.0/12,*.<company domain>,<ip1-master>,<ip2-worker>,<ip3-worker>"

 

export KUBECONFIG=/etc/kubernetes/admin.conf

 

# Calico is recommended for amd64; Flannel also works but needs the pod network CIDR to be 10.244.0.0/16.
# Run this on the master after kubeadm init:

kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml

 

Master Node:

kubeadm init

 

Worker Node:

kubeadm join <master ip>:6443 --token <token received from master node> --discovery-token-ca-cert-hash sha256:<master-hash>

Master Node:

Check on the master:

kubectl get nodes

