Kubernetes Overview
Kubernetes has emerged as a leading container orchestration platform, widely adopted by organizations for deploying and managing containerized applications, and it has become an integral part of the DevOps ecosystem. Initially developed by Google and released as open source in 2014, Kubernetes builds on Google's roughly fifteen years of experience running containerized workloads, along with contributions from the open-source community. It draws inspiration from Google's internal cluster management system, Borg.
Architecture of Kubernetes
A Kubernetes cluster consists of the control plane and a set of worker machines, called nodes.
The control plane is responsible for managing the overall state of the cluster. Its components include:
The API Server acts as the primary gateway to the Kubernetes cluster and provides a set of APIs that users and other components can access.
etcd is a consistent, highly available, distributed key-value store that Kubernetes uses to store all cluster data.
The Scheduler is responsible for determining where to place new pods within the cluster.
The Controller Manager runs controllers that continuously work to bring the current state of the cluster in line with the desired state.
The following components operate on the worker nodes:
Kubelet, which runs on each worker node, is responsible for launching containers.
Kube-proxy maintains the network rules on each node that make Kubernetes Services reachable inside the cluster.
The Container runtime is responsible for executing the containers.
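On a running cluster, most of these components can be observed directly: the control plane pieces run as pods in the kube-system namespace, and the kubelet runs as a system service on every node. A minimal sketch, assuming kubectl is already configured against the cluster:
# List the nodes and their roles (control-plane vs. worker)
kubectl get nodes -o wide
# The API server, etcd, scheduler, and controller manager run as pods here
kubectl get pods -n kube-system
# The kubelet itself runs as a systemd service on each node
systemctl status kubelet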
Installing Kubeadm: How to Set Up a Kubernetes Cluster
Kubernetes is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications. Kubeadm is a tool that makes it easy to set up a Kubernetes cluster.
What is required:
Swap disabled on both machines (see the snippet after the prerequisites below)
Docker installed and running
Port 6443 open in the inbound security group of the master node
Before proceeding, make sure the following prerequisites are met:
Two machines running Ubuntu 22.04 LTS, one for the master node and the other for the worker node.
The master node needs at least 2 vCPUs and 2 GB of memory for kubeadm to initialize the control plane, so an instance type of t2.medium is recommended.
The worker node does not require as much vCPU and memory, so an instance type of t2.micro is sufficient.
Sudo privileges are required on both machines.
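Before installing anything, disable swap on both machines; kubeadm's pre-flight checks fail when swap is enabled. A minimal sketch (the sed line simply comments out any swap entry in /etc/fstab so the change survives a reboot):
# Run on both machines: turn swap off now
sudo swapoff -a
# Keep swap off after reboot by commenting out swap entries in /etc/fstab
sudo sed -i '/swap/ s/^/#/' /etc/fstab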
Step 1: Install the Docker engine, add the Kubernetes repository, and install the Kubernetes components (on both servers)
Run the following commands to update the system and install Docker:
sudo apt update -y
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
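Before moving on, it is worth confirming that Docker is up; a quick, optional check (the hello-world image pull assumes outbound internet access):
# Confirm the Docker service is active
sudo systemctl status docker --no-pager
# Optionally run a throwaway container to confirm the runtime works end to end
sudo docker run --rm hello-world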
Next, add the Kubernetes apt repository and install kubeadm, kubectl, and kubelet, pinned to matching versions:
curl -fsSL "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-archive-keyring.gpg
echo 'deb https://packages.cloud.google.com/apt kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update -y
sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
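Optionally, verify the installed versions and hold the packages so a routine apt upgrade does not bump them out of step with the cluster; a quick sketch:
# Confirm the tools are installed and report the expected version
kubeadm version
kubectl version --client
# Prevent accidental upgrades of the Kubernetes packages
sudo apt-mark hold kubeadm kubectl kubelet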
## To initialize the cluster and configure kubectl access, execute the commands below on the master node
--- Master Node ------------
sudo su
kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Print the join command (with a fresh token) for the worker node
kubeadm token create --print-join-command
# Deploy the Weave Net CNI plugin so that pods can communicate across nodes
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
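At this point the control plane should come up on its own; a quick, optional check (the master reports NotReady until the Weave pods are running):
# The master node should move from NotReady to Ready once the CNI is up
kubectl get nodes
# Watch the control plane and Weave pods start
kubectl get pods -n kube-system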
Configure the Worker Node
The final step is to join the worker node to the cluster. Run the following commands on it:
sudo su
# Reset any previous kubeadm state so the join pre-flight checks pass cleanly
kubeadm reset
-----> Paste the join command (printed on the master node) on the worker node and append `--v=5` at the end for verbose output
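For illustration only, the pasted command looks roughly like the line below; the IP, token, and hash placeholders must be replaced with the values from your own master node's output. The new node can then be confirmed from the master:
# On the worker (placeholders, not real values)
kubeadm join <MASTER_PRIVATE_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --v=5
# Back on the master: the worker should appear and turn Ready shortly
kubectl get nodes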
Thanks for reading!