Kubernetes Cluster Architecture Basics
Kubernetes continues to gain popularity among tech companies year after year. In short, a Kubernetes cluster is a set of machines for automatically deploying, running, scaling, and administering containerized applications. This open-source solution was designed to ease infrastructure management and thereby streamline development and application delivery in general. Let’s take a deeper dive into the basics of Kubernetes cluster architecture to get a clearer picture of how it works and the potential benefits of using it.
Kubernetes Cluster Architecture
Every Kubernetes cluster consists of at least one control plane node (historically called the master) and a number of worker nodes, which together run the cluster orchestration system. Control plane and worker nodes can be virtual or physical computers or cloud instances.
The control plane supervises the worker nodes and runs the Kubernetes API server, the scheduler, and the major controllers. All the administrative aspects, such as communication between cluster components, cluster state persistence, and workload scheduling, are managed by the control plane. In production environments, the control plane typically spans several machines and the cluster runs multiple nodes, ensuring stability and high availability.
Administrators interact with the cluster through Kubernetes API requests, which are validated and handled by the Kubernetes API server. The API server is also the hub for the cluster’s internal processes: it tracks the state of all components and mediates the exchange of data between them.
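Everything the API server manages is expressed as a declarative object. As a minimal sketch, a pod (the smallest deployable unit) can be described like this and submitted to the API server, for example with `kubectl apply` (the names here are illustrative, not from the original article):

```yaml
# Minimal pod definition; "web-pod" and the image tag are illustrative choices.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80   # port the container listens on
```

Running `kubectl apply -f pod.yaml` is itself just a client issuing a request to the API server, which then stores the object and hands it to the rest of the control plane.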
The scheduler watches the Kubernetes API server for newly created pods that have no assigned node and places them onto worker nodes, taking into account resource requirements, data locality, hardware and software policies, affinity specifications, deadlines, etc.
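As a sketch of the inputs the scheduler weighs, a pod spec can declare resource requests and affinity rules; the names and node labels below are illustrative assumptions, not part of the original article:

```yaml
# Pod with scheduling hints; "data-worker" and the "disktype" label are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: data-worker
spec:
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sleep", "3600"]
      resources:
        requests:
          cpu: "500m"        # scheduler only considers nodes with this much free CPU
          memory: "256Mi"    # and this much free memory
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype      # assumed node label
                operator: In
                values: ["ssd"]    # only schedule onto nodes labeled disktype=ssd
```

Here the resource requests narrow the candidate nodes by capacity, while the affinity rule narrows them by label, which is exactly the kind of filtering described above.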
A key-value store (etcd) is part of the control plane and is responsible for storing all Kubernetes cluster data.
The Controller Manager monitors the current state of the cluster through the Kubernetes API server, compares it to the desired state, and reconciles the two. It runs the Node, Job, Endpoints, Service Account, and Token controllers.
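The desired-state model the controllers reconcile against is easiest to see in a Deployment: you declare how many replicas you want, and the control loops create or delete pods until reality matches. A minimal sketch (names and labels are illustrative):

```yaml
# Deployment declaring a desired state of three replicas; "web-deploy" is illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3              # desired state: controllers keep exactly three pods running
  selector:
    matchLabels:
      app: web             # the pods this Deployment manages
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

If a pod crashes or a node disappears, the observed replica count drops below three and the controllers create a replacement; nothing beyond the declared spec is needed.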
The Cloud Controller Manager integrates the cluster with a cloud provider when the Kubernetes cluster is running in the cloud. It runs the controllers that are specific to that cloud service provider, among them the node, route, and service controllers.
The worker nodes’ main agents and components provide the Kubernetes cluster runtime environment.
Each node in the cluster runs the kubelet, which acts as the bridge between the API server and the node: it receives pod specifications from the server and makes sure the containers described in them are running and healthy.
Kube-proxy is an important network component that, like the kubelet, runs on every node. It maintains the network rules for IP translation and routing, allowing network traffic to reach your pods from inside or outside the cluster.
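The routing rules kube-proxy maintains are derived from Service objects. As an illustrative sketch, this Service gives the pods labeled `app: web` a single stable, cluster-internal virtual IP (the names and labels are assumptions for the example):

```yaml
# ClusterIP Service; "web-svc" and the "app: web" selector are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  type: ClusterIP          # internal virtual IP; kube-proxy programs the routing for it
  selector:
    app: web               # traffic is load-balanced across pods carrying this label
  ports:
    - port: 80             # port exposed on the Service's virtual IP
      targetPort: 80       # container port the traffic is forwarded to
```

When pods behind the selector come and go, kube-proxy updates the rules on every node so that the Service address keeps working without clients noticing.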
The container runtime engine starts and stops containers on each worker node. This might be Docker, CRI-O, containerd, or any other runtime that implements the Kubernetes Container Runtime Interface (CRI).
Beyond the nodes themselves, a cluster also depends on:
- Infrastructure to run on: physical or virtual machines; private, public, or hybrid clouds, etc.
- Container Registry for storing the container images that the Kubernetes cluster relies on
- Storage for application data attached to the cluster
Kubernetes Cluster Deployment to the Cloud
Given the fairly complex architecture of a Kubernetes cluster, deploying it, managing it, and keeping it up to date may seem like a hard task that requires a dedicated team. With the Hidora Cloud Platform, you get a Kubernetes Cluster solution available for automatic installation in just a few clicks.
1. Log in to your Hidora Cloud dashboard.
2. Navigate to the Marketplace (under the Clusters category).
3. Find the Kubernetes Cluster and specify your settings in the installation dialog:
- Select a Kubernetes version for your cluster
- Choose K8s Dashboard
- Specify the needed topology: development or production. The development topology includes one control plane and one scalable worker node; the production topology consists of multiple control-plane nodes with API balancers and scalable workers.
- Select the preferable ingress controller for your cluster (NGINX, Traefik, or HAProxy)
- Pick clean or custom cluster for your deployment
- Enable or disable a dedicated NFS Storage with dynamic volume provisioning (GlusterFS with three additional nodes)
- Add the modules you need (they can also be enabled later via add-ons)
- Specify the name of your environment
- Provide an environment alias
- Select a region
4. Click Install; within a few minutes, the Hidora Cloud Platform automatically configures your Kubernetes cluster.
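If the dedicated NFS storage option was enabled during installation, dynamic volume provisioning means applications can request storage with an ordinary PersistentVolumeClaim. A sketch under the assumption that the installation provides an NFS-backed storage class (the class name and claim name below are illustrative; check what your installation actually exposes):

```yaml
# PersistentVolumeClaim against an assumed NFS storage class; names are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany          # NFS-backed volumes can be mounted by many pods at once
  storageClassName: nfs      # assumed class name; use the one your cluster provides
  resources:
    requests:
      storage: 5Gi
```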
Hidora Cloud automation technologies ensure smooth integration and make working in a cloud environment really easy.
Matthieu Robin is the CEO of Hidora, an experienced strategic leader and former system administrator who manually managed and configured more environments than anyone on the planet; after realizing it could all be done in a few clicks, he established Hidora SA. He regularly speaks at conferences and helps companies optimize their business processes using DevOps. Follow him on Twitter.