What is Kubernetes?
Kubernetes is a cluster orchestration system. It was designed from the start to be an environment for building distributed applications from containers.
The main task of Kubernetes is to deploy and manage distributed systems.
Its goal is to simplify the organisation and scheduling of your application across a fleet of machines. At a high level, it is the operating system of your cluster.
Basically, it lets you stop worrying about which specific machines in the data centre run your application components.
In addition, it provides common primitives for health-checking and replicating your application across these machines, as well as services that wire the microservices of your application together, so that each layer of your application is decoupled from the others and you can scale, update and maintain them independently of each other.
Although you can implement many of these functions at the application level, such solutions are usually ad hoc and unreliable. It is much better to separate these concerns: the orchestration system is responsible for running the application, and you think only about your application code.
Why is Kubernetes abbreviated to k8s?
Have you ever wondered why Kubernetes is called K8S? Let’s try to find out together!
“Numeronyms” appeared in the late 1980s. There are many different stories about how people started using them.
But all these stories share the same idea: to simplify communication, employees of computer companies started shortening long words by taking the first and last letters and putting the number of omitted letters between them.
For example, the term “i18n” comes from the spelling of “internationalisation”: the letter “i”, then 18 letters, then the letter “n”. The same is true for Kubernetes: “k”, then 8 letters, then the letter “s”.
How is Kubernetes different from Docker Swarm?
Docker Swarm was created by Docker Inc. and developed as an orchestrator for Docker. Kubernetes was originally created by Google, drawing on its internal Borg project, and now the whole open-source world is working on it, whereas Docker Swarm remains the product of a single company.
Both are trying to solve the same problem: orchestrating containers across a large number of hosts. They are both fundamentally similar systems, but there are differences.
Kubernetes is growing much faster and holds by far the largest market share (Kubernetes 51% versus Docker Swarm 11%).
Docker Swarm is less scalable and has no built-in load balancing comparable to what K8S offers. In fact, Docker Swarm provides only one method of load balancing, which relies on opening a fixed port for each service on every server in the cluster.
Unlike Docker Swarm, K8S offers an integrated implementation of both internal (east-west) and external (north-south) load balancing. Internal load balancing is provided by the Service object: all live instances of an application in the cluster can be reached simply by referring to its Service. The Service acts as an internal load balancer that knows at any moment which backends of the application are alive and which are not, and sends requests only to the living replicas.
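As a minimal sketch (the service name, label and ports here are hypothetical), such an internal Service might look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress          # hypothetical service name
spec:
  selector:
    app: wordpress         # routes to all live pods carrying this label
  ports:
    - port: 80             # port other applications in the cluster call
      targetPort: 8080     # port the container actually listens on
  # type defaults to ClusterIP: a stable internal virtual IP
```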
External load balancing in Kubernetes is provided by the NodePort concept (opening a fixed port on every node), as well as by the built-in LoadBalancer primitive, which can automatically create a load balancer at the cloud provider if Kubernetes is running in a cloud environment, e.g. AWS, Google Cloud, MS Azure, OpenStack, Hidora, etc.
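Exposing the same hypothetical application externally is mostly a matter of the Service type; in a supported cloud, this provisions a provider load balancer automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-public
spec:
  type: LoadBalancer       # or NodePort, which opens a fixed port on every node
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 8080
```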
Docker Swarm, by contrast, supports no auto-scaling at all: neither container auto-scaling nor node auto-scaling. Kubernetes supports every kind of auto-scaling: vertical scaling with the Vertical Pod Autoscaler, horizontal auto-scaling of applications with the Horizontal Pod Autoscaler, and auto-scaling of the cluster itself (the number of nodes) with the Cluster Autoscaler. Note that the Cluster Autoscaler only works if you run Kubernetes at a cloud provider.
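A minimal Horizontal Pod Autoscaler sketch, targeting a hypothetical Deployment and scaling on CPU utilisation:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress
spec:
  scaleTargetRef:          # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress        # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above ~70% average CPU
```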
Docker Swarm also has difficulties with monitoring: you can only monitor containers using proprietary tools. Persistent storage for stateful applications is not natively supported by Docker Swarm either; you have to work with a networked storage system, which is not suitable for all types of workload.
Docker Swarm only works with Docker containers, whereas K8S can work with many container runtimes, such as Docker, rkt, containerd and others. This is very important because it removes any dependency on specific Docker features (some of which are only available in Docker EE).
Why do you need container orchestration?
Deploy applications on servers without worrying about specific servers.
You need a container orchestrator when you have a microservice application composed of multiple services, which are themselves composed of containers, and there is the other part of the picture: the servers on which that application can run.
The container orchestrator performs many tasks. The first is the initial placement of your application’s containers on the required number of servers, taking into account the load on each server, so that a container does not end up on an overloaded machine, and also considering the memory and CPU requirements of each particular container.
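Those placement decisions are driven by the resource requests you declare per container. A minimal sketch with a hypothetical pod and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api                  # hypothetical pod name
spec:
  containers:
    - name: api
      image: example/api:1.0   # hypothetical image
      resources:
        requests:              # what the scheduler reserves for placement
          cpu: 250m
          memory: 256Mi
        limits:                # hard caps enforced at runtime
          cpu: 500m
          memory: 512Mi
```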
Scaling the application horizontally up and down.
There is another scenario: an application has to be scaled horizontally, i.e. more containers of the same type have to be added.
The container orchestrator takes care of scaling applications up or down, taking into account the resources consumed on each target server as well as the resource requirements of each application.
In this case, the orchestrator can support the so-called affinity and anti-affinity principles. The latter ensures that, for example, all containers of the same type are guaranteed to run on different physical servers, as in the sketch below.
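A sketch of anti-affinity on a hypothetical Deployment; the topologyKey tells the scheduler to keep replicas on different nodes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: wordpress              # keep pods with this label apart
              topologyKey: kubernetes.io/hostname   # one replica per node
      containers:
        - name: wordpress
          image: wordpress:6                  # hypothetical image tag
```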
Restore the application if the server it was running on fails.
This is called self-healing, or container rescheduling.
If a server fails, the application must be restored correctly. K8S can monitor each application container/pod to see whether that container/pod is alive, and if it is not, Kubernetes restarts it.
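Liveness is checked with a probe declared on the container; if the probe keeps failing, Kubernetes restarts the container. A minimal sketch with a hypothetical image and health endpoint:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: example/web:1.0      # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz          # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 10   # give the app time to start
        periodSeconds: 5          # probe every 5 seconds
```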
In fact, in K8S, this function is called “maintaining the right number of replicas”.
In the case of Kubernetes, you can say, “I want to have 5 containers of my WordPress”, and K8S will make sure you always have 5 containers.
If there are suddenly fewer than 5, Kubernetes will launch a new container to maintain the count. This is important for keeping the application serving traffic and for capacity planning.
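Declaratively, that promise is just a replica count on a Deployment (a hypothetical WordPress example):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 5                # K8S keeps exactly 5 pods running at all times
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:6   # hypothetical image tag
```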
K8S also monitors the status of the servers on which the application is running, and if the server is dead, it will reschedule all its containers on other servers.
How does K8S compare to OpenStack?
Kubernetes is designed to orchestrate containers, while OpenStack was originally designed to manage virtual machines. In other words, OpenStack is used to manage traditional virtual infrastructure, and Kubernetes is used for containers.
These are two different systems, both open source and developed by their communities. The difference is that Kubernetes draws on Google’s long experience running its internal Borg system, so it became a stable service quickly: even the first version was feature-rich and quite stable.
OpenStack was developed almost from scratch by the community and is very fragmented: the community and about 30 different companies each create their own versions. K8S is more like Apple, and OpenStack is more like Android.
The overhead and time-to-market for running Kubernetes in production is much lower than with OpenStack.
At the same time, OpenStack manages virtual machines (e.g. via KVM), and Kubernetes can run on top of OpenStack; this is a typical scenario for cloud providers.
Is Kubernetes only for stateless applications?
Kubernetes is not only suitable for stateless applications. Typical stateless applications are web applications that execute a function, serve HTTP pages and do not store their state themselves.
It was originally designed for stateless applications, and support for stateful applications was at first very limited and unreliable.
Today, Kubernetes supports the concepts of the persistent volume and the persistent volume claim. It also supports different types of volumes: block storage that can be mounted on a pod in exclusive mode, as well as file storage that can be mounted on multiple pods simultaneously, for example over the NFS protocol.
Therefore, you can safely place persistent databases or message queues in Kubernetes.
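A minimal sketch of a claim plus a database pod that mounts it (the names, image tag and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes:
    - ReadWriteOnce          # block storage: one node mounts it at a time
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
    - name: postgres
      image: postgres:16     # hypothetical image tag
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: db-data   # data survives pod restarts and rescheduling
```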
Can I use it for a hybrid cloud deployment? Multi-Cloud?
Initially, K8S could only be deployed in a single data centre, and hybrid scenarios were not possible.
Later, Kubernetes Federation was developed, which made it possible to organise a hybrid scenario: multiple Kubernetes clusters in different data centres, controlled from a single control plane.
In fact, one of the clusters becomes the host cluster of the federation, and the hybrid cloud control plane is installed there. With it, you can manage deployments across multiple clusters that are far apart from each other.
The same applies to the multi-cloud scenario: say you have one cluster on-premises, one on Amazon and one on Google Cloud; all of them will be supported.
With multiple cloud providers, there is no need to adapt the application to each remote cluster running at Google or Amazon, because Kubernetes correctly interprets the cloud-specific annotations on its resources for each provider.
The Kubernetes Federation v1 project was in development for a long time and reached a dead end; Federation v2, which approaches the hybrid cloud in a fundamentally different way, is now under development.
Do I need to manage an external load balancer to run my applications in Kubernetes?
In the case of Docker Swarm, you had to manage an external load balancer yourself to balance the load.
K8S includes automatic load balancing based on the Service concept and, in addition, an Ingress controller that supports load balancing by DNS name and by path.
Many options are supported, and there is out-of-the-box integration with cloud load balancers if you are running on Amazon, Google Cloud or OpenStack.
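A minimal Ingress sketch routing by host name and path (the hostname and backend service are hypothetical, and an Ingress controller such as NGINX must be installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
spec:
  rules:
    - host: blog.example.com       # hypothetical DNS name
      http:
        paths:
          - path: /                # route everything under / to the service
            pathType: Prefix
            backend:
              service:
                name: wordpress    # hypothetical Service name
                port:
                  number: 80
```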
What is the difference between a pod and a container?
The problem is that people who come from the Docker world only work with containers.
They try to transfer the knowledge gained from using Docker Swarm and containers to Kubernetes, but it does not work that way: in Kubernetes, the unit of control is the pod, not the container.
A pod is a group of one or more containers that together form one component of an application and are deployed and managed as a unit. Kubernetes manages pods, scales them and monitors their status. An application in Kubernetes is scaled by the number of pods, not containers.
A pod usually contains one container, but there may be several, and this set of containers is fixed. Containers do not scale within a pod, which must be taken into account when designing applications.
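A sketch of a two-container pod: a main application plus a log-shipping sidecar (both images are hypothetical). The two containers share the pod’s network namespace and can share volumes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web                        # the main application container
      image: example/web:1.0           # hypothetical image
    - name: log-shipper                # sidecar that ships the app's logs
      image: example/log-shipper:1.0   # hypothetical image
  # scaling this application means adding pods, not containers within the pod
```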
How does Kubernetes simplify container deployment?
There are several advantages that Kubernetes offers from the outset:
- In Kubernetes, service discovery is built in: when a new service appears, it is given a unique DNS name through which other services can reach it (the service data itself is stored in etcd).
- Second, Kubernetes offers flexible deployment manifests that support different deployment strategies for containerised applications, including canary deployments for A/B testing. There are also built-in health checks, and monitoring is typically built on Prometheus.
- Kubernetes uses a rolling update strategy to deploy new pod versions. A rolling update avoids application downtime by keeping some instances running at all times: the pods of the new deployment version are activated first, and only when they are ready to handle traffic are the old ones shut down, as in the sketch below.
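A sketch of that strategy on a hypothetical Deployment; bumping the image tag triggers the rolling update:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one pod down at any moment
      maxSurge: 1          # at most one extra pod above the desired count
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:6.1   # changing this tag rolls out the new version
```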