Kubernetes in a Nutshell

What is Kubernetes?

Kubernetes is a container orchestration system, so we first need to understand what containerization means.

Containerization is defined as a form of operating system virtualization, through which applications are run in isolated user spaces called containers, all using the same shared operating system (OS). A container is essentially a fully packaged and portable computing environment.

Everything an application needs to run — its binaries, libraries, configuration files, and dependencies — is encapsulated and isolated in its container.

The container itself is abstracted away from the host OS, with only limited access to underlying resources — much like a lightweight virtual machine (VM).

As a result, the containerized application can be run on various types of infrastructure — on bare metal, within VMs, and in the cloud — without needing to refactor it for each environment.

So the next questions are: how does containerization work, and what differentiates containerization from virtualization?

Each container is an executable package of software running on top of a host OS. A host may support many containers (tens, hundreds, or even thousands) concurrently, as in the case of a complex microservices architecture that uses numerous containerized application delivery controllers (ADCs). This setup works because all containers run minimal, resource-isolated processes that others cannot access.

There are many differences between containerization and virtualization.

A VM runs on top of a hypervisor, which is specialized hardware, software, or firmware for operating VMs on a host machine, like a server or laptop.

Via the hypervisor, every VM is assigned not only the essential bins/libs but also a virtualized hardware stack, including CPUs, storage, and network adapters.

To run all of that, each VM relies on a full-fledged guest OS. The hypervisor itself may run on the host machine's OS or as a bare-metal application.

There is significant overhead involved, due to all VMs requiring their own guest OSes and virtualized kernels, plus the need for a heavy extra layer (the hypervisor) between them and the host.

So why do we need Kubernetes?

In fact, we first need to explain why we need orchestration in general.

What if your application relies on other containers, such as a database, messaging/middleware services, or other back-end services? What if the number of users increases and you need to scale your application? You would also like to scale down when the load decreases.

To enable these functionalities you need an underlying platform with a set of resources. The platform needs to orchestrate the connectivity between the containers and automatically scale up or down based on the load.
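As a concrete illustration of this kind of load-based scaling, Kubernetes lets you declare the scaling policy up front. The sketch below is a minimal, hypothetical HorizontalPodAutoscaler manifest; the Deployment name `my-app` and the numbers are assumptions, not a prescription:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # assumed name of an existing Deployment
  minReplicas: 2          # scale down to 2 pods when load is low
  maxReplicas: 10         # scale up to 10 pods under heavy load
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add/remove pods to hold ~70% CPU
```

With a policy like this in place, the platform itself watches the load and adds or removes replicas, instead of you doing it by hand.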

Also, running a server cluster as a set of Docker containers on a single Docker host leaves you vulnerable to a single point of failure.

Now that you know why and when you need container orchestration, let me explain Kubernetes.

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.

Out of the box, Kubernetes provides:

  • Service discovery and load balancing
  • Storage orchestration
  • Automated rollouts and rollbacks
  • Automatic bin packing
  • Self-healing
  • Secret and configuration management
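To make "declarative configuration" and self-healing concrete, here is a small, hypothetical Deployment manifest (the app name and image are assumptions). You declare the desired state, three replicas of a given image, and Kubernetes continuously works to keep the cluster in that state: restarting failed pods and rolling out image changes gradually:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate       # replace pods gradually on updates
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # example image; swap in your own
          ports:
            - containerPort: 80
```

If a pod in this Deployment crashes or its node dies, Kubernetes notices the drift from the declared three replicas and starts a replacement automatically.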

What Kubernetes does not provide:

  • Does not limit the types of applications supported
  • Does not deploy source code and does not build your application
  • Does not provide application-level services
  • Does not provide nor adopt any comprehensive machine configuration, maintenance, management, or self-healing systems

On the other hand, Kubernetes does provide an auto-scaling function, and it can overcome constraints of Docker and the Docker API, whereas Docker Swarm is limited to the Docker API's capabilities.

A container orchestrator is essentially an administrator in charge of operating a fleet of containerized applications. If a container needs to be restarted or acquire more resources, the orchestrator takes care of it for you.
That's a fairly broad outline of how most container orchestrators work.

A container orchestration system is a way to manage the lifecycle of containerized applications across an entire fleet. It's a sort of meta-process that grants the ability to automate the deployment and scaling of several containers at once. Several containers running the same application are grouped together. These containers act as replicas and serve to load balance incoming requests. A container orchestrator, then, supervises these groups, ensuring that they are operating correctly.
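This supervising behavior is typically implemented as a reconciliation loop: compare the desired state with the observed state and act on the difference. Here is a deliberately tiny Python sketch of that idea; it is not real Kubernetes code, and all the names are made up for illustration:

```python
def reconcile(desired_replicas, running_containers):
    """Return the actions a toy orchestrator would take to make the
    observed state match the desired state."""
    actions = []

    # Self-healing: restart anything that has crashed.
    for c in running_containers:
        if not c["healthy"]:
            actions.append(("restart", c["name"]))

    # Scaling: move toward the desired replica count.
    diff = desired_replicas - len(running_containers)
    if diff > 0:
        actions.extend(("start", f"replica-{i}") for i in range(diff))
    elif diff < 0:
        doomed = running_containers[diff:]  # the surplus containers
        actions.extend(("stop", c["name"]) for c in doomed)

    return actions

# One pass of the loop: two pods running (one unhealthy), three desired.
state = [{"name": "web-a", "healthy": True},
         {"name": "web-b", "healthy": False}]
print(reconcile(3, state))
# → [('restart', 'web-b'), ('start', 'replica-0')]
```

A real orchestrator runs this loop continuously against the live cluster state, which is why a crashed container simply comes back without any human intervention.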

What is a Worker Node in the Kubernetes Architecture?

Worker nodes listen to the API Server for new work assignments; they execute the work assignments and then report the results back to the Kubernetes master node (the control plane).

In the end, this is only Kubernetes in a nutshell, so I highly recommend checking the Kubernetes documentation and reading the book "Kubernetes Patterns: Reusable Elements for Designing Cloud-Native Applications".

