Kubernetes is an open-source platform built to manage containerized workloads and services through declarative configuration. Google open-sourced the project in 2014, building on over a decade of experience running production workloads at scale with Borg.
What Kubernetes Is
Given the broad feature set Kubernetes exposes, it can be thought of in a number of ways:
- a container platform
- a microservices platform
- a portable cloud platform
Ultimately, Kubernetes was designed to serve as a platform for building an ecosystem of tools that make it easier to deploy, manage, and scale applications. It provides container-centric management through the orchestration of compute, networking, and storage infrastructure. These characteristics give Kubernetes much of the simplicity of a Platform as a Service (PaaS) while maintaining the power and flexibility of Infrastructure as a Service (IaaS). Most importantly, it provides a standardized platform that is portable across infrastructure providers.
What Kubernetes Is Not
Now that we know what Kubernetes is, it is important to understand what it is not. To start, it is not a traditional PaaS. Kubernetes operates at the container level, not the hardware level. While it does provide some standard PaaS features such as deployments, scaling, load balancing, logging, and monitoring, these features are optional and pluggable. In short, Kubernetes provides low-level building blocks for constructing platforms while preserving flexibility where it is important.
Kubernetes does not:
- limit the types of supported applications. It aims to support an extremely diverse variety of workloads, both stateless and stateful. In general, if an application can run in a container, it can run on Kubernetes.
- build your application into a containerized image, nor does it directly deploy your source code. CI / CD workflows are determined by organizational preferences and technical requirements.
- provide application-level services such as middleware, data-processing frameworks, databases, caches, or storage systems as built-in services. These services can run on Kubernetes, as well as be accessed by applications running on Kubernetes.
- provide logging, monitoring or alerting solutions. Instead, it exposes low-level mechanisms to collect and export metrics.
- mandate a configuration language or system. It provides a declarative API that may be targeted by arbitrary forms of declarative specifications.
- adopt any comprehensive machine configuration, maintenance, or management systems.
Finally, Kubernetes should not be thought of as a mere orchestration system. In fact, it eliminates the need for orchestration. The technical definition of orchestration is the execution of a defined workflow: first do A, then B, then C. In contrast, Kubernetes consists of a set of independent, composable control processes that continuously drive the current state of the system towards the desired state. It ultimately does not matter how you get from A to C. This results in a system that is easier to use in addition to being more powerful, robust, resilient, and extensible.
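The control-process model described above can be sketched in a few lines of Python. This is a toy illustration of the reconciliation pattern, not actual Kubernetes code; the `reconcile` and `converge` functions and the replica counts are hypothetical:

```python
def reconcile(current: int, desired: int) -> int:
    """One reconciliation step: nudge the observed replica count
    toward the declared desired count. Controllers observe, diff,
    and act in exactly this pattern, continuously."""
    if current < desired:
        return current + 1   # start a replica
    if current > desired:
        return current - 1   # stop a replica
    return current           # already converged

def converge(current: int, desired: int) -> int:
    """Repeat reconcile until the system reaches the desired state.
    The intermediate steps do not matter, only the declared goal."""
    while current != desired:
        current = reconcile(current, desired)
    return current

print(converge(0, 3))  # a toy cluster scaling from 0 to 3 replicas -> 3
```

Note that the caller never specifies *how* to get from 0 to 3 replicas; it only declares the desired state, and the loop drives the system there.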
Why Containers
Traditionally, applications were deployed to virtual machines using the OS package manager. While straightforward, this approach entangled application executables, configuration, and required libraries with the host operating system. To mitigate some of these shortcomings, some operators built immutable virtual-machine images to achieve higher predictability when releasing software (and rolling it back). While this was a step in the right direction, virtual machines are heavyweight and non-portable.
Today, the best way to deploy applications is by using containers based on OS virtualization, as opposed to hardware virtualization. Containers are isolated from one another, as well as from the underlying host. They have their own filesystems, cannot see each other's processes, and their computational resources can be bounded. Since they are decoupled from the underlying infrastructure and filesystem, containers are portable across clouds and operating systems.
Since containers are lightweight, each container image traditionally contains a single application. Maintaining a one-to-one application-to-image relationship unlocks the full benefits of containerization. Immutable container images can be created at release time rather than deployment time, since each application is isolated from the rest of the application stack and infrastructure. Generating images at release time enables consistency between development and production environments. Another benefit of containers over virtual machines is transparency: containers are easier to monitor and manage, especially when their process lifecycles are managed by the infrastructure rather than hidden by a process supervisor inside the container.
Summarized list of container benefits:
- Agile application creation and deployment - increased ease of artifact creation as opposed to traditional VM image creation.
- CI / CD integration - provides a consistent build and release pipeline for quick and easy rollbacks.
- Separation of concerns - building immutable artifacts at release decouples applications from infrastructure.
- Observability - health metrics are more easily monitored.
- Environment consistency - applications run consistently on development machines and production machines.
- Portability - easily port applications across clouds and on-premises, regardless of underlying OS.
- Application-centric management - the level of abstraction is raised from running an OS on hardware to running an application on an OS.
- Loose coupling - applications are more easily broken into smaller, independent pieces which are deployed and managed separately.
- Resource isolation and utilization - containers are highly efficient, dense, and provide predictable application performance.
The name Kubernetes is of Greek origin, and means helmsman or pilot. It is the root of the words governor and cybernetic.
Kubernetes is often abbreviated as K8s, derived by replacing the eight letters "ubernete" with the digit 8.
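This abbreviation follows the common "numeronym" pattern: keep the first and last letters and replace the interior letters with their count. A quick sketch (the `numeronym` helper is hypothetical, written here just to illustrate the pattern):

```python
def numeronym(word: str) -> str:
    """Abbreviate a word by replacing its interior letters with their
    count, as in Kubernetes -> K8s or internationalization -> i18n."""
    if len(word) <= 3:
        return word  # too short to be worth abbreviating
    return f"{word[0]}{len(word) - 2}{word[-1]}"

print(numeronym("Kubernetes"))  # -> K8s
```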