Kubernetes (also known as k8s or “kube”) is an open source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.

Kubernetes was originally designed and developed by engineers at Google, one of the early contributors to Linux container technology.

Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling, like real-time data streaming through Apache Kafka, because Kubernetes clusters can span hosts across on-premises, public, private, or hybrid clouds.

INTERESTING FACT: The seven spokes in the Kubernetes logo refer to the project’s original name, “Project Seven,” a nod to the Star Trek character Seven of Nine.


With Kubernetes you can:

  • Orchestrate containers across multiple hosts.
  • Make better use of hardware, maximizing the resources available to run your enterprise apps.
  • Control and automate application deployments and updates.
  • Mount and add storage to run stateful apps.
  • Scale containerized applications and their resources on the fly.
  • Declaratively manage services, which guarantees that deployed applications always run the way you intended them to run.
  • Health-check and self-heal your apps with auto-placement, auto-restart, auto-replication, and auto-scaling.
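The declarative model in the list above can be illustrated with a minimal Deployment manifest (a sketch; the app name and image are placeholders): you state the desired number of replicas, and Kubernetes continuously reconciles the cluster toward that state, restarting or rescheduling containers as needed.

```yaml
# Hypothetical Deployment: Kubernetes keeps 3 replicas of this
# container running, replacing any that fail their health check.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # placeholder name
spec:
  replicas: 3               # desired state: scale on the fly by editing this
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25   # any OCI image works here
        ports:
        - containerPort: 80
        livenessProbe:      # self-healing: restart the container if this fails
          httpGet:
            path: /
            port: 80
```

Applying this manifest with kubectl apply and then deleting one of the pods demonstrates the self-healing behavior: the Deployment controller immediately creates a replacement to restore the declared replica count.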

But Kubernetes requires some other open-source projects to provide these orchestrated services, which are as follows:

  1. Registry, through projects like Docker Registry.
  2. Networking, through projects like OpenvSwitch and intelligent edge routing.
  3. Telemetry, through projects such as Kibana, Hawkular, and Elastic.
  4. Security, through projects like LDAP, SELinux, RBAC, and OAUTH with multitenancy layers.
  5. Automation, with the addition of Ansible playbooks for installation and cluster life cycle management.
  6. Services, through a rich catalog of popular app patterns.
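As a concrete illustration of the security point above, RBAC in Kubernetes is itself configured declaratively. The following is a minimal sketch (the role name and user are hypothetical) granting read-only access to pods in a single namespace:

```yaml
# Hypothetical read-only Role and a binding attaching it to one user.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader              # placeholder role name
rules:
- apiGroups: [""]               # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane                    # hypothetical user from your auth layer (e.g. LDAP/OAuth)
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The binding connects an identity supplied by the authentication layer (LDAP, OAuth, etc.) to a narrowly scoped set of permissions, which is how the multitenancy layers mentioned above are typically enforced.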

KUBERNETES & DOCKER: Is K8s Dropping Docker?


Docker is used to isolate your application into containers. It is used to package and ship your application.


Kubernetes, on the other hand, is a container scheduler. It is used to deploy and scale your application.

Docker as an underlying runtime is being deprecated in favor of runtimes that use the Container Runtime Interface (CRI) created for Kubernetes. Docker-produced images will continue to work in your cluster with all runtimes, as they always have.

But this doesn’t mean the death of Docker, and it doesn’t mean we can’t, or shouldn’t, use Docker as a development tool anymore. Docker is still a useful tool for building containers, and the images that result from running docker build can still run in your Kubernetes cluster.
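To make that division of labor concrete, a rough sketch of the workflow (the image name and registry are placeholders): build and ship the image with Docker, then let Kubernetes pull and run it under whatever CRI runtime the cluster uses.

```shell
# Hypothetical workflow: Docker for build/ship, Kubernetes for run.
docker build -t registry.example.com/myapp:1.0 .   # build with Docker
docker push registry.example.com/myapp:1.0         # ship to a registry

# The cluster's CRI runtime (e.g. containerd) pulls and runs the image;
# Docker itself does not need to be installed on the nodes.
kubectl create deployment myapp --image=registry.example.com/myapp:1.0
kubectl scale deployment myapp --replicas=3
```

Nothing about the image itself is Docker-specific once it is pushed: it is a standard OCI image, which is why the runtime deprecation does not affect existing images.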


Approximately 2,253 companies currently use Kubernetes; a few of them are discussed below:



Created by the same developers who built Kubernetes, Google Kubernetes Engine (GKE) is an easy-to-use, cloud-based Kubernetes service for running containerized applications. GKE can help us implement a successful Kubernetes strategy for our applications in the cloud. With Anthos, Google offers a consistent Kubernetes experience for our applications across on-premises environments and multiple clouds. Using Anthos, we get a reliable, efficient, and trusted way to run Kubernetes clusters anywhere.


At the start of 2016, the Shopify engineering team was running services everywhere: within their own data centers (using Chef and Docker), on AWS (using Chef), and on Heroku. Developers liked Heroku’s developer experience, and that platform actually scales quite well, with simple UI sliders to increase the number of instances and the associated CPU and RAM. Although the platform team had defined service tiers and appropriate Service Level Objectives (SLOs) based on criticality to the business, many processes were not scalable, and these presented challenges as the company grew.

The manual or artisanal processes clearly did not scale well, and neither did slow processes that made people wait. Challenges were encountered within the platform and deployment operations, and with processes that did not work reliably the first time. Accordingly, the Shopify team recognised that they needed to increase their focus on tested infrastructure and on automation that works as expected, every time. Also critical to the ability to scale was giving developers the ability to safely self-serve in a consistent manner across the infrastructure/platform, and providing comprehensive training to enable them to become experts in the systems they operate. Alongside these new initiatives, the organisation also decided to embrace cloud computing, and was keen to promote migration to its chosen cloud vendor, Google Cloud Platform (GCP).

The Shopify engineering team recognised that they were effectively building an internal Platform-as-a-Service (PaaS), and so decided that three principles were key to its success:

  1. Operate a platform that by default meets a high percentage of use cases within Shopify, but also allows customisation if required.
  2. There are advantages to knowing about the underlying platform, but many developers do not want to be exposed to all of the details of its internals.
  3. Developers should not be bottlenecked by waiting for centralised operations or platform teams.

After analysis and experimentation, the Shopify team chose to build their PaaS on top of the Kubernetes container scheduler and orchestrator. Kubernetes had the best traction of the open source projects in this space, it was platform agnostic, it could be extended via its exposed APIs, and it was also offered as a service in GCP — Google Container Engine (GKE) — which allowed the team to focus on the value-adding components they could provide on top of this “strong foundation”.


Netflix chronicled their container journey in a white paper. Running containers at scale requires orchestration, and Netflix started their journey near the beginning of the Kubernetes open source project. Netflix had to decide if it would build its own orchestration platform or adopt an existing platform.

Netflix chose to build a dedicated container orchestration platform called Titus. While Netflix notes that most organizations look to write greenfield applications on new container platforms such as Kubernetes, its team wanted to support existing applications as well. Therefore, Netflix chose to build its Titus container management system on top of Apache Mesos.

Today, Kubernetes has broad support for brownfield applications. For example, Docker now integrates Kubernetes support alongside Swarm. Also, operations teams can package legacy apps into Docker containers and deploy those containers to Kubernetes clusters.





Aniruddh Sharma
