What is Kubernetes? Your next application platform

Kubernetes automates container-based application deployment, management, scaling, and more. Here's everything you need to know about Kubernetes.

Benefits of using Kubernetes

Because Kubernetes introduces new abstractions and concepts, and because its learning curve is steep, it’s natural to ask what the long-term payoff of using it is. Here are some of the benefits of running applications inside Kubernetes.

Kubernetes automates application management

One of the most basic duties Kubernetes takes off your hands is the busy work of keeping an application up, running, and responsive to user demand. It automates health checks, replication, load balancing, and hardware resource allocation.

Kubernetes applications that become “unhealthy,” or that don’t conform to the definition of health you’ve specified for them, can be automatically repaired. Kubernetes also lets you set soft and hard limits on application resource usage, including memory, storage I/O, and network bandwidth. Applications that use minimal resources can often be packed together on the same hardware, while applications that need to stretch out can be placed on systems where they have room to grow. Rolling out updates across a cluster, or rolling back if an update breaks something, can likewise be automated.
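
To make the health checks and resource limits above concrete, here is a minimal sketch of a Deployment manifest. The names, image, probe path, and thresholds are illustrative placeholders, not taken from any particular application:

    # Hypothetical Deployment; names, image, and thresholds are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-frontend
    spec:
      replicas: 3                     # Kubernetes keeps three copies running
      selector:
        matchLabels:
          app: web-frontend
      template:
        metadata:
          labels:
            app: web-frontend
        spec:
          containers:
          - name: web
            image: example.com/web-frontend:1.4.2   # hypothetical image
            ports:
            - containerPort: 8080
            livenessProbe:            # defines "healthy"; failing containers are restarted
              httpGet:
                path: /healthz
                port: 8080
              initialDelaySeconds: 10
              periodSeconds: 15
            resources:
              requests:               # soft floor used for scheduling and bin-packing
                cpu: 250m
                memory: 256Mi
              limits:                 # hard ceiling the container cannot exceed
                cpu: "1"
                memory: 512Mi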

Kubernetes eases deployment

Package managers such as Debian Linux’s APT and Python’s Pip save users the trouble of manually installing and configuring an application. This is especially handy when an application has multiple external dependencies.

Helm is essentially a package manager for Kubernetes. Many popular software applications must run in Kubernetes as a group of interdependent containers. Helm provides a definition mechanism, a “chart,” that describes how an application or service can be run as a group of containers inside Kubernetes.

You can create your own Helm charts from scratch, and you might have to if you’re building a custom application to be deployed internally. But if you’re using a popular application that has a common deployment pattern, there is a good chance someone has already composed a Helm chart for it and published it in the Artifact Hub. Another place to look for official Helm charts is the Kubeapps directory, which allows Kubernetes applications to be deployed and installed from within a Kubernetes cluster itself, using a handy web-based interface.
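
For a sense of what a chart looks like, here is a sketch of the Chart.yaml metadata file found at the root of every Helm 3 chart; the chart name, versions, and Redis dependency shown are hypothetical examples:

    # Hypothetical Chart.yaml; names and versions are placeholders.
    apiVersion: v2            # Helm 3 chart format
    name: my-web-app
    description: Deploys a web front end together with a Redis cache
    type: application
    version: 0.1.0            # version of the chart itself
    appVersion: "1.4.2"       # version of the application being packaged
    dependencies:             # subcharts pulled in from a chart repository
      - name: redis
        version: "~18.0.0"
        repository: https://charts.bitnami.com/bitnami

The chart’s templates directory then holds the parameterized manifests (Deployments, Services, and so on) that Helm renders and applies as a single release.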

Kubernetes simplifies application resource management

Containers are meant to be immutable; the code and data you put into them isn’t supposed to change. But applications need state, meaning they need a reliable way to deal with external storage volumes. That’s made all the more complicated by the way containers live, die, and are reborn across the lifetime of an application.

Kubernetes provides abstractions to allow containers and applications to deal with storage in the same decoupled way as other resources. Many common kinds of storage, from Amazon EBS volumes to plain old NFS shares, can be accessed through Kubernetes volumes, which are backed by storage drivers. Normally, a volume is bound to a specific pod, but a volume subtype called a persistent volume (PV) can be used for data that needs to live on independently of any pod.
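
A typical pattern, sketched below with hypothetical names and sizes, is to request storage through a PersistentVolumeClaim and then mount that claim into a pod; the data behind the claim survives even if the pod is deleted and rescheduled:

    # Hypothetical PersistentVolumeClaim plus a pod that mounts it.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi           # size is illustrative
      storageClassName: standard  # assumes a StorageClass named "standard" exists
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: app
    spec:
      containers:
      - name: app
        image: example.com/app:2.0          # hypothetical image
        volumeMounts:
        - name: data
          mountPath: /var/lib/app           # data written here outlives this pod
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: app-data               # binds the pod to the claim above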

Containers often need to work with secrets—credentials like API keys or service passwords that you don’t want hard-coded into a container or stashed openly on a disk volume. While third-party solutions are available for this, like Docker secrets and HashiCorp Vault, Kubernetes has its own mechanism for natively handling secrets, although it does need to be configured with care.
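
As a sketch of that native mechanism, the manifest below defines a Secret and exposes it to a container as an environment variable; the Secret name, key, and value are placeholders. Note that Secrets are only base64-encoded by default, so enabling encryption at rest and restricting access with RBAC are part of configuring them with care:

    # Hypothetical Secret and a container that reads it as an environment variable.
    apiVersion: v1
    kind: Secret
    metadata:
      name: api-credentials
    type: Opaque
    stringData:
      api-key: "not-a-real-key"     # placeholder value; stored base64-encoded
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: worker
    spec:
      containers:
      - name: worker
        image: example.com/worker:1.0       # hypothetical image
        env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:
              name: api-credentials
              key: api-key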

Hybrid cloud and multi-cloud deployments

One of the long-standing dreams of cloud computing is to be able to run any application in any cloud, or in any mix of public and private clouds. This isn’t just to avoid vendor lock-in, but also to take advantage of features specific to individual clouds.

For some time, the most common mechanism for keeping multiple clusters in sync with one another across multiple regions and clouds was a Kubernetes SIG project called KubeFed, for Kubernetes Cluster Federation. In a federation, a given application deployment can be kept consistent between multiple clusters, and different clusters can share service discovery so that a back-end resource can be accessed from any cluster. Federations can also be used to create highly available or fault-tolerant Kubernetes deployments, whether or not you’re spanning multiple cloud environments.

However, as of September 2023, the KubeFed project has been archived. The widely expressed feeling was that the project was overly broad and stagnant. Other projects have emerged as possible successors. One is Karmada, which uses Kubernetes-native APIs to synchronize applications across clusters, and requires no changes to applications themselves.
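
As an illustration of that Kubernetes-native approach, Karmada distributes ordinary Kubernetes resources across member clusters using its own policy objects. The sketch below, with hypothetical cluster and workload names, assumes Karmada’s PropagationPolicy API and should be checked against the current Karmada documentation:

    # Sketch of a Karmada PropagationPolicy; cluster and workload names are hypothetical.
    apiVersion: policy.karmada.io/v1alpha1
    kind: PropagationPolicy
    metadata:
      name: web-frontend-propagation
    spec:
      resourceSelectors:            # which existing Kubernetes objects to distribute
        - apiVersion: apps/v1
          kind: Deployment
          name: web-frontend
      placement:
        clusterAffinity:
          clusterNames:             # member clusters registered with Karmada
            - cluster-aws-east
            - cluster-gcp-europe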

Where to get Kubernetes

Kubernetes is available in many forms—from open source code to commercially backed distributions to public cloud services. The best way to figure out where to get Kubernetes is by use case.

  • If you want to do it all yourself: The source code, and pre-built binaries for most common platforms, can be downloaded from the GitHub repository for Kubernetes. If you want to try out a tiny instance of Kubernetes on your own system, you can use Minikube to set up a local cluster on a single machine.
  • If you’re using Docker: Recent editions of Docker Desktop come with Kubernetes bundled in. This is ostensibly the easiest way for container mavens to get a leg up with Kubernetes, since it comes by way of a product you’re almost certainly already familiar with. (Docker can also use Minikube for deployments.)
  • If you’re deploying on-prem or in a private cloud: Chances are good that whatever infrastructure you choose for your private cloud has Kubernetes built in. Standard-issue, certified, supported Kubernetes distributions are available from dozens of vendors.
  • If you’re deploying in a public cloud: The three major public cloud vendors all offer Kubernetes as a service. Google Cloud Platform offers Google Kubernetes Engine. Microsoft Azure offers the Azure Kubernetes Service. And Amazon offers the Amazon Elastic Kubernetes Service alongside its existing Elastic Container Service. Managed Kubernetes services are also available from many other vendors.

Kubernetes tutorial

Now that you’ve got the basics under your belt, are you ready to get started with Kubernetes? A variety of tutorials can help you play around with Kubernetes and learn how to use it in your own work. You might want to start off with the simple tutorials on the Kubernetes project site itself; when you’re ready for something more advanced, check out the list of guides in the awesome-kubernetes repo, which has something for everyone.

Kubernetes certification

If you feel like you have a good handle on how Kubernetes works and you want to be able to demonstrate your expertise to employers, you might want to check out the pair of Kubernetes-related certifications offered jointly by the Linux Foundation and the Cloud Native Computing Foundation:

  • Certified Kubernetes Administrator: Seeks to “provide assurance that CKAs have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators,” including application lifecycle management, installation, configuration, validation, cluster maintenance, and troubleshooting.
  • Certified Kubernetes Application Developer: Certifies that users “can design, build, configure, and expose cloud native applications for Kubernetes.”

The certification exams are $395 each. There are also accompanying training courses, which can serve as a good, structured way to learn more about Kubernetes.

Copyright © 2023 IDG Communications, Inc.
