September 22, 2020
Kubernetes is a broad platform that consists of more than a dozen different tools and components. Among the most important is the container runtime: the software that actually runs individual containers. Kubernetes supports a number of container runtimes; the most popular are Docker, containerd, and CRI-O.
[Free Case Study: Informatica Migrates to Kubernetes]
There are several other Kubernetes components (such as a Web interface and a monitoring service) that you might choose to deploy, depending on your needs and configuration. The official Kubernetes documentation describes these components in more detail.
In order to get started with Kubernetes, you should familiarize yourself with the essential concepts that Kubernetes uses to manage the different components of a deployment: nodes, pods, services, clusters, namespaces, replicas, and autoscaling.
Nodes are servers that host Kubernetes environments. They can be physical or virtual machines. It’s possible to run Kubernetes with just a single node (which you might do if you are testing Kubernetes locally), but production-level deployments almost always consist of multiple nodes.
Nodes can be either “masters” or “workers.” Master nodes host the processes (like kube-scheduler) that manage the rest of the Kubernetes environment. Worker nodes host the containers that power your actual application. Worker nodes were known as "minions" in early versions of Kubernetes, and sometimes you may still hear them referred to as such.
Pods are groups of containers that are deployed together. Typically, the containers in a Kubernetes pod provide functions that are complementary to each other; for instance, one container might host an application frontend while another provides a logging service. It’s possible to have a pod that consists of just one container, too.
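As a sketch, a two-container pod following this frontend-plus-logging pattern might be declared like the manifest below (all names and images here are illustrative, not from any particular deployment):

```yaml
# pod.yaml -- a hypothetical pod with an app frontend
# plus a complementary logging sidecar in the same pod
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod
spec:
  containers:
  - name: frontend          # serves the application
    image: nginx:1.21
    ports:
    - containerPort: 80
  - name: log-agent         # complementary logging sidecar
    image: fluent/fluent-bit:1.8
```

You would deploy this with `kubectl apply -f pod.yaml`; both containers share the pod's network namespace and can reach each other on localhost.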
Services are groups of pods. Each Service can be assigned an IP address and a resolvable domain name in order to make its resources accessible via the network.
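For illustration, a Service that groups pods by label and gives them one stable, resolvable address could look like this (the names are hypothetical):

```yaml
# service.yaml -- expose all pods labeled app: frontend
# under a single stable address
apiVersion: v1
kind: Service
metadata:
  name: frontend-svc        # resolvable by this name inside the cluster
spec:
  selector:
    app: frontend           # groups every pod carrying this label
  ports:
  - port: 80                # the port the Service listens on
    targetPort: 80          # the port the pods listen on
```

Inside the cluster, other pods can then reach the group at the DNS name `frontend-svc` regardless of which individual pods are currently running.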
A cluster is what you get when you combine nodes together (technically, a single node could also constitute a cluster). It’s most common to have one cluster per deployment and, if desired, workloads divided within the cluster using namespaces. However, in certain cases you might choose to have multiple clusters; for instance, you might use different clusters for hosting a test and a production version of the same application. That way, if something goes catastrophically wrong with your test cluster, your production cluster will remain unaffected.
When you create a Kubernetes pod, you can then define environment variables for the various containers that run within it. Setting up deployment environment variables is simple: you include the env or envFrom fields in the pod's configuration file.
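For example, a container spec can set a variable directly with env, or import a whole set of variables from a ConfigMap with envFrom (the names below are illustrative):

```yaml
# env-demo.yaml -- two ways to set container environment variables
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: app
    image: busybox:1.35
    command: ["sh", "-c", "echo $APP_MODE && sleep 3600"]
    env:
    - name: APP_MODE            # set a single variable directly
      value: "production"
    envFrom:
    - configMapRef:
        name: app-config        # import every key from this ConfigMap
```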
You can define namespaces in Kubernetes to separate a Kubernetes cluster into different parts and allow only certain resources to be accessible from certain namespaces. For example, you might create a single Kubernetes cluster for your entire company, but configure a different namespace for each department in the company to use to deploy its workloads. Generally speaking, using namespaces to divide clusters into virtually segmented parts is better than creating a separate cluster for each unit.
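Creating and using a namespace takes only a couple of commands; for instance, to give a hypothetical "finance" department its own segment of the cluster:

```shell
# Create a namespace for one department (the name is illustrative)
kubectl create namespace finance

# Deploy workloads into that namespace rather than the default one
kubectl -n finance apply -f app.yaml

# List only the pods that belong to the finance namespace
kubectl -n finance get pods
```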
When you refer to Kubernetes replicas, you're referring to how many identical pods of the same application should run across a Kubernetes cluster.
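Replica counts are usually declared in a Deployment; the sketch below asks Kubernetes to keep three identical pods of a hypothetical frontend running at all times:

```yaml
# deployment.yaml -- run three identical replicas of one app
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3               # keep three identical pods running
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:1.21
```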
Kubernetes autoscaling automatically scales the number of pods (and, with a cluster autoscaler, the number of nodes) up or down to optimize resource usage and costs.
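At the pod level, the simplest way to try this is the kubectl autoscale command, which creates a HorizontalPodAutoscaler for an existing workload (the deployment name and thresholds here are illustrative):

```shell
# Scale a hypothetical Deployment between 2 and 10 pods,
# targeting roughly 70% average CPU utilization
kubectl autoscale deployment frontend --min=2 --max=10 --cpu-percent=70
```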
[Infographic: Kubernetes, Lambda, Docker Adoption]
Kubernetes is primarily a Linux-based technology. The core infrastructure on which Kubernetes runs must be configured using some kind of Linux distribution. However, starting with Kubernetes version 1.14, it is possible to include Windows machines within Kubernetes clusters, although those servers are limited to operating as worker nodes. In this way, Kubernetes can be used to orchestrate containerized applications that are hosted using Windows containers as well as Linux ones.
The approach you take to getting started with Kubernetes will depend on which type of deployment you are setting up: a local Kubernetes environment for learning purposes or a large-scale, distributed Kubernetes cluster for production deployment.
If your goal is to run Kubernetes locally for learning purposes, the most seamless approach is to use a Kubernetes distribution designed specifically for this purpose. MicroK8s, Minikube, and K3s are popular options. Installation methods vary depending on which distribution you choose and which Linux-based operating system is hosting your installation, but the process is typically quite simple.
[Learn More: Kubernetes and Postgres]
For instance, you can install MicroK8s on Ubuntu with a single short command:
sudo snap install microk8s --classic
After that, you can start interacting with your Kubernetes environment using the microk8s.kubectl CLI tool. There are additional packages you may wish to install to add more functionality to your local Kubernetes environment, but if you're just getting started with Kubernetes, this is all you have to do to get the bare essentials up and running on a local, single-host Kubernetes machine.
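A first session with a fresh MicroK8s install might look something like this (the add-ons you enable will depend on your needs):

```shell
# Confirm the single node is up and the core services are running
microk8s.kubectl get nodes
microk8s.status

# Optionally enable common add-ons such as DNS and the dashboard
microk8s.enable dns dashboard
```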
Not surprisingly, things are a bit more complicated if you are setting up a Kubernetes cluster that runs on multiple servers and needs production-grade reliability and functionality.
To set up a production Kubernetes cluster manually, you generally follow these steps: install a container runtime on every server; install the Kubernetes packages (kubelet, kubeadm, and kubectl) on every server; initialize the control plane on the master node; install a pod network add-on so that pods can communicate across nodes; and finally join each worker node to the cluster.
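With the kubeadm tool, a typical manual sequence looks roughly like the following sketch (the network CIDR, addresses, token, and choice of Flannel as the network add-on are placeholders and examples, not requirements):

```shell
# On the master node: initialize the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Configure kubectl for the current user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a pod network add-on (Flannel shown as one example)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# On each worker node: join the cluster using the token
# printed by kubeadm init
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```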
The Kubernetes setup process described above represents everything you have to do if you set up Kubernetes manually. Fortunately, Kubernetes distributions provide interactive installation tools (such as Canonical's Charmed Kubernetes tool for Ubuntu and the atomic-openshift-installer tool for OpenShift) that will walk you through the process of installing and configuring the various Kubernetes components. Or, if you use a fully managed Kubernetes service in the cloud, you don’t need to set anything up at all, as it’s already done for you.
With your Kubernetes cluster up and running, you are ready to start deploying applications. You can deploy as many apps as you want using a single cluster (up to the limits of what your hardware resources can reasonably support). As noted above, apps can be isolated from one another using namespaces, which makes it easy to deploy many applications on the same infrastructure without worrying that one app can intrude on another.
As with most things related to Kubernetes, the exact approach you take for deploying an app depends on which app you are deploying and how your Kubernetes cluster is set up. But in most cases, the process looks like the following:
First, you need a containerized image of the app that you want to run. Prebuilt container images for popular apps (such as WordPress, Node.js, or MySQL, to name just a few examples) are available from Docker Hub or other public container registries. If you are deploying a custom app, you will need to package it as a container and upload it to a registry yourself.
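Packaging and publishing a custom app is typically done with the docker CLI; a minimal sketch, where the image name and registry account are placeholders:

```shell
# Build an image from the Dockerfile in the current directory
docker build -t my-app:1.0 .

# Tag it for your registry account and push it to Docker Hub
docker tag my-app:1.0 <your-account>/my-app:1.0
docker push <your-account>/my-app:1.0
```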
Use kubectl to deploy the app to your cluster. There are two main ways to do this. One is to use the kubectl create command, which tells kubectl to deploy the app based on configurations you specify on the command line. The other is to use kubectl apply; with this approach, you first create a configuration file telling Kubernetes how to deploy the app, and then you use the kubectl apply command to tell Kubernetes to put that configuration into force.
The former strategy is an example of imperative management, while the latter is declarative management. Both approaches have their benefits and drawbacks, but if you’re just getting started, the imperative approach (with kubectl create) is simpler.
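The two approaches, side by side, for a hypothetical nginx deployment:

```shell
# Imperative: describe the desired action on the command line
kubectl create deployment my-nginx --image=nginx:1.21

# Declarative: record the desired state in a file, then apply it;
# re-running apply reconciles the cluster with the file
kubectl apply -f my-nginx-deployment.yaml
```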
Once your app is deployed, Kubernetes does all of the dirty work required to keep it running healthily. If one of the nodes hosting the app fails, Kubernetes will automatically reschedule its pods onto another node. If network or compute resources in one part of the cluster become constrained, Kubernetes will make others available to the app to ensure that things keep running smoothly.
In some Kubernetes distributions, apps are not exposed to the Internet by default. If that is the case and you want your app to be publicly reachable, you will need to use the kubectl expose command to make it available over the public Internet.
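For example, to put a public load balancer in front of a hypothetical deployment (on cloud providers, the LoadBalancer type provisions an external load balancer automatically):

```shell
# Create a Service of type LoadBalancer in front of the deployment
kubectl expose deployment my-nginx --port=80 --type=LoadBalancer

# Check the external IP address assigned to the Service
kubectl get service my-nginx
```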
[Learn More: Kubernetes Logs]
As an alternative to working from the command line with kubectl, many of the operations described above can also be performed using graphical user interfaces (GUIs). The most commonly used tool for this purpose is the Kubernetes Dashboard, an official Web UI developed as part of the Kubernetes project. In addition, many third-party Kubernetes GUIs are available as part of Kubernetes distributions.
Kubernetes GUIs do not provide all of the functionality of kubectl, so it’s wise to teach yourself how to use kubectl, too. But for common tasks like deploying an application or seeing which applications are running on a cluster, the GUI solutions come in handy.
Kubernetes is a powerful tool -- or, more accurately, a powerful set of tools. Given the complexity of the platform, taking your first steps toward using Kubernetes may seem daunting. But Kubernetes becomes simple to use once you understand the core concepts behind its architecture and familiarize yourself with key tools like kubectl. Installation and configuration tools provided by certain Kubernetes distributions, as well as GUI management tools, make getting started with Kubernetes even easier.
As cloud infrastructure grows and develops, reliable and safe management of containers across multiple cloud providers grows increasingly important - accelerating the adoption of Kubernetes (K8s). Orchestration technologies like Kubernetes (K8s) automate the deployment and scaling of containers, and they also ensure the reliability of applications and workloads running on containers.
The open source nature of Kubernetes allows it to be implemented across multiple cloud providers, avoiding the vendor lock-in that comes with selecting a single cloud-hosted service. Though originally developed by Google, Kubernetes is managed by the Cloud Native Computing Foundation (CNCF) and is not designed for any specific cloud provider.
[Learn More: Kubernetes and Google]
Organizations can run applications in pods managed by Kubernetes on-premises as well as in any cloud environment.
Using the data in Sumo Logic’s 2020 Continuous Intelligence Report, we can gain insight into the adoption of Kubernetes (K8s) as well as competing technologies such as Amazon’s Elastic Container Service (ECS). This data is sourced anonymously from over 2,100 Sumo Logic customers and gives us a view into the adoption of the technologies being monitored by enterprise customers.
Multi-cloud use is highly correlated with higher Kubernetes adoption (Figure A). The more cloud providers enterprises use, the more likely they are to use Kubernetes to manage containers. Kubernetes adoption among Sumo Logic’s customers using only AWS as a host is 25%, and it jumps to 88% among customers using a mix of the top three cloud hosting services: AWS, GCP, and Azure.
Customers like LendingTree tell us the benefit of choosing Kubernetes and Sumo Logic: “We’re deploying Kubernetes to give us the option of selecting the optimal blend of cloud vendors to precisely meet our needs. This would be impossible without Sumo Logic’s cloud-neutrality,” says a Staff Site Reliability Engineer at LendingTree. Learn more about how LendingTree uses Sumo Logic for Kubernetes.
As of the 2020 report, 43% of Sumo Logic’s AWS customers are now using either Amazon ECS or Kubernetes (K8s) for container orchestration (Figure B). Sumo Logic provides native integrations with best practice data sources for Kubernetes—Prometheus, OpenTelemetry, FluentD, Fluentbit, and Falco. Sumo Logic works for any Kubernetes setup, anywhere—on-premises, AWS, Azure, and GCP.
Among Sumo Logic’s customers on AWS, Kubernetes accelerated in popularity over the last three years (Figure B). This is likely due to Amazon offering its own Elastic Kubernetes Service (EKS) to make Kubernetes deployments simpler to manage. Note that Amazon EKS customers are included in the count of Kubernetes customers.
Today, Amazon, Google, and Microsoft all offer their own version of a “managed” Kubernetes service. Amazon’s Elastic Kubernetes Service (EKS), Google Kubernetes Engine, and Azure Kubernetes Service are the most common cloud infrastructure services offering their own version of a secured and managed Kubernetes service with automatic scaling and multi-cluster support. All of these Cloud services can be monitored by Sumo Logic’s Kubernetes App. Learn more about the Sumo Logic Kubernetes App.
We can see from this data that enterprises are rapidly adopting and utilizing Kubernetes to support their multi-cloud applications. Enterprises are betting on Kubernetes to drive their multi-cloud strategies. We can expect to see continued adoption of Kubernetes in the next few years as well as the adoption of the managed Kubernetes services.
To learn more about the cloud and the state of modern applications, download Sumo Logic’s 2020 Continuous Intelligence Report.