December 12, 2023 By Melissa Sussmann

5 business reasons why every CIO should consider Kubernetes

Who should read this?

You should read this if you are an executive (CIO/CISO/CxO) or an IT professional seeking to understand various Kubernetes business use cases. It addresses questions like:

  • Why should I consider Kubernetes?
  • Where can I see the value of this technology?
  • Who else has seen similar value from their Kubernetes deployment?

Many enterprises adopting a multi-cloud strategy and breaking up their monolithic code realize that container management platforms like Kubernetes are the first step to building scalable modern applications. Additionally, to get all of the benefits of Kubernetes, enterprises need security and monitoring platforms for Kubernetes applications. 

The road to digital transformation and ITOps runs through containers and orchestration

The key to modern applications is utilizing microservices to break up your monolithic code. Your digital transformation journey will depend on containerized applications and orchestration automation to speed up app deployment and maintain highly available, secure customer experiences. While the business benefits of digital transformation and software innovation are clearly understood, the IT capabilities needed to deliver these benefits are still evolving.

Containers are becoming a must-have platform for IT architecture. Containers offer the benefits of immutable infrastructure with predictable, repeatable and faster development and deployments. With these capabilities, containers change how applications are architected, designed, developed, packaged, delivered and managed, paving the way to better application delivery and experience.

But the very strength of containers can become their Achilles' heel: creating many containers across your apps is very easy. And now we have a new problem: managing thousands or even tens of thousands of these containers. How do you control ephemeral containers with lifetimes of a few seconds to minutes? How do you optimize resource utilization in large-scale containerized environments? The answer is container orchestration tools like Kubernetes.

What is Kubernetes?

Kubernetes is a system for application deployment that enables efficient use of the containerized infrastructure that powers modern applications. Kubernetes can save organizations money because it reduces the headcount needed to manage IT operations while making apps more resilient and performant. So, it's no surprise that Kubernetes adoption has been growing.

You can also run Kubernetes on-premises or within the public cloud. AWS, Azure and Google Cloud Platform (GCP) offer managed Kubernetes solutions to help you quickly get started and efficiently operate K8s apps. Kubernetes also makes apps much more portable, so IT can move them more easily between different clouds and internal environments.

Kubernetes is the most popular open-source project from the Cloud Native Computing Foundation (CNCF), with active engagement and contribution from many enterprises, large and small.

OK, so what specifically can Kubernetes do for me?

Here are five fundamental business capabilities that Kubernetes can drive in the enterprise – large or small. And to add teeth to these use cases, we have identified some real-world examples to validate the value that enterprises are getting from their Kubernetes deployments:

  1. Faster time to market
  2. IT cost optimization
  3. Improved scalability and availability
  4. Multi-cloud flexibility
  5. Seamless migration to the cloud

1. Faster time to market

Kubernetes enables a microservices approach to building apps. Now you can break up your development team into smaller teams focusing on a single, smaller microservice. These teams are smaller and more agile because each has a focused function. APIs between these microservices minimize the cross-team communication required to build and deploy. So, you can scale multiple small teams of specialized experts who each help support a fleet of thousands of machines.

Kubernetes also allows your IT teams to manage container orchestration more efficiently by handling many of the nitty-gritty details of maintaining container-based apps. For example, Kubernetes handles service discovery, helps containers talk to each other and arranges access to storage from various providers such as AWS and Microsoft Azure.
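To make the service-discovery point concrete, here is a minimal sketch of a Kubernetes Service manifest; the name, labels and ports are illustrative, not from any real deployment:

```yaml
# Illustrative Service: gives all pods labeled app=checkout a single
# stable virtual IP and DNS name (checkout.default.svc.cluster.local),
# so other services can reach them without tracking individual pods.
apiVersion: v1
kind: Service
metadata:
  name: checkout
spec:
  selector:
    app: checkout       # any pod carrying this label is a backend
  ports:
    - port: 80          # port clients connect to
      targetPort: 8080  # port the container actually listens on
```

Any workload in the cluster can then call the service by name, and Kubernetes routes the traffic to a healthy pod, even as individual pods come and go.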

Airbnb’s transition from a monolithic to a microservices architecture is pretty amazing. They needed to make continuous delivery available to the company’s 1,000 or so engineers so they could add new services, which meant scaling continuous delivery horizontally. Airbnb adopted Kubernetes to support over 1,000 engineers concurrently configuring and deploying over 250 critical services to Kubernetes. The net result is that Airbnb can do over 500 deployments per day on average.

One of the best examples of accelerating time to market comes from Tinder. This blog post describes Tinder’s K8s journey well. Here’s the summary version of the story: Due to high traffic volume, Tinder’s engineering team faced challenges of scale and stability. And they realized that the answer to their struggle was Kubernetes. Tinder’s engineering team migrated 200 services and ran a Kubernetes cluster of 1,000 nodes, 15,000 pods and 48,000 running containers. While the migration process wasn't easy, the Kubernetes solution was critical to ensure smooth business operations going forward.

2. IT cost optimization

Kubernetes can help your business cut infrastructure costs quite drastically if you’re operating on a massive scale. Kubernetes makes a container-based architecture feasible by packing together apps optimally using your cloud and hardware investments.

Before Kubernetes, administrators often over-provisioned their infrastructure to conservatively handle unexpected spikes or simply because manually scaling containerized applications was difficult and time-consuming. Kubernetes intelligently schedules and tightly packs containers, considering the available resources. It also automatically scales your application to meet business needs, thus freeing up human resources to focus on other productive tasks.
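As a sketch of how this tight packing works in practice, each container declares resource requests and limits, and the scheduler uses the requests to bin-pack containers onto nodes. The names and numbers below are illustrative:

```yaml
# Illustrative pod spec: the scheduler reserves the requested CPU/memory
# when placing this pod on a node, allowing tight, predictable packing;
# the limits cap what the container may actually consume.
apiVersion: v1
kind: Pod
metadata:
  name: api-server
spec:
  containers:
    - name: api
      image: example/api:1.0   # hypothetical image
      resources:
        requests:
          cpu: "250m"          # 0.25 CPU cores reserved for scheduling
          memory: "256Mi"
        limits:
          cpu: "500m"          # hard ceiling at 0.5 cores
          memory: "512Mi"
```

Without accurate requests, the scheduler has to guess, which is exactly the over-provisioning problem described above.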

Spotify is an early K8s adopter and has seen significant cost savings from it. Leveraging the orchestration capabilities of K8s, Spotify has seen 2-3x better CPU utilization, resulting in better optimization of IT spend.

Pinterest is another early K8s adopter. Leveraging K8s, the Pinterest IT team reclaimed over 80 percent of capacity during non-peak hours. They now use 30 percent fewer instance hours per day than they did with their static cluster.

3. Improved scalability and availability

The success of today’s applications depends not only on features but also on the application's scalability. After all, if an application cannot scale well, it will be highly non-performant at best and totally unavailable at worst.

As an orchestration system, Kubernetes can automatically scale apps and improve their performance. Suppose we have a CPU-intensive service with a dynamic user load that changes based on business conditions. For example, an event ticketing app will see a dramatic spike in users and load before an event and low usage at other times.

We need a solution that scales up the app and its infrastructure so that new machines are automatically spun up as the load increases (e.g. more users are buying tickets) and scaled down when the load subsides. Kubernetes offers just that capability, scaling up the application when CPU usage exceeds a defined threshold.

When the load reduces, Kubernetes can scale back the application, thus optimizing infrastructure utilization. Kubernetes auto-scaling is not limited to infrastructure metrics: any type of resource utilization metric, even a custom metric, can be used to trigger the scaling process.
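The behavior described above is what the standard HorizontalPodAutoscaler resource provides. A minimal sketch, assuming a hypothetical Deployment named ticketing-app (the name and thresholds are illustrative):

```yaml
# Illustrative HorizontalPodAutoscaler: adds replicas when average CPU
# utilization across the pods exceeds 70%, and scales back down
# (to no fewer than 2 replicas) when the load subsides.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ticketing-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ticketing-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The `metrics` list is where custom or external metrics (queue depth, requests per second) could be plugged in instead of CPU.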

LendingTree has many microservices that make up its business apps. LendingTree uses Kubernetes and its horizontal scaling capability to deploy and run these services and ensure that their customers can access service even during peak load. To get visibility into these containerized and virtual services and monitor its Kubernetes deployment, LendingTree uses Sumo Logic.

4. Multi-cloud flexibility

One of the biggest benefits of Kubernetes and containers is that they help you realize the promise of hybrid and multi-cloud. Enterprises today often run multi-cloud environments and will continue to do so in the future. Kubernetes makes running any app on any public cloud service, or any combination of public and private clouds, much more straightforward.

This allows you to put the right workloads on the right cloud and to help you avoid vendor lock-in. Getting the best fit, using the right features and having the leverage to migrate when it makes sense all help you realize more ROI from your IT investments.

Alaska Airlines is a great example of a customer who is using Kubernetes to operate multi-cloud environments. They need to be prepared in case of natural disasters and emergencies, so having multiple clouds allows for better reliability and security. Alaska Airlines used Kubernetes to optimize the performance and cost of their cloud-hosted systems with the help of Sumo Logic’s observability tool. This ensured effective management of Kubernetes resources and quick response times for potential issues, thereby ensuring operational efficiency.

5. Seamless migration to the cloud

Whether you are rehosting (lift and shift of the app), re-platforming (make some basic changes to the way it runs), or refactoring (the entire app and the services that support it are modified to better suit the new compartmentalized environment), Kubernetes has you covered.

Since K8s runs consistently across all environments, on-premise and clouds like AWS, Azure and GCP alike, it provides a more seamless path to port your application from on-premise to cloud environments. Rather than deal with all the variations and complexities of the cloud environment, enterprises can follow a more prescribed path:

  1. Migrate apps to Kubernetes on-premise. Here you are more focused on replatforming your apps to containers and bringing them under Kubernetes orchestration.
  2. Move to a cloud-based Kubernetes instance. You have many options here — run Kubernetes natively or choose a managed Kubernetes environment from the cloud vendor.
  3. Now that the application is in the cloud, you can start to optimize your application to the cloud environment and its services.

Ulta Beauty exemplifies the transformative impact of cloud migration on a retail business in this case study by Sumo Logic. With the help of Kubernetes and a cloud migration strategy, Ulta Beauty was able to significantly enhance their e-commerce growth and security by leveraging microservices and modern application strategies. This led to more efficient, reliable and secure digital operations. Moreover, with Sumo Logic’s full-stack observability, Ulta Beauty gained valuable insights for improving their digital customer experience, crucially supporting their two billion dollar e-commerce channel expansion.

Kubernetes security best practices

Kubernetes host operating system security

The first step is to start with a minimized host operating system that has only the services required to run containers. While there is nothing stopping you from using a full operating system, that option has more services that must be monitored, configured and patched. Examples of minimized systems include Red Hat Project Atomic, CoreOS Container Linux, RancherOS and Ubuntu Core.

Next, you’ll need to enable SELinux, allowing better isolation between processes. Most Kubernetes distributions enable SELinux by default. Another layer of security available is seccomp, which can restrict the actual system calls that an individual process can make by assigning profiles.
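For illustration, here is a minimal pod spec fragment that opts into the container runtime's default seccomp profile; the pod and image names are hypothetical:

```yaml
# Illustrative pod spec: RuntimeDefault applies the container runtime's
# default seccomp profile, blocking rarely used system calls, and
# allowPrivilegeEscalation: false prevents the process from gaining
# more privileges than it started with.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # could also point to a custom Localhost profile
  containers:
    - name: app
      image: example/app:1.0   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
```

A custom profile can tighten this further by whitelisting only the syscalls the workload actually needs.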

As a side note, if you are using a managed container service on a public cloud provider, host security matters less than the other layers, because these services are usually single-tenant: each container runs in its own VM, the provider keeps the OS patched, and there are fewer process-isolation concerns.

Kubernetes network security

Microservices-based architectures continue to increase in popularity for developing and deploying applications and services in container environments. They involve multiple containers interacting within pods and across hosts to provide the full suite of required business functionality.

Most public cloud providers use a single-tenant model for their container runtimes and leverage the access control lists and security groups they have built for their existing computing environments. In this type of deployment, where access is controlled at the point where the compute node joins the network, it is easy to group similar containers under one group, or ACL, limit access, and add extra security features like web application firewalls and API gateways.

A higher level of network segmentation is needed in a multi-tenant environment where multiple containers can and will run on the same host. Multi-tenant environments are more common in private and hybrid cloud computing deployments where there is more control of the hardware layer and higher density is a goal.

Unlike in a single-tenant model, traffic cannot simply be controlled as it enters the individual compute nodes, because there is also traffic within each node, between container processes, that needs to be limited. Kubernetes and the industry have both proven and newer networking plugins that can handle this type of traffic. These are typically based on software-defined networking (SDN) and range from Open vSwitch (OVS) to Project Calico, with service meshes like Istio adding further control at the application layer.
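As a sketch of this in-cluster segmentation, the standard NetworkPolicy resource can restrict which pods may talk to each other. The labels and port below are illustrative, and enforcement requires a network plugin that supports NetworkPolicy, such as Calico:

```yaml
# Illustrative NetworkPolicy: once this policy selects the app=backend
# pods, only pods labeled app=frontend may reach them, and only on
# TCP port 8080; all other ingress traffic to them is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

This gives you segmentation between workloads on the same host, which perimeter ACLs alone cannot provide.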

Securing Kubernetes container images and registries

Let’s assume you’re following best practices on the coding side and only address building, storing and managing container images. For more information on secure coding best practices, both the CMU Software Engineering Institute and OWASP have lists to get you started.

First, let’s address container registries. The best place to retrieve a container image, either to use as a base for in-house development or to run as-is, is a known and trusted public registry. While the images hosted there may not be perfect, they have enough eyes on them that security issues are often found and resolved promptly, especially on larger projects. The largest and most prevalent of the public Docker registries is Docker Hub.

Most organizations rely on private registries as part of their container strategy. Private registries allow role-based access control and enterprise-friendly deployment options, such as on-premise hosting, even in the most highly secure environments. Most organizations will even mirror the container images they use from public registries into their private registry to ensure they have a copy of what is running in their environment; you never know when a public registry might have an outage or a project might become unavailable.

Using a private registry also provides the opportunity to scan the repository for known vulnerabilities. Ideally, images will be scanned before they are stored in the registry and again as they are retrieved for deployment by Kubernetes, since new vulnerabilities may have been disclosed since the last scan. Black Duck Software and SonarSource are among the companies that provide solutions to scan applications and container images.

Kubernetes logging and analytics

Kubernetes includes cluster-based logging, which can be shipped to a centralized logging facility. These logs become increasingly valuable when combined with the application logs that are consolidated into the same centralized logging facility. Having a single location for log storage and analysis allows better trending and rapid detection of security and other application incidents.

So you deployed Kubernetes. What next?

So there you have it: five Kubernetes benefits that every CIO should consider, with some real-world examples, too.

But what happens after you deploy Kubernetes? How do you manage Kubernetes? How do you get visibility into Kubernetes? How do you proactively monitor the performance of apps in Kubernetes? How do you secure your application in Kubernetes? That’s where Sumo Logic comes in.

Sumo Logic has a solution built to help your teams get the most out of Kubernetes and accelerate your digital transformation. The solution provides discoverability, observability and security of your Kubernetes implementation and helps you manage your apps better. Want to explore Sumo Logic’s Kubernetes solution? Learn more in our Kubernetes Monitoring ebook.

Navigate Kubernetes with Sumo Logic

Monitor, troubleshoot and secure your Kubernetes clusters with Sumo Logic's cloud-native SaaS analytics solution for K8s.

Melissa Sussmann

Lead Technical Advocate

Melissa Sussmann is a technical evangelist with 11+ years of domain expertise and experience as an engineer, product manager, and product marketing manager for developer tools. She loves gardening, reading, playing with Mary Lou (Milou, the poodle), and working on side projects. She is a huge advocate for open source projects and developer experience. Some past projects include: running nodes on the lightning network, writing smart contracts, running game servers, building dev kits, sewing, and woodworking.
