As cloud infrastructure matures, reliable and secure management of containers across multiple cloud providers becomes increasingly important, accelerating the adoption of Kubernetes (K8s). Orchestration technologies like Kubernetes automate the deployment and scaling of containers, and they also help ensure the reliability of the applications and workloads running on them.
Modern applications are often built, deployed, and managed as container images. At the same time, developers are drawn to serverless technologies so they can run their code without having to worry about infrastructure. AWS now helps you bridge the gap between these two paradigms by allowing you to package and deploy serverless AWS Lambda functions as container images. This lets you harness the power of AWS Lambda without having to rewrite or modify your existing container-based development workflows.
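As a minimal sketch of that workflow (the Python version tag, file name, and handler name here are illustrative placeholders, not prescribed by AWS), a Lambda-compatible container image can be built from one of the AWS-provided base images, which already include the Lambda runtime interface client:

```dockerfile
# AWS-provided Lambda base image for Python (tag is an example)
FROM public.ecr.aws/lambda/python:3.12

# Copy the function code into the image's task root
COPY app.py ${LAMBDA_TASK_ROOT}

# Tell the runtime which handler to invoke: <module>.<function>
CMD ["app.handler"]
```

The resulting image is pushed to Amazon ECR and referenced when creating the function, so the familiar build-tag-push container workflow carries over unchanged.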
Kubernetes, as a platform, is a comprehensive set of tools for orchestrating containers at scale. It has a modular architecture of components, each with a defined purpose. For example, the scheduler assigns each pod to a suitable node, while kube-proxy maintains the network rules on each node that route Service traffic to the right pods.
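To make that division of labor concrete, consider a minimal Pod manifest (the names and labels are illustrative): the scheduler matches the resource requests and nodeSelector below against node capacity and labels to pick a node, and kube-proxy later programs the routing for any Service whose selector matches this pod's labels.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web          # a Service selecting app=web is routed to this pod by kube-proxy
spec:
  nodeSelector:
    disktype: ssd     # the scheduler only considers nodes carrying this label
  containers:
  - name: web
    image: nginx:1.27
    resources:
      requests:
        cpu: 250m     # the scheduler checks these requests against node capacity
        memory: 128Mi
```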
Kubernetes is first and foremost an orchestration engine, with well-defined interfaces that allow for a wide variety of plugins and integrations, making it the industry-leading platform for running the world's workloads. From machine learning to the applications a restaurant needs, Kubernetes has proven it can run nearly anything.
Kubernetes is an open-source container management system developed by Google and made available to the public in June 2014. Its goal is to make deploying and managing complex distributed systems easier for developers working with Linux containers. It was designed by Google engineers with experience writing applications that run in a cluster.
Automation is a key component in the management of the entire software release lifecycle. While we know it is critical to the Continuous Integration/Continuous Delivery (CI/CD) process, it is now becoming equally essential to the underlying infrastructure you depend on. As automation has increased, a new principle for managing infrastructure, Infrastructure as Code, has emerged to prevent environment drift and ensure your infrastructure is consistently and reliably provisioned.
Kubernetes is an extremely capable technology, but without the right direction it can behave in unwanted or unexpected ways. As with most "smart" technologies, it is only as smart as its operator. To set teams up for success with Kubernetes, it is vital that they keep a pulse on their clusters. Here are five ways engineers can identify loose ends when setting up a Kubernetes cluster and ensure the healthiest workloads possible.
The last fifteen years have seen huge increases in developer productivity for several reasons, including the mainstream arrival of open source and the ability to better emulate target environments. In addition, the process of resetting a development environment to the last known stable state has been vastly improved, first by Vagrant and then by Docker.
How do you migrate a production system to Kubernetes with confidence? Lior Mechlovich is an SRE for a cloud platform made up of dozens of microservices spanning 10+ teams and 5+ countries. Migration is difficult and risky. In this talk, Lior shares his experience and the lessons learned migrating to Kubernetes: how they trained teams, gained visibility, and triple-checked each phase of the migration.
Kubernetes differs from traditional environments in several key ways that push the limits of traditional application monitoring. Because of its distributed, ephemeral nature, most existing solutions fail to provide the visibility we expect, resulting in longer resolution times. Examining these potential pitfalls can guide us as we take a fresh look at Kubernetes management and monitoring.