Mutable server infrastructure is infrastructure that is continually updated, tweaked, and tuned to meet the ongoing needs of its purpose. Over time, this extends to every server and switch, each of which becomes unique.
- Continuous delivery is built on versioning and automation to push deployments to the various environments in its pipeline.
- Virtualization (both software and hardware) across networking, servers and storage is the primary technology that makes immutable server infrastructure possible at any scale.
- Infrastructure-as-code is the ideal way to create immutable server infrastructure.
Traditional mutable infrastructures originally developed when the use of physical servers dictated what was possible in their management, and they continued to develop as technology improved over time.
In contrast, immutable infrastructures were designed from the start to rely on virtualization-based technologies for fast provisioning of architecture components, like cloud computing's virtual servers. The speed and low cost of creating new virtual servers make the immutable server infrastructure, or immutable infrastructure paradigm, practical.
The benefits of an immutable infrastructure include more consistency and reliability in your infrastructure and a simpler, more predictable deployment process. It mitigates or entirely prevents issues common in mutable infrastructures, like configuration drift and snowflake servers.
The concept of immutable server infrastructure is to build the server infrastructure components to exact specifications. No deviation, no changes. It is what it is. When a change to a specification is required, a whole new set of server infrastructure is provisioned based on the updated requirements, and the previous server infrastructure is taken out of service as it is obsolete.
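The replace-rather-than-modify idea can be sketched with a toy shell example, where a "server" is just a directory built from a versioned spec (the `build` function and version labels are purely illustrative):

```shell
# Toy sketch of immutable infrastructure: a change to the spec means
# provisioning a whole new set and retiring the old one, never editing
# the running set in place.
build() { mkdir -p "server-$1"; echo "spec $1" > "server-$1/config"; }

build v1            # initial infrastructure, built to spec v1
build v2            # spec changed: provision a whole new set from v2
rm -rf server-v1    # previous infrastructure is obsolete; take it out of service
```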
Virtualization (both software and hardware) across networking, servers and storage is the primary technology that makes immutable infrastructure possible at any scale. Virtualization is at the core of the modern data center and makes cloud computing possible. Provisioning and retiring physical hardware to accommodate every change is cost and time prohibitive. That is why mutable infrastructure has been the norm in all but the biggest companies until very recently, when virtualization became commonplace. Containers (ex: Docker) are the newest trend in the immutable infrastructure space and are simply another virtualization layer.
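Containers make this concrete: a container image is built once from a versioned recipe and never patched in place. A minimal sketch, assuming a PHP app like the one deployed later in this guide (the base image tag and app name are illustrative):

```shell
# Write a minimal Dockerfile; php:7.4-apache is an assumed example tag.
cat > Dockerfile <<'EOF'
# Pinned base image: the version is fixed in the recipe, not patched live
FROM php:7.4-apache
COPY index.php /var/www/html/
EOF

# Every change produces a new, separately tagged immutable image, e.g.:
#   docker build -t my-app:1.0.1 .
```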
What is the best way to make an object reproducible? There are three basic steps:
- Document how to create the object.
- Create scripts that will build and assemble the components into the object as described in the documentation.
- Automate the process.
Track each version of the documentation and script through version control to record changes.
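Those steps might look like the following hypothetical `build.sh`, checked into version control next to the documentation it implements (every name and path here is an assumption for illustration):

```shell
#!/bin/sh
# build.sh -- automates the documented build steps; the version label
# and output layout are illustrative, not a real project convention.
set -e                          # stop on the first error; no partial builds
VERSION="1.0.0"                 # tracked via git tags alongside this script
OUT="build-$VERSION"

mkdir -p "$OUT"
echo "built to spec $VERSION" > "$OUT/manifest.txt"
echo "Built $OUT"
```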
DevOps is an overarching term that includes the culture and tools that strive towards agile development, with continuous delivery as the Holy Grail.
The core philosophy of continuous delivery is to deploy a package and its dependencies the same way every time, so there is no doubt that the environment is the same. The dependencies are where immutable infrastructure comes into play. The infrastructure build, scripted and part of the package, eliminates the biggest source of problems during deployments.
Continuous delivery is built on versioning and automation to push deployments to the various environments in its pipeline.
Continuous delivery uses automation with embedded testing to make deployments so routine they become mundane.
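A sketch of such an embedded check, with the HTTP call stubbed out so the pass/fail logic is visible (the function name and the idea of an `APP_URL` are assumptions, not a real pipeline API):

```shell
# check_deploy -- hypothetical post-deploy smoke test; the pipeline would
# promote the new instances only if this returns 0.
check_deploy() {
  status="$1"   # in a real pipeline: status=$(curl -s -o /dev/null -w '%{http_code}' "$APP_URL")
  if [ "$status" = "200" ]; then
    echo "smoke test passed"
  else
    echo "smoke test failed: HTTP $status"
    return 1
  fi
}

check_deploy 200   # prints: smoke test passed
```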
Immutable infrastructure as the underpinning component of the currently running application version makes operations much easier. If there is a problem, rebuild that instance. If the load increases, spin up a couple of extra instances without thinking about it.
Need to include a security patch to one or more components in the infrastructure? Deploying it to the existing running production instances causes change, change increases risk, and people love to validate changes in production manually. Take advantage of the fact that a continuous delivery pipeline creates immutable infrastructure. Update the scripts, push them into version control, and let the pipeline worry about deployments and testing. Following the same steps every time, with as much automation as possible, new instances come online, and old unsecured instances disappear.
Let's walk through a simple scenario that shows just how straightforward immutable infrastructure can be. A code check-in can trigger it, which is the first step toward a continuous delivery pipeline.
Deploying a simple PHP integration on Heroku
Heroku is a developer-friendly platform for deploying applications. It is an easy first step to immutable infrastructure. With every application you create, you pick a runtime version used until the system has to retire it, usually for support or security reasons.
Steps to build and deploy an application called yet-another-test-app:
Step 1 – Create the Integration
First, let's create a simple application that prints the environment information (the `index.php` here just calls `phpinfo()` to do that):

```shell
$ mkdir yet-another-test-app
$ cd yet-another-test-app
$ echo "# yet-another-test-app" >> README.md
$ echo '<?php phpinfo();' > index.php
$ composer require "php:^5.6|^7.0"
```
Step 2 – Enable version control
```shell
$ git init
$ git add .
$ git commit -m "first commit"
```
Step 3 – Select the web server
Now, we set the type of engine we run on Heroku:

```shell
$ echo "web: vendor/bin/heroku-php-apache2" > Procfile
```
Step 4 – Create a repository
Now it's time to create a server-side repository accessible to team members and Heroku.
First, create an account on GitHub and a public repository (so it is free). Next, push the local git repository to GitHub:

```shell
$ git remote add origin https://github.com/vincepower/...
$ git push -u origin master
```
Step 5 – Deploy to Heroku
Create an account on Heroku.com and follow the wizard to create an integration connected to the repository you created on GitHub. You get one dyno (web runtime) for free. Once you have connected it to your GitHub repository, there are two choices. The first is to enable "Automatic Deploys," which will redeploy the application anytime there is a commit on the GitHub repository master branch.
The second option is "Manual Deploy," a one-time deployment, to take advantage of Heroku's immutable infrastructure.
Note: Heroku has an option for a continuous delivery pipeline that is simple and easy to enable and allows additional steps like reviews and a staging environment before production. This guide assumes that you do not need to enable that feature.
Step 6 – Run the Integration
After deployment, the application becomes available at the URL Heroku assigns it (typically https://<app-name>.herokuapp.com). With this step complete, you have now taken advantage of immutable infrastructure created through infrastructure-as-code as part of a continuous delivery workflow.
Sumo Logic is an industry-leading solution that enables IT organizations to engage in more efficient infrastructure management. With Sumo Logic, IT organizations can aggregate data in the form of log files from applications and machines across the network, visualize that data in real-time dashboards and use it to drive infrastructure management decisions. Sumo Logic empowers IT organizations to enhance their infrastructure management processes and procedures:
- Assessing network activity to inform resource allocation
- Discovering user behavior trends that inform operational decisions
- Providing real-time feedback on the health of virtual and physical assets
Complete visibility for DevSecOps
Reduce downtime and move from reactive to proactive monitoring.