
March 2, 2016 By Chris Riley

Local Optimization - The Divide Between DevOps and IT

The delivery chain battleground is hardly gone. Developers still want IT to back off, and IT wants to make sure developers don’t break things. In many organizations that have embraced DevOps, the animosity has been drastically reduced, but the delegation of authority still exists — a boundary that both sides of the house protect with their lives.

Today, however, with the ability to make infrastructure simpler and to increase visibility without manual effort, IT can stop optimizing environments locally and focus on the big picture.

The average wait time (as of today) for a development server is about two weeks. With containerization like Docker, that time drops to minutes. That's great for local dev. But that alone is not compelling enough for most IT departments to take on the added uncertainty in production. This means the stalemate between developers wanting to own their development boxes and IT wanting to make sure the house does not burn down is alive and well. IT knows that problems can start in local development environments, so they want to keep an eye on those systems.
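To make the "minutes, not weeks" point concrete, here is a minimal sketch of spinning up a throwaway development container with the Docker SDK for Python. It assumes a local Docker daemon and the docker package are installed; the image, command, and labels are illustrative choices, not a prescribed setup.

```python
# Spin up a throwaway development container in seconds instead of
# waiting weeks for a provisioned server. Requires a running Docker
# daemon and `pip install docker`. Image, command, and labels are
# illustrative only.
import docker

client = docker.from_env()

dev_box = client.containers.run(
    "python:3.12-slim",           # base image for the local dev environment
    command="sleep infinity",     # keep the container alive for interactive use
    name="local-dev-box",
    labels={"owner": "dev-team", "purpose": "local-dev"},  # makes it discoverable later
    detach=True,
)

print(f"Started {dev_box.name} ({dev_box.short_id})")

# Tear it down when finished -- nothing to hand back to IT.
dev_box.stop()
dev_box.remove()
```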

The reasons that IT feels the need to optimize locally, down to developer machines, are:

  1. Governance: Even today, developers can get away with not considering governance. But IT cannot. (They would be out of a job.) Each component of an application is a potential threat (for example, lack of knowledge of how a Docker image was created, or a known vulnerability in an OSS component).
  2. Variables that make it into production: At the rate developers move, it is easy for risky changes made in local environments to find their way into production. A local configuration might be transcribed into production deliberately, or slip in by accident through new or updated scripts.
  3. Consistency: As applications grow more complex, it becomes more difficult to know what is out in the wild, especially when organizations embrace a microservices model. Consistency matters: as cohorts of infrastructure are laid out, it provides clarity about which groups of instances can be trimmed, which pose a risk, and so on. By dealing with infrastructure in logical units, there is no need to do discovery or waste time on a machine-by-machine basis; instead, there are referenceable configurations that are known and attributable to a collection of machines. IT knows how to do this with their configuration management tools, but developers often do not apply the same practice to their own environments (which could also be in the cloud, or, as noted above, bleed into production).
  4. Visibility: With developers calling the shots on infrastructure, it is conceivable that IT will not even know what is out there. That inhibits their ability to confidently validate the system and makes it hard to even measure its stability and risk. Without that visibility, they have nothing. IT has to know what is in production (a minimal inventory sketch follows this list).
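To address the visibility concern without machine-by-machine discovery, IT can simply ask the runtime what is out there and compare it against known configurations. The sketch below, again using the Docker SDK for Python, is only an illustration of the idea; the "owner" label and the approved-image list are hypothetical, not a real policy.

```python
# Inventory running containers so IT can see what is "out in the wild"
# without manual, machine-by-machine discovery. The approved-image list
# and the "owner" label are hypothetical examples.
import docker

APPROVED_IMAGES = {"python:3.12-slim", "nginx:1.27"}  # hypothetical policy

client = docker.from_env()

for container in client.containers.list():
    tags = container.image.tags or ["<untagged>"]
    owner = container.labels.get("owner", "unknown")
    flagged = not any(tag in APPROVED_IMAGES for tag in tags)
    status = "REVIEW" if flagged else "ok"
    print(f"{container.name:20} {','.join(tags):30} owner={owner:12} [{status}]")
```

The same pattern extends to cloud instances or configuration management inventories: the point is that visibility comes from querying and reporting, not from gatekeeping provisioning.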

These concerns will not go away anytime soon, but how they are addressed is changing dramatically. The good news is that even if a modern development team does not embrace the heterogeneous aspects of DevOps, and segregation of duties is still the practice, these concerns can all be put to rest. Tools today provide all the relevant data, in real time, to keep IT up to speed even when they are not provisioning the systems themselves. On top of that data sit alerting and analytics systems that make it actionable and consumable.

The keys to success and stopping local optimization are:

  1. Have a little faith: Stop being a helicopter parent. It actually ends up being more work for you, and a never-ending battle. The reality is that accountability is being distributed across entire development teams. Given the modern-day pressure on organizations to be more secure and have happier users, developers are sweating bugs and security exploits much more. Sometimes I wonder if former Ashley Madison developers will ever work in dev again. And as developers understand that their application is the entire stack, and orchestration is simplified, there is greater interest in treating infrastructure as part of the application, as code.
  2. Use the tools: Some key tooling makes confidence possible: component monitoring, log analysis, and alerting. Component monitoring allows policies to be set for components before they go out the door, and flags components already in use when an issue such as a known vulnerability surfaces. Log analysis and alerting go hand in hand, and are often in the same tool, but not just any traditional log analysis will work. The logging tool needs the smarts to focus IT on the relevant data at the right time: reduction of data to its critical parts, anomaly detection, and the like. Agent-based log analysis also supports the same consistency found with configuration scripts, because the agents log from within, giving indirect visibility into configurations.
  3. Focus on insights, not prevention: Effort should be spent on deciding what to look at, and how to visualize it and obtain useful insights, rather than on trying to prevent change. Relying on systems to scream when something is far out of the norm, combined with standardized dashboards for common operational elements, provides nearly complete coverage; a toy example of such an out-of-the-norm check follows this list. It is now more about responding than preventing. Architecting insights also serves as documentation and makes it possible to report to leadership and developers. (You can coach developers on ways to build in more consistency by example, in a common language and a visual way.)
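As a rough illustration of letting the system scream when something is out of the norm, the toy check below flags a window of log lines whose error rate spikes well above the running baseline. The window size and threshold are arbitrary, and a real log analytics platform does this with far more sophistication.

```python
# Toy anomaly check: alert when the recent error rate spikes well above
# the running baseline. Window size and threshold are arbitrary; real
# log analytics platforms are far more robust than this.
from collections import deque

WINDOW = 100         # number of recent log lines to evaluate
SPIKE_FACTOR = 3.0   # alert if the window error rate exceeds 3x the baseline

def watch(lines):
    recent = deque(maxlen=WINDOW)
    total = errors = 0
    for line in lines:
        is_error = "ERROR" in line
        total += 1
        errors += int(is_error)
        recent.append(is_error)
        baseline = errors / total
        window_rate = sum(recent) / len(recent)
        if len(recent) == WINDOW and baseline > 0 and window_rate > SPIKE_FACTOR * baseline:
            yield f"ALERT: recent error rate {window_rate:.1%} vs baseline {baseline:.1%}"

# Example with a synthetic log stream: quiet traffic, then a burst of errors.
if __name__ == "__main__":
    sample = ["INFO request ok"] * 500 + ["ERROR upstream timeout"] * 60
    print(next(watch(sample), "no anomaly detected"))
```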

Ceasing to optimize infrastructure locally is not a matter of opinion. It's a matter of necessity. The pace at which development moves and the increased complexity of applications (along with the fact that full-stack deployments just make sense) require that IT unlock the server cabinet. But that does not mean IT's ability to execute on its responsibilities goes away. All the strategies and platforms are there to maintain oversight, to an even greater extent and with greater efficiency.

About the Author

Chris Riley is a bad coder turned DevOps analyst. His goal is to break the enterprise barrier to modern development. He can be found on Twitter @HoardingInfo.
