Jez Humble is the build and release principal for ThoughtWorks Studios and product manager for ThoughtWorks' release management and continuous integration tool Cruise. He's co-written a book that will hit shelves this year. In this interview, Jez talks about continuous delivery, collaboration obstacles, and the value of defining risk.
Sticky ToolLook: You've co-written (with David Farley) a book, Continuous Delivery: A Handbook for Building, Deploying, Testing, and Releasing Software, that comes out later this year. Would you tell us about it?
Jez Humble: When helping organizations deliver software, one of the most serious problems we encounter is how hard it is to get software from dev complete to release. Often, people find their software is not fit for use when they try to deploy it into a production-like environment; perhaps it doesn't meet capacity requirements, or it can't even be deployed. We see multi-week, unplanned stabilization and rearchitecture phases, and painful deployment processes that mean weekends spent at work. The thesis of the book is that this is unnecessary. The solution is automation of the build, deploy, test, and release processes; automation of infrastructure, environment, and database management; and processes that enable not only incremental development but also incremental, continuous deployment.
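As a rough sketch of what that automation might look like in practice (the build command, script names, and environment labels here are purely illustrative, not taken from the book), a fully scripted build-deploy-test cycle could be driven like this:

    import subprocess

    def run(cmd):
        # Stop the pipeline immediately if any step fails.
        subprocess.run(cmd, shell=True, check=True)

    def build():
        run("ant clean package")  # compile and produce a versioned artifact

    def deploy(environment):
        # The same deployment script runs against every environment,
        # so it has been exercised many times before the production release.
        run(f"./deploy.sh --env {environment} --artifact build/app.war")

    def smoke_test(environment):
        run(f"./smoke_test.sh --env {environment}")

    if __name__ == "__main__":
        build()
        for env in ("ci", "uat", "staging"):
            deploy(env)
            smoke_test(env)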
The book sets out all these techniques, along with the principles and practices behind them. It's a big book, because one of our strong beliefs is that making software delivery less painful requires a holistic approach, one that starts with project inception and extends all the way through operations and support. So we cover the whole gamut, from configuration management to automated testing (including unit, acceptance, and capacity testing), and even topics such as how to componentize your applications and patterns for using version control.
Deployments should be a frequent, low-risk, entirely automated, push-button process, and we have been able to achieve this on enough projects (including some very large ones) that we are confident any organization willing to try can do it, too. The key pattern that enables push-button deployments is the deployment pipeline, which forms the core of the book. The deployment pipeline is an extension of continuous integration that allows you to deploy any build of your choice into any of your environments and trace back from what's in production (or any other environment) to the revision in version control that it came from. By creating a system that implements this pattern, you create a pull system in which everybody (developers, testers, and operations) can self-service the operations they require, seeing exactly which versions of the software are available for them to deploy and which stages in the delivery process each build has successfully passed through. This removes many of the inefficiencies involved in delivering software: testers waiting for new builds to be deployed, operations staff frustrated at having undeployable code thrown over the wall, and developers unable to get rapid feedback on the production readiness of their code.
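The traceability Jez describes can be pictured with a small model (a hypothetical sketch; the build numbers, revisions, and stage names are invented, and this is not code from Cruise or the book):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Build:
        number: int
        revision: str           # the version-control revision this build came from
        stages_passed: tuple    # e.g. ("commit", "acceptance", "capacity")

    # Which build is currently deployed in each environment.
    environments = {
        "uat":        Build(112, "4f2a9c1", ("commit", "acceptance")),
        "production": Build(108, "0d77b3e", ("commit", "acceptance", "capacity")),
    }

    def revision_in(environment):
        # Trace back from any environment to version control.
        return environments[environment].revision

    def deployable_builds(builds, required_stage):
        # Self-service: testers or operations pick any build that has
        # passed the stages they care about.
        return [b for b in builds if required_stage in b.stages_passed]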
Getting to a position where you are able to release new versions of your application frequently requires discipline: a comprehensive automated testing strategy; strong configuration management of code, infrastructure, environments, and databases; and an incremental approach to development. But the benefits are enormous. Using the techniques we describe, you can deliver valuable, high-quality software to users in days or weeks rather than months, and you can release changes in a matter of hours or even minutes (on small projects), while improving your operations team's ability to meet their SLAs. Everybody wins!
STL: What are some of the obstacles to collaboration in the deployment pipeline?
JH: The main thing is getting everybody on board. Unless your organization is pretty mature in its capabilities, everyone involved in delivery will have to change their way of doing things to some extent. Operations people, testers, and developers are often quite distrustful of each other, usually for good reason: developers tend to give too little thought to the impact their changes will have on the stability of production environments, and operations people are loath to spend their time on something that will result in more changes when they are already swamped. However, we have tried to make it clear that it doesn't have to be a zero-sum game. If people release more frequently, and do so from the beginning of projects, the deltas will be very small, the deployment process will have been tested much more often, and everybody will have had much more feedback on the production readiness of the application from the beginning of development, so the risk of each individual deployment becomes much, much lower.
STL: What is the value of being able to define/quantify risk when delivering software, and what are some techniques or tools for doing so?
JH: Understanding what contributes to risk in your delivery process is what allows you to mitigate those risks, and if you can reduce risk, you can save money. The main risks we address in the book are the risk of unplanned work when you discover your software is not fit for use, and the risk of complex, manual deployment processes that can lead to panic, rollbacks, late nights, and downtime. There's also an opportunity cost for IT in having to make your software deployable when you could instead be working on new features and, of course, an opportunity cost to the business in not having the software live because it takes weeks to release a new version.
Really, the techniques all rely on the same basic principle: if it hurts, do it more often, and bring the pain forward. Is it painful to deploy? Then deploy continuously from the beginning of your project, and deploy the same binaries using the same deployment process to every environment, so that you've tested the deployment process hundreds of times by the time you come to deploy to production. Is it hard to make high-quality software? Then build quality in by testing throughout the project: automated unit, acceptance, component, and integration tests, along with manual testing such as exploratory testing, showcases, and usability testing.
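For example, the fast, isolated unit tests that run on every check-in in the first stage of a pipeline might look like this (a generic sketch using Python's unittest; the domain function is invented for illustration):

    import unittest

    def price_with_tax(net, rate=0.20):
        # Hypothetical domain logic under test.
        return round(net * (1 + rate), 2)

    class PricingTest(unittest.TestCase):
        # Tests like these give feedback in seconds; slower acceptance
        # and capacity tests run in later stages of the pipeline.
        def test_applies_default_tax_rate(self):
            self.assertEqual(price_with_tax(100.00), 120.00)

        def test_zero_rate_leaves_price_unchanged(self):
            self.assertEqual(price_with_tax(100.00, rate=0.0), 100.00)

    if __name__ == "__main__":
        unittest.main()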
In terms of tools, there are really three kinds you need: a good version control system (such as Subversion, Perforce, Git, or Mercurial); effective testing tools, both for unit and component tests (such as xUnit) and for automated acceptance tests (such as Twist, Cucumber, or Concordion, along with WebDriver, Sahi, or White); and a continuous integration and release management tool (such as Cruise, TeamCity, or Hudson).
While I am biased because I am the product manager, Cruise (the commercial tool from ThoughtWorks Studios) is designed from the ground up to let you implement deployment pipelines. The forthcoming 2.0 release includes powerful features: the ability to model your environments, so you can deploy any version of any application to any environment and manage multiple services that share the same environment (e.g., integration testing with a SOA); and the ability to distribute your tests on the build grid and have Cruise tell you which tests failed and, if some tests have been failing for a while, which check-in broke each test and who was responsible.