Build a better software delivery service

Having problems with software release management? The following case study identifies seven key practices that will help make your next software project a success.

A major U.K. telecommunications provider had a problem. It needed to implement a business-critical supplier switch, and this in turn required the firm to reengineer its billing and account management systems. These systems had to be in place within three months; otherwise, the organization risked losing hundreds of millions of pounds and seeing its stock value decline. But the firm's development processes were poor, and its release management was extremely problematic and inconsistent.

The company brought us in to help deliver the software within the time constraints and to turn around a failing release management process. Within three months, we'd released both the pending releases and two scheduled releases of the reengineered applications. Most important, we established a straightforward and lightweight release management process to ensure that future releases would happen on time and to the required quality. Follow along as we show you how we did it -- including the mistakes we made.


You can't begin to fix something without understanding what it is, and how and where it is broken. Our first step in improving our client's release management system was to form a detailed picture of the current release process. We began with a number of walk-through sessions with key individuals involved in the software process.

From these sessions we determined that our starting point was pretty bad. When we joined the project, there was software still waiting to be released two months after being completed. Test environments were limited and not managed, so they were regularly out of date and could not be used. Worse still, it took a relatively long time to turn around new environments and to refresh existing ones.

When we arrived on the scene, regression testing was taking up to three months to manually execute. It was usually dropped, significantly reducing the quality of any software that made it to release. Overall, morale and commitment were very low. These people had never been helped to deliver great software regularly, and it had worn them down.


Once we got a picture of the current state of the process, we set about establishing a regular release cycle.

If the engineering team is the heart of the project, the release cycle is its heartbeat. In determining how often to release into production, we had to understand how much nonfunctional testing was needed and how long it would take. This project required regression, performance and integration testing.

Establishing a release cycle is vital for several reasons. It creates an opportunity to meaningfully discuss nonfunctional testing that the software may need. It announces a timetable for when stakeholders can expect to get some functionality; if they know that functionality will be regularly released, they can get on with agreeing what that functionality will be. It creates a routine with which all teams can align (including marketing and engineering). And it gives customers confidence that they can order something and it will be delivered. Your release cycle must be as accurate as you can make it, not some pie-in-the-sky number that you made up during lunch. Before you announce it, test it out. There is nothing worse for a failing release process than more unrealistic dates!

We started out by suggesting a weekly cycle. That plan proved unfeasible; the client's database environment could not be refreshed quickly enough. Then we tried two-week cycles. There were no immediate objections from the participants, but it failed the first two times. In the end, two weeks was an achievable cycle, once we overcame some environment turnaround bottlenecks and automated some of the tests.

Finally we established a cycle whereby, every two weeks, production-ready code from the engineering team was put into system test. Then two weeks later, we released that code into production. Remember: Your release cycle is not about when your customer wants the release. It's about when you can deliver it to the desired level of quality. Our customers supported the cycle because we engaged them in determining it, but their wishes are only one consideration in setting release regularity.
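The cadence above is pipelined: the build that enters system test today reaches production on the same day the next build enters system test. A small sketch makes the overlap concrete (the dates are purely illustrative, not from the project):

```python
import datetime

def release_calendar(first_drop, cycles):
    """Yield (system_test_date, production_date) pairs for a two-stage,
    two-week release cadence: code enters system test, and the same build
    reaches production two weeks later."""
    for i in range(cycles):
        test_date = first_drop + datetime.timedelta(weeks=2 * i)
        yield test_date, test_date + datetime.timedelta(weeks=2)

# Each cycle's production date coincides with the next cycle's
# system-test drop, so a release goes out every two weeks.
for test_date, prod_date in release_calendar(datetime.date(2024, 1, 1), 3):
    print(f"system test: {test_date}  ->  production: {prod_date}")
```

Note that once the pipeline is primed, stakeholders see a production release every two weeks even though each individual build spends four weeks in flight.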


If there is one single guiding principle in engineering (or reengineering) a process, it is to do a little bit, review your results, and then do some more. Repeat this cyclic approach until you get the results you want.

Lightweight processes are those that do not require lengthy bureaucratic approvals or endless meetings to get agreement. They usually require only the minimum acceptable level of inputs and outputs. What they lack in bulk and bureaucracy, they make up for in responsiveness to change and breadth of adoption.

Underpinning this approach is the thorny issue of documentation. You need to record what you did and how you did it. Otherwise, what do you review and how do you improve? We don't mean the kind of documentation that endangers rain forests and puts its readers to sleep. We mean documentation that people (technical and otherwise) can read and act on.

The engineering team chose Confluence, a commercial tool, to collaboratively document their work. They used the software to create minimal but effective documentation of what they were agreeing to build in every cycle of work. They recorded what they built, how they built it and what was required to make it work. We saw the value in this approach and rolled it out (both the approach and the tool) to everyone else involved in the process.

Initially, we suggested a sequence of tasks to release the software we got from the engineering teams. It covered how we took delivery from the source control management system, what the packages would be called, and how and on which platforms each element (executable code, database scripts, etc.) would be run. Then we did a dry run, using dummy code for each element. We tested our sequence, documenting what we did as we did it. This formed the basis of the installation instructions.
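A dry run like this can be as simple as an ordered list of named steps, executed and logged one by one, so that the transcript doubles as the first draft of the installation instructions. The sketch below illustrates the idea; the step names and commands are invented placeholders, not the project's actual deployment sequence:

```python
import datetime
import subprocess

# Ordered deployment steps: (description, shell command).
# Every command here is a stand-in echo, as in our dummy-code dry run.
STEPS = [
    ("fetch release from source control", "echo 'export tagged release'"),
    ("deploy executable code", "echo 'copy binaries to app server'"),
    ("run database scripts", "echo 'apply schema and data scripts'"),
    ("smoke-test the deployment", "echo 'hit health-check URL'"),
]

def dry_run(steps, log_path="install-instructions.txt"):
    """Execute each step in order, recording what was done and whether it
    worked; stop at the first failure. Returns the log lines."""
    lines = []
    for name, cmd in steps:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        lines.append(f"[{stamp}] {name}: {cmd!r} -> exit {result.returncode}")
        if result.returncode != 0:
            lines.append(f"  FAILED: {result.stderr.strip()}")
            break
    with open(log_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

if __name__ == "__main__":
    for line in dry_run(STEPS):
        print(line)
```

The point of the log is the review loop: the people doing the real deployment walk through it, amend it, and the amended version becomes the next cycle's instructions.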

The next step was to get the people who would be deploying the real release to walk through another dry run, using only our documentation. They extended, amended and improved our instructions as they went. The process became more inclusive, with everyone contributing to the documentation; because people had been part of its definition, it was adopted more widely and with better quality.

After each release, we reviewed the process. We examined the documentation and identified changes made during the release. Every time, we looked at how the documentation could be improved and fed the enhancements back into the process.


Your release infrastructure is anything that needs to be in place to deploy the software and to enable users to use it. Your obligation to the customer is not just that you build great software; it is that it's available for them to access and use.

Crucial to a good release process is figuring out, before the engineering team has finished building the software, what you need to have in place to make it available to the customer.

The release infrastructure covers the hardware, storage, network connections, bandwidth, software licenses, user profiles and access permissions. Human services and skills are part of the release infrastructure, too. For example, if you require specialist software installed and configured, it's not smart to leave the availability or cost of those skills out of your infrastructure plan.

It is critical that you discover, as early as you can, hidden bottlenecks in procuring the required hardware or the missing skills (say, to configure secure networks). You need to resolve them before they hold up your delivery.

This isn't trivial. We strove to get our release infrastructure in place as soon as we started on the project. Even after six weeks' lead time, we were still waiting on specialist memory and hard drives for the test servers.


Automation enables you to do repetitive tasks without tying up valuable human resources. Standardizing ensures that your automation's inputs and outputs are consistent every time. Prior to our involvement with the project, the engineering teams manually crafted a deployable package. A new package was not guaranteed to be the same as the last one; in fact, it was not even guaranteed to be the software they had been building, much less guaranteed to work. It often took the tech staff days to create a package with the features they were delivering in a structure that could be deployed.

We immediately drew up structure and acceptance criteria for the deployable package the team was delivering to us and helped them standardize its packaging. This triggered the implementation of automated processes to build the software in that consistent structure for every release point.

Suddenly, the packaging of the software for release was not even an issue. Because we had automated the verification of the acceptance criteria -- for example, that code must be unit tested prior to delivery and test deployed to ensure that it could be deployed -- we could guarantee that the package would run. As a result, we were able to package, version, test and deploy finished code with a single command in a very short time.
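The essence of that single command is: check the acceptance criteria first, and refuse to package anything that fails them. Here is a minimal sketch of the pattern; the specific criteria (a unit-test report, a version file) and the file layout are our assumptions for illustration, not the team's actual checks:

```python
import hashlib
import pathlib
import tarfile

def verify(release_dir: pathlib.Path) -> list[str]:
    """Return a list of acceptance-criteria failures (empty means releasable).
    These two checks are illustrative stand-ins for the real criteria."""
    failures = []
    if not (release_dir / "unit-test-report.txt").exists():
        failures.append("no evidence of unit testing")
    if not (release_dir / "VERSION").exists():
        failures.append("package is not versioned")
    return failures

def package(release_dir: pathlib.Path, out: pathlib.Path) -> str:
    """Build the standard deployable package and return its checksum,
    refusing to package anything that fails the acceptance criteria."""
    failures = verify(release_dir)
    if failures:
        raise RuntimeError("not releasable: " + "; ".join(failures))
    with tarfile.open(out, "w:gz") as tar:
        tar.add(release_dir, arcname=release_dir.name)
    return hashlib.sha256(out.read_bytes()).hexdigest()
```

Because the gate is automated, an unverified build cannot be packaged by accident, and the checksum gives every release a stable identity for versioning and later audit.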

But automation did not stop there. With each development cycle, we had even more regression tests to do. The existing regression tests would have taken three months to manually execute; as a result, the releases were never properly tested. Our newly established release cycle meant that a release had to be regression, performance and integration tested in two weeks for us to be able to release it into production.

We handled the different types of testing by giving integration and performance testing separate environments of their own. But how would we fit three months of regression tests into a two-week window? First, we initiated a prioritization exercise. The customer identified the highest-priority regression tests -- the minimum they would accept as proof that the old functionality still worked. Then we set about automating this set. Subsequent acceptance tests also became automated, ensuring that we could regression test every release in hours rather than days.
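The prioritization exercise amounts to tagging each regression test with a customer-assigned priority and an automation status, then running only the automated tests at or above the agreed cutoff inside the release window. The test names and priorities below are invented for illustration:

```python
# Each entry records what the customer decided in the prioritization
# exercise. Priority 1 is the minimum proof that old functionality works.
REGRESSION_TESTS = [
    {"name": "bill-generation",  "priority": 1, "automated": True},
    {"name": "account-switch",   "priority": 1, "automated": True},
    {"name": "tariff-reporting", "priority": 2, "automated": True},
    {"name": "legacy-export",    "priority": 3, "automated": False},
]

def release_suite(tests, max_priority=2):
    """Select the suite to run in the release window: automated tests
    whose priority is at or above the cutoff (lower number = higher)."""
    return [t["name"] for t in tests
            if t["automated"] and t["priority"] <= max_priority]
```

As more tests are automated and promoted, the cutoff can be raised cycle by cycle without ever blowing the two-week window.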


If getting software released is important to you, don't keep it a secret. Our teams' commitment to delivering releases improved once they knew the releases mattered.

We backed up this importance by establishing that the designated release manager would expect the software to be ready when the teams agreed it would be ready. We got the program manager (who effectively was our customer) to explain to the teams why the release was important. (Ultimately it boiled down to losing millions of pounds!)

We requested that the software delivered by the engineering teams conform to a standard (versioned, tested, documented and packaged); we established that we would request this standard package for every release cycle. We needed to explain why we wanted the software in this way (it made our automated process easier and more consistent) and we integrated the team's feedback into the process.

Establishing positive expectation is a really good way to empower everyone involved in the process. We were not given any executive authority, so there was no fear of sanction or sacking. Instead, we tapped into the power of positive expectation to get people on board to help us improve the release process. We had individuals making key decisions (which they never felt able to make before) because "Mike and Tym need this software by Thursday and we said we would deliver it."


No matter how much you spend on hardware, software and fancy processes, without the commitment of team members you will not enjoy sustainable success in releasing your software. Heck, you may not even end up with any software to release.

You probably thought we were going to talk about getting the right people and rewarding them well, or that we would harp on about the tools and skills teams need to do their jobs. The truth is that you know you should get the right people for your teams (the definition of "right" is different from business to business), you should reward them adequately for the value they deliver and yes, you should ensure they have the tools and skills they need.

Our basic assumption is that people are inherently interested in doing good work. If you want the people in your teams to care about your product and about doing a good job, you have to first demonstrate that you care about what is important to them. From the outset of the project, we formed excellent rapport with everyone on the teams, based on mutual respect and understanding. We demonstrated that we were flexible about personal challenges and we did whatever we could to help. Whether this was buying lunch, fetching drinks, organizing training and advice, listening to problems or playing devil's advocate, we did whatever was needed to make each person feel valued as a critical part of the process.

When we came to the project we found a general sense of apathy. Some longer-term permanent employees were simply waiting for the redundancy package; others were never asked to do anything because they had never done anything right. It took a lot of relationship building and investment of time and positive affirmation to get many people back to a point where they cared about delivering personal value to the process.

Release management is a really important part of any software project and is not often given the attention it deserves. These are the seven most important success factors for us in this particular case, though we suspect that they are pretty good ideas for any case.

Good release management takes hard work, resolve and great communication; however, the greatest skill is the ability to review, learn and adapt improvements. Good luck!

Mike Sutton works through Wizewerx Ltd. as an independent IT consultant specializing in high-end Java development solutions, agile coaching, and mentoring and has worked for blue-chip companies in the U.K. and Europe.

Tym Moore is a U.K.-based information services consultant and an owner of Unboxed Consulting, London, England, which specializes in Agile Development.
