‘Legacy system’ has a persistent negative connotation, reinforcing the image of a millstone around an organisation’s neck. The typical legacy system was developed many years ago under different business conditions with a more primitive technology — yet it (or more often part of it) is too valuable to the business to get rid of.
Developing another system from the ground up to harmonise with the present ICT framework is typically expensive and time-consuming.
Measuring the value of the old system to the organisation and the cost of its continued maintenance, against the development of something new or the acquisition of a package — which may take time and effort to accommodate to the firm’s needs — is a complex, technical and financial exercise.
The turn of the century saw a wave of big-bang implementations that swept away old mainframe systems, many of them dating from the 1970s.
Yet, there has been a subtle change in thinking through the decade. Legacy is no longer the dirty word it was, even though some in the industry are using a new term — heritage — to describe the old systems you want to keep while retaining the term ‘legacy’ for those that you don’t.
With the aid of middleware and the growing maturity of the service-oriented architecture (SOA) concept, it is increasingly practical to re-use and re-purpose appropriate legacy or heritage elements among computerised business processes to serve the company in the long term.
“Although the term ‘legacy system’ is occasionally used to refer to ageing ERP solutions and other packages, it most typically describes a custom development or heavily customised package, which an organisation now finds difficult to change or expensive to support,” says Greg Davidson, CEO of the systems division of Auckland-based computer services company Datacom.
The company predicts a healthy income stream in service-oriented architectures to enable clients to achieve better use of legacy systems and databases, according to director Steve Matheson. Datacom will also use the techniques internally.
“We continue to strengthen existing ICT systems rather than approach improvements as new IT projects,” says Matheson. “It is a continual challenge for us to keep our systems ahead of the game considering the industry we are in.
“We do a significant amount of work in this area,” he says, “and we believe it is important for companies not to dismiss out of hand the often hidden business value tied up in their older systems.”
Even if the old programs are irredeemably legacy in character and bound for replacement, it can still be worthwhile carrying them over temporarily as part of the strategy for an orderly introduction of new hardware and software, he says.
Rather than implementing a big-bang change in all offices of a company and imposing a huge re-learning workload on staff, it often makes sense to bridge the gap with re-clothed legacy software as the new software is gradually introduced one site or one function at a time. Instead of a rip-and-replace strategy, the old system is gradually peeled away.
“In our experience, almost all large systems can be carried forward in an incremental way that reduces risk and cost. In many cases that is the only practical path,” he says.
“In the vast majority of cases an incremental strategy [of this kind] is going to reduce risk, increase visibility of progress and keep scope manageable,” says Davidson.
“There are a variety of strategies to take legacy functionality that has been developed in commodity languages and expose it, reuse it, migrate it or preserve it, so it does not all need to be replaced at once,” he says. SOA, though a recent favourite “flavour”, is far from being the only strategy, he says. Old applications can interface with the newer ones through message queues, conventional enterprise application integration middleware or ad-hoc interfacing code.
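One of the interfacing options Davidson mentions — a message queue between old and new code — can be sketched in a few lines. This is a minimal illustration, not anything from Datacom: the product names, routine, and data are all invented, and Python's in-process `queue.Queue` stands in for a real broker or EAI bus.

```python
import json
import queue


# Hypothetical stand-in for a legacy routine (say, a pricing lookup
# that in reality lives inside an old mainframe program).
def legacy_price_lookup(product_code):
    prices = {"A100": 19.95, "B200": 4.50}
    return prices.get(product_code, 0.0)


def serve_one(request_q, reply_q):
    """Consume one request message, call the legacy routine, and
    publish the reply. queue.Queue stands in for a real message
    broker; the new application never calls the old code directly."""
    msg = json.loads(request_q.get())
    reply_q.put(json.dumps({
        "product_code": msg["product_code"],
        "price": legacy_price_lookup(msg["product_code"]),
    }))
```

The point of the pattern is the decoupling: the new system only knows the message format, so the legacy routine behind the queue can later be rewritten or replaced without touching its consumers.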
“Typically, designing a gradual migration strategy is a complex, detailed job, so many organisations declare it too hard and risk failure with an oversized rip-and-replace project,” Davidson says.
If the system concerned is based on package software, it is often worth considering whether the full functionality of the legacy software has been explored, says Edge Zarella, KPMG’s partner in charge of global risk management. And many standard applications, in areas where practice barely changes, have been too hastily labelled legacy and in need of replacement, he says.
“If you are running a general ledger and it’s four or five years old, unless the reporting sucks, why would you really want to change it now?”
Zarella recalls one client in this position who was worried about compliance. All the client needed to do was turn on a number of flags in the system; once it realised this, it avoided spending $200 million to $300 million on an upgrade.
“Before you decide that [package] legacy systems are no good and need to be changed, go and check and see if you’ve utilised their full functionality and ask yourself if you’ve trained your people well enough,” he says.
The Inland Revenue Department has faced a major task in re-architecting its massive FIRST system to interoperate smoothly with later code and software packages. FIRST is very tightly integrated, says CIO Ross Hughson, so it had become increasingly hard to integrate new services, for example those relating to the department’s growing role in instruments of social policy.
Support for the Kiwisaver scheme now represents another major development load, culminating in a suite of programs that will have to interconnect with FIRST.
“We’re at the very early stages of the FIRST transformation project,” says Hughson. “The system is very mature and stable but it will need to change within the next five to seven years.”
The system is written in Cobol but Inland Revenue has a stable team to maintain it, Hughson says. “Legacy systems are hard to change and the more we add around it the harder it is.”
The software to run Kiwisaver, for example, is being developed to interface with FIRST legacy code through an EAI interface.
The starting point for transforming FIRST is to “go back to first principles and define an enterprise architecture”, Hughson says.
Such redefinition is preferable, in his opinion, to automated translation of code. “That’s a popular approach; to squeeze the Cobol through a sausage machine and produce Java or some other more modern language,” but inefficiencies and needless duplication are likely to be preserved and it does not make the system any more flexible or suited to the future uses expected of it.
The next stage will be a “refactor and re-learn” process; the code and any documentation will be revisited to gain a definitive idea of which parts of the programs accomplish which tasks and how, and to shake out any duplicated or unnecessary features that have built up over time.
The system will then be recoded as a series of linked modules designed for greater agility. “We’ve traditionally been a build-our-own organisation but packages may play a part,” says Hughson. Some of the legacy code could well still be preserved, inside appropriate SOA wrappers or with EAI interfaces to other parts of the system, “but we haven’t got the legacy retention mapped out. First we want to get the business view of the system finalised.”
The new system will definitely have an SOA orientation, says Hughson, working with Sun’s JCAPS middleware.
The department has recently been doing proof-of-concept exercises with a number of vendors, to establish how their products can help the FIRST transformation. Some experimental code conversion has also been done.
Replacements at Fonterra
Giant dairy company Fonterra inherited a good deal of legacy code from the merger of its predecessor organisations. Most of this was replaced in the massive project named Jedi. But while this was being done, the staples of the company’s income had to stay intact. This meant the manufacturing system, based on Oracle’s GEMS, and the warehouse management system, were kept in their original form.
“We’re relatively fortunate because they’re quite separable applications and we don’t have a high user demand for change,” says CIO Greg James.
To minimise the risk profile of keeping the legacy applications alive, Fonterra upgrades regularly so that it always has the latest, fully-supported release.
“There are a lot of issues like that,” says James. “We had applications sitting on old [Windows] NT platforms and we’ve been porting them onto more recent Windows application servers or Sun servers.” But the company is considering future options for when that road peters out.
“We run a very formal business case around any replacement, evaluating our investment in it, using standard depreciation, and the cost of support and maintenance over its expected lifetime,” James says.
Over time, Fonterra has shrunk its portfolio of applications. “We used to have thousands and thousands of applications internationally; now we have a global footprint based on SAP.”
In the vendor community, few organisations would have had as much experience with issues around legacy systems as IT services company EDS. Anita Paul, EDS Asia-Pacific application modernisation services leader, is involved in projects for clients she calls “legacy transformation”.
She describes her work as being about taking a 40-year-old environment, unpicking it, finding the best way to use what you have already got and putting the best of the new on top of it. This should lead to the creation of the most effective IT environment that utilises all of a company’s assets and gives the best business value back to the users.
She points out that the inherent value comes from what is inside the legacy systems, rather than how they operate and what platforms they are sitting on.
Paul observes all organisations have different issues, so success comes down to governance and applying a rigorous analysis to the IT assets.
“One of the biggest issues I find is the integration clients have caused inside the legacy mainframes,” she says. “Often they have hard coded everything and things are so deeply coupled that it is very costly for them to break up. There are also cases where there are disparate applications. Then it is not so much a cost issue but a loss of data and business rules.”
A good starting point, according to Paul, is to gather facts and figures and check which legacy applications are truly delivering business value. Then you can start rating the case for change, weighing the applications that add business value against those that do not.
Managing the legacy portfolio
By Andrew Rowsell-Jones
“A legacy system is a hindrance that fills a business need, so you can’t just get rid of it,” says a contributor to a recent Gartner EXP research on legacy systems.
The sad truth is most of today’s applications will end up as tomorrow’s legacy systems. The good news is that not all legacy systems are equal in terms of the difficult choices they represent to the business. Systems with low risk and low value can easily be tolerated, or eliminated when they can’t be. High-risk, low-value systems too can easily be eliminated. Low-risk, high-value systems are what every CIO and business executive wants in the application portfolio. It’s the high-risk, high-value ones that are the problem.
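The four-way triage above amounts to a simple lookup on a two-by-two matrix. As a sketch — the action wording paraphrases this article, not a formal Gartner method:

```python
def triage(risk, value):
    """Map a (risk, value) assessment -- each 'low' or 'high' --
    to the portfolio action suggested in the text above."""
    actions = {
        ("low", "low"): "tolerate, or eliminate if it cannot be tolerated",
        ("high", "low"): "eliminate",
        ("low", "high"): "keep: the ideal portfolio member",
        ("high", "high"): "the problem case: plan migration or risk reduction",
    }
    return actions[(risk, value)]
```

The hard work, of course, is not the lookup but arriving at honest risk and value ratings for each application, which the following paragraphs address.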
To start the process of addressing your legacy systems, you have to know as much about them as possible. The most valuable tool to aid in this process is the IT asset management (ITAM) repository, which tracks and cross-references procurement, contract, inventory, maintenance, entitlement management and retirement information for the software and hardware that a company owns.
Once you have them listed, deduce today’s business risks: the likelihood and potential business impact of a specific application failure, from the inability to support business requirements to a catastrophic shutdown of the business process.
The other half of the story involves determining business value. The best way to do this is to take a process-by-process approach that looks at the business value delivered by each of a business’s key processes and then apportions this value to the application, or applications, that support it. Overlaying the value and risk of a given application ultimately helps show when and why the business must migrate from its legacy systems, and how much time remains to do so or reduce the risk to acceptable levels. But the business case can’t be completed until the CIO knows how to migrate a particular system.
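The process-by-process apportionment can be made concrete in a few lines. All the processes, weights, and risk ratings below are hypothetical, chosen only to show the mechanics:

```python
# Value of each key business process (hypothetical units)
process_value = {"order-to-cash": 10.0, "procure-to-pay": 6.0}

# Share of each process supported by each application (hypothetical)
support_share = {
    "order-to-cash": {"ERP": 0.7, "LegacyBilling": 0.3},
    "procure-to-pay": {"ERP": 1.0},
}

# Apportion each process's value to the applications supporting it
app_value = {}
for process, shares in support_share.items():
    for app, share in shares.items():
        app_value[app] = app_value.get(app, 0.0) + share * process_value[process]

# Overlay a risk rating to surface the systems needing a migration plan
risk = {"ERP": "low", "LegacyBilling": "high"}  # hypothetical assessments
needs_migration_plan = sorted(
    app for app, value in app_value.items()
    if risk[app] == "high" and value > 0
)
```

Here the overlay singles out the high-risk application that still carries real business value — exactly the quadrant the column identifies as the problem case.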
CIOs have multiple options for migration: from living with it, to using newer tools and technologies to add missing functionality while leaving the core in place, to a complete replacement, or some combination of all three.
The key goal is to avoid digging a new legacy-systems hole in an attempt to fix the old one. In other words, migration should enhance business value while reducing risk. To this end, aligning plans and designs with architecture standards helps to future-proof the new environment. Which brings us to the final step for the CIO: put in place mechanisms to reduce the chances that high-risk systems will return to the portfolio.
Stop risk creeping back in by following a few simple steps.
Conduct annual reviews of your application portfolio — and do them following business strategy discussions and architecture planning, and before budgeting and project planning. This allows the CIO to track changes in value and risk over time, build consensus for change gradually, and spot trends in value and risk early enough to avoid sudden surprises and disruptions to the business.
Look at total cost of ownership over the lifetime of a system, not just the up-front cost. The best time to discuss TCO is when the business case for a system is proposed. That’s also when executives should consider a timeline for acquisition, operation and eventual retirement of the system.
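A toy comparison, with entirely invented figures and a simplifying flat annual running cost, shows why lifetime TCO rather than up-front price should drive the decision:

```python
def lifetime_tco(upfront, annual_run_cost, years):
    """Total cost of ownership over a system's expected life.
    A flat annual run cost is an illustrative simplification;
    a real business case would also discount future cash flows."""
    return upfront + annual_run_cost * years


# The cheaper-to-buy option costs more over a seven-year life:
cheap_to_buy = lifetime_tco(100_000, 60_000, 7)   # 520,000
costly_to_buy = lifetime_tco(250_000, 25_000, 7)  # 425,000
```

Over a short horizon the ranking reverses, which is why the retirement timeline belongs in the same discussion as the purchase.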
Focus on high-value, high-risk systems, assessing the application portfolio regularly and migrating selectively to reduce risk and enhance value.
By looking forward via architecture and ongoing reinvestment, CIOs can significantly reduce the burden of legacy systems.
The payoff from this active management of legacy systems is twofold: A business that is less constrained by accidents and history, and a more satisfying role for the CIO and the IS team.
Andrew Rowsell-Jones is vice president and research director for Gartner’s CIO Executive Programs.