Supply links in ‘real time’

Major IT vendors are attempting to hang their futures on selling a new idea while the major research firms are helping the push. But is the hype justified?

You may have heard some vendors talk about e-business on demand, adaptive infrastructure or agile business. Gartner calls it “the real time enterprise”, Forrester dubs it “organic IT”. The vendors are treating this as the great hope for their collective salvation from the doldrums of the tech bust. The questions, though, are what this idea actually is and what it is useful for. It turns out to be a fairly slippery concept to define or communicate to customers. On first hearing a phrase like e-business on demand, you could be forgiven for thinking the vendor is re-badging the old timeshare computing idea from the 1960s.

There are two main ideas at work here: first, making your data centre itself more responsive; and second, using IT to make your organisation more responsive to the demands of its business environment.

Framework required

In examining an organisation’s need for some kind of implementation of these ideas, we need a theoretical framework on which to hang the different offerings. At root, there are two main reasons to buy an IT product. Firstly, almost all organisations need desktops, printers, an email connection, a file server and some networking to make systems talk to each other.

For present purposes, we will call this the IT infrastructure of the organisation: the basic hardware and software providing the non-strategic IT services to the organisation. To take a leaf from Nicholas G. Carr’s controversial Harvard Business Review article, ‘IT Doesn’t Matter’, your organisation is not going to get any long-term strategic benefit from doing this any better than your competitors. Essentially, all you can do is get the best service for the lowest price.

Secondly, there is the technology that can enable a fundamental change to the processes of the business and how the business comes to market. This is the possible strategic benefit of IT. The classic examples of this are Dell, Amazon and eBay. It is possible to be shortsighted and simply claim that without the internet these organisations would not exist, which is true but misses the point.

The key to Dell and Amazon.com’s success is not simply that they are on the internet but that they have used the information in their organisation to profile their market and to build a superb supply chain to supply those customers quickly and cheaply.

Carr’s mistake was to miss this split between strategic and non-strategic IT; on infrastructure, though, he was mostly right. An organisation’s IT infrastructure should run as if provided by a utility, because every company has the same basic information technology and gains no lasting advantage from it.

However, the ability to transport large amounts of information quickly, and to process it even faster (the fundamental advantage of IT), has the potential to reinvent a supply chain, reducing inventory and increasing responsiveness.

Trends in operations management

The easiest way to put this larger idea into context is by analogy with the trends in operations management: Kanban (the card-based ‘pull’ scheduling system pioneered by Toyota), vendor-managed inventory, enterprise resource planning (ERP), supply chain management, and so on. Around half of the new trends in managing operations over the last two decades or so have been focused on lowering inventory, for the following main reasons:

1. Holding inventory involves costs, both in the space and management it requires and in the capital tied up in the goods being stored. Often, inventory levels are high because the organisation does not understand its supply chain well enough to lower them. The inventory is being used as a buffer against the uncertainty of the supply chain, because the people managing it do not have the information or analytical tools to predict demand adequately. Certainly some inventory is needed as a buffer, but the more you understand about demand patterns, the less inventory is needed (see the sketch after this list).

2. Too much inventory decreases the responsiveness of the supply chain. If you’ve taken a punt and decided red sweaters will be hot this season, but your customers decide black jackets are the way to go, you have a large degree of inertia in your supply chain made up of the actual inventory, the configuration of plant and the organisation of your transport logistics. This must all be changed before you can start producing black jackets.
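
To make that first point concrete, here is a minimal sketch using the standard textbook safety-stock formula; none of it comes from the vendors discussed in this article, and the service level and demand figures are purely illustrative. The point it makes is that halving your uncertainty about demand halves the buffer stock you need.

```python
import math
from statistics import NormalDist

def safety_stock(demand_std_per_day: float, lead_time_days: float,
                 service_level: float = 0.95) -> float:
    """Classic safety-stock formula: z * sigma_demand * sqrt(lead time).

    The better you understand demand (the smaller demand_std_per_day),
    the less buffer stock you need to hit the same service level.
    """
    z = NormalDist().inv_cdf(service_level)   # about 1.645 for a 95% service level
    return z * demand_std_per_day * math.sqrt(lead_time_days)

# A retailer that halves its demand uncertainty halves its buffer stock.
print(safety_stock(demand_std_per_day=100, lead_time_days=9))  # about 493 units
print(safety_stock(demand_std_per_day=50, lead_time_days=9))   # about 247 units
```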

The way around this problem is to increase the communication between the various partners along the supply chain. If the suppliers at the far end of the chain know what the end-user is buying and have sufficient information, they can predict what will be needed and set about preparing for it before the downstream firms actually require it. Implemented across the whole chain, this drops the inventory required.
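
As a rough sketch of what that sharing means in practice (the three tiers, their names and the buffer percentage are assumptions for illustration, not any particular product’s model), the point is simply that every tier plans from the same end-customer sales data rather than waiting for orders to cascade up the chain.

```python
# Illustrative only: three tiers planning from the same shared
# point-of-sale data, rather than each tier waiting for orders to
# arrive from the tier below it. Names and figures are invented.
weekly_pos_sales = [120, 95, 140, 110]      # what end-customers actually bought

def plan_from_shared_demand(pos_sales, buffer_pct=0.10):
    """Each tier builds to the forecast of end demand plus a small buffer."""
    forecast = sum(pos_sales) / len(pos_sales)
    return forecast * (1 + buffer_pct)

for tier in ("retailer", "distributor", "component supplier"):
    print(f"{tier}: plan for about {plan_from_shared_demand(weekly_pos_sales):.0f} units/week")
```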

However, transforming your supply chain to be more responsive to the market is not something you can pick up and get running in a weekend. In fact, if it were that easy it wouldn’t be useful, because it wouldn’t be a long-term source of strategic advantage. The exact configuration of your supply chain is personal to your business. In most businesses this is the source of your strategic advantage.

Autonomic

HP, Sun and Microsoft are working on initiatives we will call autonomic utility infrastructure (Forrester’s “organic IT”). IBM, presumably looking for a way to make the most of its acquisition of PwC Consulting a year ago, is also selling the consultative service of designing an architecture that lets the enterprise respond to the market’s demands (Gartner’s “real time enterprise”).

So, what features would make an infrastructure truly autonomic? Firstly, it should be reliable: it should continue to provide services even when one or more of its elements is offline. Secondly, it should have a large administrative span of control; with current technology, talented and expensive IT staff repeat far too many mundane steps to manage a server, so re-tasking of elements should be automatic wherever possible. Thirdly, it should be efficient in its use of resources: the infrastructure should be able to apply all of the computing, storage and network resources available to it to the services required of it, and each addition of new equipment should give a marginal improvement in service quality. That is, if running the end-of-month financials requires more compute cycles than the financial application normally needs, the infrastructure should provide those cycles to the application automatically and seamlessly.
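
As a hedged illustration of that last point, the kind of control loop an autonomic infrastructure would have to automate might look roughly like the following. The thresholds, server names and monitoring function are invented for the sketch and are not any vendor’s API.

```python
import random
import time

# Hypothetical sketch (not any vendor's product): the re-tasking loop an
# autonomic infrastructure is meant to run instead of a human administrator.
TARGET_UTILISATION = 0.70            # keep the financials app around 70% busy
spare_pool = ["blade-07", "blade-08", "blade-09"]
assigned = []

def current_utilisation(app: str) -> float:
    """Stand-in for real monitoring; here it just returns a simulated reading."""
    return random.uniform(0.4, 1.0)

def rebalance(app: str = "financials") -> None:
    """One pass of the loop: add a box on a spike, return it when idle."""
    load = current_utilisation(app)
    if load > TARGET_UTILISATION + 0.15 and spare_pool:
        box = spare_pool.pop()
        assigned.append(box)
        print(f"{app} at {load:.0%}: re-tasking {box}")
    elif load < TARGET_UTILISATION - 0.15 and assigned:
        box = assigned.pop()
        spare_pool.append(box)
        print(f"{app} at {load:.0%}: releasing {box} back to the pool")

if __name__ == "__main__":
    for _ in range(5):               # a real controller would run continuously
        rebalance()
        time.sleep(1)
```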

The difficulties of enabling hardware and software in an infrastructure with the features mentioned above should not be underestimated. The vendors who try this face a significant challenge.

Firstly, there is the problem of stability. Most data centres are filled with single-purpose servers. This is mostly because of bitter experience with the behaviour of Windows when one application either goes catatonic or crashes completely. All too often, anything else running on that box is affected until the entire box is rebooted. An autonomic infrastructure needs to smoothly handle a failure in one software component, either by successfully isolating the crash or by migrating the still-running components to another box while rebooting the first one.
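
In miniature, and with component names and the simulated crash invented for the example, that pattern is essentially a process supervisor: run each component in its own process so a failure cannot take the others down, and restart whatever dies. The same idea, scaled up, is what an autonomic infrastructure has to do with whole servers.

```python
import multiprocessing as mp
import time

def component(name: str) -> None:
    """Stand-in for a real service; 'payments' misbehaves and dies."""
    if name == "payments":
        raise RuntimeError("simulated crash")
    time.sleep(30)

def supervise(names, passes: int = 5) -> None:
    # Each component gets its own process, so one crash is isolated.
    procs = {n: mp.Process(target=component, args=(n,)) for n in names}
    for p in procs.values():
        p.start()
    for _ in range(passes):          # a real supervisor would loop forever
        time.sleep(1)
        for name, proc in procs.items():
            if not proc.is_alive() and proc.exitcode != 0:
                print(f"{name} failed (exit {proc.exitcode}); restarting it")
                procs[name] = mp.Process(target=component, args=(name,))
                procs[name].start()
    for proc in procs.values():      # tidy up the demo processes
        proc.terminate()

if __name__ == "__main__":
    supervise(["web", "payments", "reporting"])
```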

Then there is the problem of the marginal use of compute cycles. Ideally, you want a system where a running application can be split into many parallel parts, each of which can run on any available processor and be migrated to another processor at will. If this can be achieved, each component part of every application can be assigned to any processor in the data centre in order to achieve the desired service levels.
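
When a job genuinely can be split into independent pieces, the idea is straightforward. A minimal sketch (the account-revaluation task is a made-up placeholder, not part of any product mentioned here) simply farms the pieces out to whatever processors happen to be free.

```python
from concurrent.futures import ProcessPoolExecutor

def revalue_account(account_id: int) -> float:
    """Stand-in for one independent slice of a month-end run."""
    return account_id * 1.07         # pretend revaluation

if __name__ == "__main__":
    accounts = range(1, 1001)
    # The pool uses every core it is given; any spare processor will do.
    with ProcessPoolExecutor() as pool:
        totals = list(pool.map(revalue_account, accounts))
    print(f"revalued {len(totals)} accounts")
```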

Unfortunately, anybody who has done any significant multi-threaded programming knows how difficult this is to achieve in practice.

While it sounds as though any compute cycle from any processor is good enough for a program, the problem comes when two different parts want access to the same data or some other shared resource. Unless the programmer is very careful in such circumstances, and experienced enough to know the pitfalls, the result is either corrupted data or an application that grinds to a halt.
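
A toy example shows the pitfall: two threads updating one shared total (the order counts are arbitrary). The lock below is what keeps the result correct; take it out and the threads’ read-modify-write steps can interleave so that updates are silently lost, which is exactly the kind of corruption described above. The lock also serialises access, costing some of the very parallelism the scheme was meant to exploit.

```python
import threading

total = 0
lock = threading.Lock()

def add_orders(n: int) -> None:
    """Simulate one worker recording n orders against a shared total."""
    global total
    for _ in range(n):
        with lock:                   # remove this and updates can be lost
            total += 1

threads = [threading.Thread(target=add_orders, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(total)                         # 200000 with the lock; often less without it
```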

Lastly, be careful which vendor’s technology you choose when implementing an autonomic infrastructure. The most important aim should be interoperability. Sun is going to try to sell you big SPARC boxes or lots of Linux boxes; it comes as no surprise that the first product in Sun’s N1 line is one for the fast provisioning of blade servers running Solaris on SPARC. Microsoft will try to tie everything it sells you into .Net.

Ideally, you want a provider that uses open protocols for any communication, so you can use other products at different parts of the architecture should you want to. This becomes important when it is time for greater integration.

Edward Sargisson is an IT consultant. He graduated from the University of Auckland with a conjoint BSc/BCom.

