The case against cloud computing

There are five key impediments to enterprise adoption of cloud computing. With a risk management framework in place, appropriate decisions can be made - and justified.

I've had a series of interesting conversations with people involved in cloud computing who, paradoxically, maintain that cloud computing is, at least today, inappropriate for enterprises. I say paradoxically because each of them works for or represents a large technology company's cloud computing efforts, and one would think their role would motivate them to strongly advocate cloud adoption. So why the tepid enthusiasm? For a couple of them, cloud computing functionality is simply not ready for prime-time use by enterprises. For others, cloud computing is too ambiguous a term for enterprises to really understand what it means. For yet others, cloud computing doesn't, and may never, offer the functional factors that enterprise IT requires. While I think their observations are trenchant, I'm not convinced that they represent immutable problems that cannot be addressed.

I thought it would be worthwhile to summarise the discussions and examine each putative shortcoming. I've distilled their reservations and present them here, along with my commentary on each issue, offering an interpretation that perhaps sheds a less dramatic light on it and identifying ways to mitigate it.

There are five key impediments to enterprise adoption of cloud computing, according to my conversations. I will discuss each in a separate posting for reasons of length. The five key impediments are:

  • Current enterprise apps can't be migrated conveniently
  • Risk: Legal, regulatory, and business
  • Difficulty of managing cloud applications
  • Lack of SLA
  • Lack of cost advantage for cloud computing

Current enterprise apps can't be migrated conveniently. Each of the major cloud providers (Amazon Web Services, Salesforce.com's Force.com, Google App Engine, and Microsoft Azure) imposes an architecture dissimilar to the common architectures of enterprise apps.

Amazon Web Services offers the most flexibility in this regard because it provisions an "empty" image that you can put anything into; nevertheless, its idiosyncratic storage framework means applications cannot be easily migrated.

Force.com is a development platform tied to a proprietary architecture deeply integrated with Salesforce.com and unlike anything in a regular enterprise application. Google App Engine is a Python-based set of application services: fine if your application is written in Python and tuned to the Google application services, but enterprise applications, even those written in Python, are not architected for this framework. Azure is a .NET-based architecture that offers services based on the existing Microsoft development framework, but it doesn't offer a conventional SQL RDBMS for storage, thereby requiring a different application architecture and making it difficult to migrate existing enterprise applications to the environment.

According to one person I spoke with, migrating applications out of internal data centers and into the cloud is the key interest driver for clouds among enterprises; once they find out how difficult it is to move an application to an external cloud, their enthusiasm dwindles.

I would say that this is certainly a challenge for enterprises: if it were easy to move applications into cloud environments, quick uptake would certainly be aided. And the motivation for some of the cloud providers to deliver their offerings in the way they do is difficult to understand. Google's commitment to Python is a bit odd, since Python is by no means the most popular scripting language around. Google sometimes seems to decide something is technically superior and then insist on it, despite evidence that it retards adoption. With regard to Salesforce, I can certainly understand someone with a commitment to the company's main offering deciding to leverage the Force.com architecture to create add-ons, but it's unlikely that an existing app could be moved to Force.com with any reasonable level of effort; certainly questions about proprietary lock-in would be present for any enterprise that might entertain writing a fresh app for the platform. It's quite surprising that Microsoft would not make it easy for users to deploy the same application locally or in Azure; while the Azure architecture enables many sophisticated applications, the inability to migrate easily will dissuade many Microsoft users from exploring Azure.

On the other hand, a different architecture than the now-accepted enterprise application architecture (leaving aside that current enterprise architectures are by no means fastened upon one alternative, so it's not as though the choice were between one universally adopted enterprise architecture and a set of dissimilar ones) doesn't necessarily mean that it is deficient or even too difficult to migrate an application to. It might be more appropriate to say that there is a degree of friction in migrating an existing application; that degree varies according to which target cloud offering one desires to migrate to.

Certainly it seems well within technical capability for someone to develop a P2C (physical to cloud) migration tool that could automate all or much of the technical effort necessary for migration; of course, this tool would need to be able to translate to several different cloud architectures.

Even if an automated tool does not become available, there is the potential for service providers to spring up to perform migration services efficiently and inexpensively.

Naturally, performing this migration would not be free; either software must be purchased or services paid for. The point is that this is not an insurmountable problem. It is a well-bounded one.
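To make the idea concrete: no such P2C tool is described in the discussion above, but the assessment pass of a hypothetical one might compare an application's dependencies against per-target constraints. The platform rules and application details below are illustrative assumptions only, not real product behavior.

```python
# Hypothetical per-target constraints a P2C assessment pass might check.
# These rules are simplified assumptions for illustration.
CLOUD_CONSTRAINTS = {
    "app_engine": {"languages": {"python"}, "sql_rdbms": False},
    "azure":      {"languages": {"csharp", "vb.net"}, "sql_rdbms": False},
    "aws_ec2":    {"languages": None, "sql_rdbms": True},  # None = any language
}

def migration_blockers(app, target):
    """Return reasons the app can't move to the target cloud as-is."""
    rules = CLOUD_CONSTRAINTS[target]
    blockers = []
    if rules["languages"] is not None and app["language"] not in rules["languages"]:
        blockers.append(f"language {app['language']} unsupported on {target}")
    if app["needs_sql_rdbms"] and not rules["sql_rdbms"]:
        blockers.append(f"{target} offers no conventional SQL RDBMS storage")
    return blockers

# A made-up enterprise app: Java with a relational database dependency.
erp = {"language": "java", "needs_sql_rdbms": True}
print(migration_blockers(erp, "app_engine"))  # two blockers
print(migration_blockers(erp, "aws_ec2"))     # none: empty image, flexible
```

Even this toy version shows why the friction varies by target: the same application that is blocked twice over on one platform moves with little architectural change to another.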

The more likely challenge regarding clouds imposing a different architecture is that of employee skills. Getting technical personnel up to speed on the requirements of cloud computing with respect to architecture, implementation, and operation is difficult: it is a fact that human capital is the most difficult kind to upgrade. However, cloud computing represents a new computing platform, and IT organizations have lived through platform transitions a number of times in the past. In these times of Windows developers being a dime a dozen, it's easy to forget that at one time, Windows NT skills were as difficult to locate as a needle in a haystack.

On balance, the lack of a convenient migration path for existing applications is going to hinder cloud computing adoption, but doesn't represent a permanent barrier.

Cloud Computing Imposes Legal, Regulatory, and Business Risk

Most companies operate under risk constraints. For example, US publicly traded companies have SOX disclosure legal requirements regarding their financial statements. Depending upon the industry a company is in, there may be industry-specific laws and regulations. In healthcare, there are HIPAA constraints regarding privacy of data. There are other, more general requirements for data handling that require ability to track changes, establish audit trails of changes, etc., particularly in litigation circumstances. In other nations, customer data must be handled very carefully due to national privacy requirements. For example, certain European nations mandate that information must be kept within the borders of the nation; it is not acceptable to store it in another location, whether paper- or data-stored.

Turning to business risk, the issues are more related to operational control and certainty of policy adherence. Some companies would be very reluctant to have their ongoing operations out of their direct control, so they may insist on running their applications on their own servers located within their own data center (this issue is not cloud-specific-it is often raised regarding SaaS as well as more general cloud computing services).

Beyond specific laws, regulations, and policies, the people I spoke with described an overall risk question that they asserted enterprises would raise: the risk associated with the cloud provider itself. Some noted that Amazon's cloud offering isn't its core business. Interestingly, however, they described Amazon's core business as "selling books." I think Amazon's business efforts extend well beyond books, and this response may indicate an unfamiliarity with the total range of Amazon's offerings; nevertheless, the question of Amazon's core competence and focus on computing is valid, and might be even more of an issue if the company is spread across many initiatives.

For the other cloud providers, which are probably considered more "traditional" technology companies, this issue of core competence and focus probably isn't a direct concern. It's still a concern, though, since one might discern that the cloud offering each provides is not its main business focus; therefore, the company might, in some future circumstance, decide that its cloud offering is a distraction or a financial drag and discontinue the service. Google's recent shuttering of several of its services gives credence to this type of concern.

So, all in all, there are a number of risk-related concerns that enterprises might have regarding their use of cloud computing, ranging from specific issues imposed by law or regulations to general operational risk imposed in dependency upon an outside provider.

However, many of the people who proffer these concerns do so eagerly and, to my mind, too broadly. Let me explain.

First, many of the legal and regulatory risks assigned to cloud providers are understood by those providers. They recognize that they will need to address them in order to attract mainstream business users. However, in order to get started and build experience and momentum, they have not initially focused on the most demanding functionality and processes; Amazon, for example, has primarily targeted startups and non-critical corporate apps.

To my mind, this is a smart strategy. One has only to look at SAP's protracted effort to deliver an on-demand service with equivalent features to its packaged offering to understand how attempting to meet demanding capability right out of the chute can seriously retard any progress. However, I am confident that cloud providers will continue to extend their capabilities in order to address these risk aspects.

Moreover, many people who discuss this type of risk characterize it as something that can only be addressed by internal data centers, i.e., the very nature of cloud computing precludes its ability to address risk characteristics. I spoke to a colleague, John Weathington, whose company, Excellent Management Systems, implements compliance systems to manage risk, and he questioned the notion that clouds are inherently unable to fit into a compliance framework, citing compliance as being a mix of policy, process, and technology. To his way of thinking, asserting that risk management cannot be aligned with cloud computing indicates a limited understanding of compliance management.

A second factor that too broadly characterizes cloud computing as too risky is an over-optimistic view of current risk management practices. In discussing this with John, he shared some examples where companies do not manage compliance properly (or, really, at all) in their internal IT systems. The old saw about people, glass houses, and stones seems applicable here. In a way, this attitude reflects a common human condition: underestimating the risks associated with current conditions while overestimating the risks of something new. However, criticizing cloud computing as incapable of supporting risk management while overlooking current risk management shortcomings doesn't really help, and can make the person criticizing look reactive rather than reflective.

Associated with this second factor, but different, is a third factor: the easy but damaging approach of treating all risks like the very worst scenario. In other words, identifying some data requirement that clearly demands onsite storage with heavy controls, and concluding from it that cloud computing is too risky for every system. Pointing out that some situations or data management requirements cannot be met by cloud computing poses the danger that the cloud will be rejected for all systems and scenarios. You may disbelieve that this kind of overly broad assessment goes on, but I have heard people drop phrases like "what about HIPAA?" into a conversation and then turn contentedly to other topics, confident that the issue has been disposed of.

Some of this reflexive risk assertion is understandable, though. The lack of enthusiasm on the part of many IT organizations to embrace external clouds due to the putative risk might be attributed to the risk asymmetry they face. That is to say, they can get into a lot of trouble if something goes wrong with data, but there isn't much upside to implementing a risk assessment process and reducing costs by leveraging outside cloud resources. One might say IT organizations are paid to be the worrywarts regarding data security, which isn't really much fun, but it would affect their perspective on risk and could motivate them to be very conservative on this subject.

However, given the very real pressures to examine cloud computing for reasons of IT agility and overall cost examination, resisting it by a bland contention that "cloud computing is too risky; after all, what about X?" where X is some law or regulation the organization operates under is probably not a good strategy.

So what should you do to address the issue of risk management in cloud computing?

First, understand what your risk and compliance requirements really are and how you address them today in internal systems. Nothing looks worse than asserting that cloud computing isn't appropriate because of risk, being asked "how do we handle that today?", and not having a solid answer.

Second (assuming you haven't done so already), implement a risk assessment mechanism to define levels of risk and make it part of the system development lifecycle. Without it, it's impossible to evaluate whether a given system is a good candidate for operating in the cloud.

Third, assess your potential cloud hosting providers for their risk management practices. With this in hand, projects can have their risk assessments mapped against a provider's practices, and a decision can be reached about whether cloud hosting is appropriate for a given system.
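The mapping in that third step can be sketched in a few lines. The requirement names and the provider capability profile below are invented for illustration; a real assessment would draw on audited controls, not a hand-written set.

```python
# Illustrative sketch only: requirement names and the provider profile
# are assumptions, not an audit of any real cloud provider.

def cloud_suitable(system_requirements, provider_capabilities):
    """A system is a cloud candidate only if the provider covers every
    compliance requirement the risk assessment assigned to it."""
    unmet = system_requirements - provider_capabilities
    return len(unmet) == 0, unmet

# What the (hypothetical) provider's risk assessment showed it can do.
provider = {"encrypted_storage", "audit_trail", "access_logging"}

# Each system's requirements, as produced by the internal risk assessment.
portfolio = {
    "marketing_site":  {"access_logging"},
    "patient_records": {"encrypted_storage", "audit_trail",
                        "data_residency_nz", "hipaa_baa"},
}

for name, reqs in portfolio.items():
    ok, unmet = cloud_suitable(reqs, provider)
    verdict = "cloud candidate" if ok else f"keep internal: {sorted(unmet)}"
    print(f"{name} -> {verdict}")
```

The point of such an exercise is that the answer comes out per system, not as a blanket "the cloud is too risky": the marketing site clears the bar even while the patient records system stays in-house.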

The cloud hosting risk assessment should be treated as a dynamic target, not a static situation. The entire field is developing quite rapidly, and today's evaluation will probably not be accurate six months hence.

Pressure is going to be applied to IT organizations over the next twelve months regarding costs and, particularly, whether cloud computing is being considered as a deployment option. With a risk management framework in place, appropriate decisions can be made - and justified.

SLA: MIA

One of the most common concerns regarding cloud computing is the potential for downtime: time the system isn't available for use. This is a critical issue for line-of-business apps, since every minute of downtime is a minute that some important business function can't be performed. Key business apps include taking orders, interacting with customers, managing work processes, and so on. Certainly ERP systems would fall into this category, as would vertical applications for many industries; for example, computerized machining software for a manufacturing firm, or software monitoring sensors in industries like oil and gas, power plants, and so on.

Faced with the importance of application availability, many respond to the potential use of cloud-based applications with caution or even horror. This concern is further exacerbated by the fact that some cloud providers don't offer SLAs (service-level agreements) and some offer inadequate ones in terms of guaranteed uptime.

Underlying all of these expressed concerns is the suspicion that one cannot really trust cloud providers to keep systems up and running; one might almost call it a limbic apprehension at depending upon an outside organization for key business continuity. And, to be fair, cloud providers have suffered outages. Salesforce endured several in recent years, and Amazon also has had one or two not so long ago.

Put this way, it's understandable that organizations might describe the concern regarding this all-important meeting of critical business systems with cloud provider reliability as an SLA issue.

Is that the best way to comprehend the issue, or even to characterize it, though?

If one looks at the use of SLAs in other contexts, they are sometimes part of commitments within companies: when, say, the marketing department has IT implement a new system, IT guarantees a certain level of availability. More commonly, though, SLAs are part of outsourcing agreements, where a company selects an external provider like EDS to operate its IT systems.

And certainly, there's lots of attention on SLAs in that arena. A Google search on "outsource SLA" turns up pages of "best practices," institutes ready to assist in drafting contracts containing SLAs, and advice articles on the key principles of SLAs: a panoply of assistance in creating air-tight SLA requirements. A Google search for "outsource SLA success," unfortunately, turns up nary a link. So one might conclude that an SLA doesn't necessarily help in obtaining high-quality uptime, but instead provides the basis for conflict negotiation when things don't go well: something like a pre-nuptial agreement.

So if the purpose of an SLA is more to provide after-the-fact conflict resolution guidelines, the implication is that many of the situations "covered" by SLAs don't go very well; in other words, after all the best-practices seminars, all the narrow-eyed negotiation (by the way, doesn't it seem incredibly wasteful that these things are negotiated on a one-off basis for every contract?), and all the electrons sacrificed in articles about SLAs, they don't accomplish that much regarding the fundamental requirement: system availability. Why could that be?

First, the obvious problem I've just alluded to: the presence of an SLA doesn't necessarily change actual operations; it just provides a vehicle to argue over. The point is system uptime, not having a contract point to allow lawyers to fulfill their destiny.

Second, SLAs, in the end, don't typically protect organizations from what they're designed to: losses from system downtime. SLA remedies are usually limited to the cost of the hosting service itself, not the opportunity cost of the outage (i.e., the amount of money the user company lost or didn't make). So besides being ineffective, SLAs don't really have teeth when it comes to financial penalties for the provider. I'll admit that for internal SLAs the penalty might be job loss for the responsible managers, which is emotionally significant, but the SLA definitely doesn't make the damaged party whole. After all, having the IT department pay the marketing department is just transferring money from one pocket to another.
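To put rough numbers on that asymmetry: every figure below is assumed for illustration, but the shape of the result holds for any typical SLA that refunds only the prorated hosting fee.

```python
# Worked example with assumed numbers: a typical SLA credit refunds
# only the prorated hosting fee, not the business loss from the outage.

monthly_fee = 1_500.00           # assumed hosting cost (USD/month)
outage_hours = 8.0               # one bad day
hours_per_month = 730.0          # average hours in a month

revenue_lost_per_hour = 5_000.00 # assumed opportunity cost of downtime

sla_credit = monthly_fee * (outage_hours / hours_per_month)
business_loss = revenue_lost_per_hour * outage_hours

print(f"SLA credit:    ${sla_credit:,.2f}")    # $16.44
print(f"Business loss: ${business_loss:,.2f}")  # $40,000.00
```

A roughly $16 credit against a $40,000 loss is why an SLA, whatever its negotiated uptime number, is not a financial hedge against downtime.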

Finally, the presence of an SLA incents the providing organization to behavior that meets the letter of the agreement, but may not meet the real needs of the user; moreover, the harder the negotiating went, the more likely the provider is to "work to rule," meaning fulfill the bare requirements of the agreement rather than solving the problem. There's nothing more irritating than coming to an outside service provider with a real need and having it dismissed as outside the scope of the agreement. Grrrr!

Given these (if not shortcomings, then challenges, shall we say) of SLAs, does their absence or questionable quality among cloud computing providers mean nothing?

No.

However, one should keep the service levels of cloud computing in perspective, with or without an SLA in place.

Remember, the objective is service availability, not a contractual commitment that is only loosely tied to the objective. So here are some recommendations:

First, look for an SLA, but remember it's a target to be aimed for, not an ultimatum that magically makes everything work. And keep in mind that cloud providers, just like all outsourcers, write their SLAs to minimize their financial exposure by limiting payment to the cost of the lost service, not the financial effect of the lost service.

Second, use an appropriate comparison yardstick. The issue isn't what cloud providers will put in writing; it's how a cloud provider stacks up against the available alternatives. If you're using an outsourcer that consistently fails to meet its uptime commitments, surely it makes sense to try something new? And if the comparison is the external cloud provider versus your internal IT group, the same evaluation makes sense.

Third, remember that the quality of internal uptime is directly related to the sophistication of the IT organization. While large organizations can afford significant IT staffs and sophisticated data centers, much of the world limps by with underfunded data centers, poor automation, and shorthanded operations staffs. They run from emergency to emergency, and uptime is haphazard. For these kinds of organizations, a cloud provider may represent a significant improvement in quality of service.

Fourth, even if you're satisfied with the quality of your current uptime, examine what it costs you to achieve it. If you're using lots of manual intervention, people on call, and staffing around the clock, you may be meeting uptime requirements very expensively. A comparison of uptime and cost between the cloud and internal efforts (or outsourced services) may be instructive. I spoke to a fellow from Google operations who noted that at the scale it operates, manual management is unthinkable; nothing goes into production until it's been fully automated. If you're getting uptime the old-fashioned way, with plenty of elbow grease, it may be far better, economically speaking, to consider using the cloud.

Fifth, and a corollary to the last point, even if there are some apps that absolutely, positively have to be managed locally due to stringent uptime requirements, recognize that this does not cover the entirety of your application portfolio. Many applications do not impose such strict uptime requirements; managing them in the same management framework and carrying the same operations costs as the mission-critical apps is financially irresponsible. Examine your application portfolio, both current and future, and sort them according to hard uptime requirements. Evaluate whether some could be migrated to a lower-cost environment whose likely uptime capability will be acceptable-and then track your experience with those apps to get a feel for real-world outcomes regarding cloud uptime. That will give you the data to determine whether more critical apps can be moved to the cloud as well.
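The sorting exercise in the fifth recommendation can be sketched numerically. The applications, required availability figures, and the cloud-candidacy threshold below are all made up for illustration; the useful part is the translation from "nines" to allowed downtime.

```python
# Sketch of sorting a portfolio by hard uptime requirements.
# Apps, availability targets, and the threshold are assumed values.

HOURS_PER_YEAR = 8760

apps = [
    ("order_entry",   99.99),  # required availability, percent
    ("internal_wiki", 99.0),
    ("hr_portal",     99.5),
]

# Assumed policy: apps that can tolerate more than an hour of downtime
# a month (~99.86% availability) are candidates for a lower-cost tier.
CLOUD_THRESHOLD = 99.86

for name, required in sorted(apps, key=lambda a: a[1]):
    allowed_downtime = HOURS_PER_YEAR * (1 - required / 100)
    tier = "cloud candidate" if required <= CLOUD_THRESHOLD else "manage locally"
    print(f"{name}: {allowed_downtime:.1f} h/yr allowed -> {tier}")
```

Run against these assumed numbers, the wiki and HR portal sort into the lower-cost tier while the order-entry system stays under local management; tracking the cloud tier's actual uptime then supplies the data for revisiting that split.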

In a sense, the last recommendation is similar to one in the "Risk" posting in this series: evaluate your application portfolio according to its risk profile to identify those applications which can safely be migrated to a cloud infrastructure. This uptime assessment is another evaluation criterion to be applied in an assessment framework.

So "cloud SLA" is not an oxymoron; neither is it a reason to avoid experimenting and evaluating how cloud computing can help you perform IT operations more effectively.

Bernard Golden is CEO of consulting firm HyperStratus, which specialises in virtualisation, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualisation to date.
