If I had to summarise my assessment of the recent CloudConnect conference, it would be this: Attention regarding cloud computing is rapidly moving toward the pragmatics of using it and away from the theories of studying it.
Stories by Bernard Golden
One of the most interesting things my consulting company encounters when working with clients is their reaction to the infrastructure architectures of cloud providers. When we explain that these providers achieve robustness by keeping multiple copies of data on commodity hardware, rather than following the traditional model of investing in expensive hardware to improve device robustness, we observe a visceral shudder.
This week I saw two articles that captured the two visions of IT that will dominate the future. Both were interviews with senior IT leaders, one a CIO of a major technology company, the other a senior executive with a leading system integrator. One article depicted a vision of IT as a future of standardized, commodity offerings, while the other portrayed IT as a critical part of every company's business offerings. Two visions of IT's role in stark contrast to one another. Each seems to obviate the other. But is that really true? My take is that both views are true, and the CIO of the future has to push one aside to make room to achieve the other.
It's been an incredibly interesting, exciting, and tumultuous year for cloud computing. But, as the saying goes, "you ain't seen nothin' yet." Next year will be one in which the pedal hits the metal, resulting in enormous acceleration for cloud computing. One way to look at it is that next year will see the coming to fruition of a number of trends and initiatives that were launched this year.
If you want to understand the key driver of the cloud computing revolution, you owe it to yourself to read Microsoft's new white paper The Economics of the Cloud. In it, authors Rolf Harms and Michael Yamartino lay out an analysis of the economics that underlie cloud computing, and demonstrate in a convincing fashion why the shift to this new technology platform is inevitable. A copy of the paper is available as a download from the blog posting by the authors.
After a brief introduction, the authors lay out a central thesis: despite initial concerns about shortcomings in new technology offerings, "historically, underlying economics have a much stronger impact on the direction and speed of disruptions, as technological challenges are resolved or overcome through the rapid innovation we've grown accustomed to."
I continue to encounter an interesting phenomenon regarding cloud computing as I speak at conferences, present to IT groups, and talk to businesspeople interested in the subject. Most people recognize the importance of cloud computing, acknowledge the relevance to their environments, and describe their initiatives.
I saw a fascinating interview on Forbes.com last week that implies the death of IT as we know it. In it, Michael Chui, senior fellow at the McKinsey Global Institute, described a trend that his firm views as the way IT will be done in the future.
I came across a link to a new report from IDC called the 2010 Digital Universe Study. The report echoes what we've been telling our clients for the past year: the projections of the past few years about the growth of data significantly underestimate how much data is going to be created.
Some highlights of the report:
I was talking to a colleague who works for a large technology vendor. His company offers products to enable IT organisations to construct cloud infrastructures inside their own data centres - to turn existing stable, static computing environments into ones that support scalability, agility, and dynamic applications. The company's progress on its products has been impressive, early implementations successful, and interest from their customer base (infrastructure groups within large IT organisations) high. However, he shared an apprehension with me regarding product adoption. "I'm concerned that while our customers are working on a very deliberate plan that will take a couple of years - doing their research, performing a pilot, evaluating the economics, making the capital investment business case - that the apps side of the house will just charge ahead using on-demand public cloud providers like Amazon." While he was worried about this trend from the point of view of how it will affect the prospects for his company's products, my mind moved toward a different outcome: the boomerang.
With regard to the issue he's worried about, my sense is that his concern is quite valid. Many software engineers have moved to cloud environments for development due to immediate resource availability and low cost. It's widespread. I noted in a blog post a few months ago my amusement regarding one large software vendor's senior executive's rant. He and I were both on a cloud computing panel and in his remarks he railed against developers using Amazon, citing intellectual property concerns. After the panel was over, as the participants were chit-chatting, he said that he found it frustrating because developers in his own company were using Amazon quite widely, despite being warned against it, because it was so much easier than getting computing resources through the official channels. The phrase "hoisted on one's own petard" sprang to my mind.
Every revolution results in winners and losers - after the dust settles. During the revolution, chaos occurs as people attempt to discern if this is the real thing or just a minor rebellion. All parties put forward their positions, attempting to convince onlookers that theirs is the path forward. Meanwhile, established practices and institutions are disrupted and even overturned - perhaps temporarily or maybe permanently. Eventually, the results shake out and it becomes clear which viewpoint prevails and becomes the new established practice - and in its turn becomes the incumbent, ripe for disruption.
This is true in technology as in every other domain. In the tech business, we often get caught up in focusing on vendor winners and losers. Point to the client/server revolution, and it's obvious - Microsoft and Intel. Over on the loser side stand the minicomputer vendors. This winner/loser phenomenon can be seen in every significant technology shift (and indeed, one shift's winner can become a future loser). This is understandable: we all love conflict and the vendor wars make for great press.
You've probably seen a hundred - or even a thousand - articles criticizing cloud computing Service Level Agreements (SLAs). A common example in those articles is the putatively low Amazon Web Services SLA. Typically, authors of these kinds of articles go on to cite recent outages by cloud providers, implying (or stating directly) that cloud computing falls woefully short of the true SLA requirements of enterprises, often described as "five nines," i.e., 99.999 per cent availability.
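To put "five nines" in concrete terms, here is a minimal sketch (my own illustration, not drawn from any of the articles) that converts an availability percentage into the downtime it permits per year, assuming a 365-day year:

```python
# Convert an availability SLA ("number of nines") into the maximum
# downtime it allows per year. Assumes a 365-day year.

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Return the maximum downtime per year, in minutes, for a given availability percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability -> {allowed_downtime_minutes(pct):.1f} minutes of downtime per year")
```

The arithmetic makes the stakes clear: 99.999 per cent availability permits only about five minutes of downtime per year, while 99.9 per cent permits roughly eight and three-quarter hours.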
If you've been reading this series, you now have a better understanding of the much-discussed term "private cloud." In the previous two parts of this series, I described the features and service capabilities of private clouds.
McKinsey, the doyen of strategy consultants, published a report on cloud computing last week featuring a disguised real-world case study. While the report doesn't explicitly say so, it appears to be a summary of the results of a strategy project with a financial services firm, which apparently engaged McKinsey to assess whether it would make sense to move all of its systems to Amazon Web Services.
Nick Carr was right and I was wrong. Sort of, anyway.
Serendipitously, both IBM and HP held events this week to describe their cloud computing initiatives. Their presentations offered insight into what the two companies are doing and provided some food for thought for IT organizations assessing what cloud computing means to their future - as well as some information that might give pause.
IBM announced a number of separate things and gave an actual demo of cloud capability. First, it has created a Cloud Computing division that reports directly to IBM's head - equivalent to the software, services, and hardware divisions. Second, it announced a raft of cloud computing offerings, including services relating to cloud strategy, transitioning current data centers to cloud-enabled data centers, and IBM facilities to enable testing of cloud solutions. Third, it announced a capability that allows IT organizations to migrate workloads from internal cloud data centers to external clouds. The demo showed how an application comprising multiple systems could have some of those systems live-migrated to an external cloud. This capability is delivered through Tivoli, which manages all the systems. Juniper participated in the event, with its networking underlying the application migration: MPLS-based connectivity provided high-bandwidth communication between the internal data center and the public cloud data center.