We know by now that cloud computing offers significant benefits, but is it ready for mission critical applications? After all, failure of these applications can result in the failure of the business.
According to a recent survey, concerns about security and availability are the barriers stopping organizations from moving mission-critical apps to the cloud. Only about 30 to 40 percent of large enterprise applications are in the cloud, most of which are for collaboration, conferencing, email and sales force automation.
If the concerns can be allayed, there are benefits the cloud can offer for critical applications, the most important being scalability and reliability. In many cases cloud service providers can provide faster access to additional computing resources and have implemented a higher degree of infrastructure redundancy than individual enterprises have. These same providers can also elastically scale computing resources much more efficiently and cost-effectively, as resources can be shared across their customers.
Service providers have taken note of industry concerns and started to offer services built for these critical applications with higher Service Level Agreements. A perfect example is The Bunker, a UK-based cloud services provider, which provides ultra-secure hosting and cloud services for financial services organizations, technology companies, local government and other regulated businesses that place a premium on secure and resilient IT services. In fact, the company's systems are housed in state-of-the-art, nuclear bomb-proof data centers. This is serious security for their government, financial services and other business customers.
However, cloud is not a one-size-fits-all solution. Some applications are a good fit for the public cloud, others are better suited for a private cloud, and some applications shouldn't or can't move to the cloud at all. Additionally, cloud services vary greatly from one provider to another, with different SLAs and capabilities.
When evaluating whether a particular application is suitable for a cloud, some of the factors to consider are redundancy, the application's ability to migrate, performance, security and cost:
What level of system and network redundancy is needed to ensure 24x7 application availability, and at what cost? While the highest level of availability could be achieved by duplicating all systems, storage and networking infrastructure, this is an extremely costly scenario that isn't realistic for most enterprises. Instead, an N+1 approach, where individual assets can serve as the backup for multiple systems, can provide the availability level required at a much lower cost.
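To see why N+1 can close most of the availability gap at a fraction of the cost of full duplication, consider a rough back-of-the-envelope calculation. This sketch assumes, purely for illustration, that node failures are independent and each node is available 99.9% of the time; real infrastructure rarely behaves this neatly.

```python
# Rough availability arithmetic for an N-node cluster.
# Illustrative assumptions only: independent failures, 99.9% per-node availability.
from math import comb

def availability(n_total, n_needed, node_avail=0.999):
    """Probability that at least n_needed of n_total nodes are up."""
    p = node_avail
    return sum(
        comb(n_total, k) * p**k * (1 - p)**(n_total - k)
        for k in range(n_needed, n_total + 1)
    )

# Four nodes of capacity needed, no spare: all four must be up.
no_spare = availability(4, 4)
# N+1: five nodes deployed, any four suffice.
n_plus_1 = availability(5, 4)
# Full 2N duplication: eight nodes, any four suffice (simplified).
full_dup = availability(8, 4)

print(f"no spare: {no_spare:.6f}")
print(f"N+1:      {n_plus_1:.6f}")
print(f"2N:       {full_dup:.6f}")
```

Under these toy assumptions, a single spare already lifts availability from roughly 99.6% to beyond "four nines", while doubling every asset adds little further, which is the intuition behind choosing N+1 over wholesale duplication.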
If considering a public cloud provider, have they proven they can provide the SLA the app needs? Should you have an alternative provider that can take over immediately in case of a failure or do you need to have a backup system in place internally? What does that require in terms of duplicating data across two sites?
Is the application cloud-ready? Many legacy applications weren't written with portability or virtualization in mind, and may be tied to very specific environments that can't be duplicated in the cloud. For example, they may run on older operating systems, require out of date drivers, use legacy databases or lack proper security. Other apps, like database or performance-intensive applications, may be best run on bare-metal. These include many financial services trading applications, where revenue is directly tied to how fast a transaction can be completed. Furthermore, regulations and industry practices may mean that public clouds are not yet a good choice for a subset of applications.
Can your cloud provider support these requirements, and with the level of availability needed? Is this a good case for an internal cloud service instead?
Can the service provider deliver the level of performance your application needs? Modern applications are written in tiers and each likely has a different performance and availability requirement. The front end, or web tier, can be easily distributed to provide the scalability and availability an application requires. The middle tier that provides the business logic may also have been written to accommodate a distributed server model for easy scaling and high availability. The back end, or data tier, is the most likely to require a single physical computing environment, so it is imperative that a defined approach to scaling and availability be put in place.
The questions to ask therefore are: Can the cloud service enable dynamic scaling of compute nodes when required? Will your data volume overwhelm the network pipe to the cloud? Is the service provider set up to handle complex tiered applications? Does your database need a bare metal server and if so what does it take to provision and manage it?
What level of security is required and can a public cloud provider meet it? While this has been a big focus of service providers in recent years, it still remains a concern. In fact, it is the primary reason that within the US federal government, private cloud spending in 2014 is projected to be $1.7 billion vs. $118.3 million on public clouds. However, in 2010, to increase the adoption of public cloud services by US government agencies, a program called FedRAMP was created to streamline the process of determining whether cloud services meet Federal security requirements. In May 2013, Amazon Web Services passed the FedRAMP cloud security assessment, making it one of the first commercial cloud providers to be certified. This should speed the adoption of public cloud services in the US federal government.
In the enterprise world, while security concerns remain, they are lessening as enterprises get more experience with cloud services and as service providers focus on enhancing security. The recent Future of Cloud Survey from North Bridge Venture Partners and GigaOM shows that security as an inhibitor is declining year-over-year from 55% of respondents in 2012 to 46% in 2013.
As organizations are realizing the benefits of cloud computing, an increasing number are choosing an internal option. All recent industry surveys point to a growing use of internal clouds as IT starts to transform internal infrastructure into more flexible and cost effective private cloud services. Hybrid clouds, which bridge public and private clouds, are forecasted to grow at a considerable rate, with the Future of Cloud survey predicting that hybrid clouds will be the most prevalent model within five years.
Cost factors are always an important consideration. Given the level of systems and network redundancy and availability required for mission critical applications, an analysis of the cost of hosting these apps in-house versus the ongoing cost of using public cloud services must be done. Some of the most important factors to consider in the analysis include:
" Internal hardware costs are new systems required or can you leverage existing infrastructure? Remember to consider the costs for failover and disaster recovery computing, storage and network infrastructure." License costs for third party software such as hypervisors, databases and applications." Internal operational expenses for managing internal infrastructure and software, power and cooling costs etc." For capital purchases, the cost of capital leases and depreciation.
While a quick analysis may indicate that an in-house deployment is cheaper than ongoing cloud expenses, that may not be the case. Cloud providers are likely to have better economies of scale, which translate into lower operating costs, higher availability and redundancy, and easier scalability.
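The cost factors above can be pulled into a simple comparison over a planning horizon. The sketch below is purely illustrative: every figure and parameter name is a hypothetical placeholder, not vendor pricing, and a real analysis would add failover infrastructure, migration costs and growth projections.

```python
# Illustrative in-house vs. public cloud cost comparison over a planning
# horizon. All figures are hypothetical placeholders, not real pricing.

def in_house_tco(hardware_capex, annual_licenses, annual_opex,
                 depreciation_years=3, horizon_years=3):
    """Total cost of an in-house deployment over the horizon."""
    # Capex is spread over its depreciation life; only the portion
    # falling inside the horizon is counted.
    capex_share = (hardware_capex *
                   min(horizon_years, depreciation_years) / depreciation_years)
    return capex_share + horizon_years * (annual_licenses + annual_opex)

def cloud_tco(monthly_service_fee, horizon_years=3):
    """Total cost of the equivalent public cloud service."""
    return monthly_service_fee * 12 * horizon_years

in_house = in_house_tco(hardware_capex=300_000,
                        annual_licenses=40_000,
                        annual_opex=60_000)
cloud = cloud_tco(monthly_service_fee=14_000)
print(f"in-house: ${in_house:,.0f}  cloud: ${cloud:,.0f}")
```

Even a coarse model like this makes the trade-off explicit: the in-house case front-loads capital spending and recurring operations, while the cloud case is a pure operating expense that scales with usage.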
All indications are that mission critical applications are moving to the cloud at an increasing rate as new and more secure cloud services are offered in the marketplace. Some legacy apps are being replaced by SaaS solutions, cutting down on the number of apps that can't be moved.
The important point is that decisions about moving to the cloud need to start from an application standpoint, with thorough consideration of viability, availability, performance, security and cost. Doing this work will point to the right cloud, whether public, private or hybrid, or perhaps to no cloud at all.
Egenera's industry leading cloud and data center infrastructure management software, Egenera PAN Cloud Director and PAN Manager software provide a simple way to quickly design, deploy and manage IT services while guaranteeing those cloud services automatically meet the security, performance and availability levels required by the business. Headquartered in Boxborough, Mass., Egenera has thousands of production installations globally, including premier enterprise data centers, service providers and government agencies. For more information on the company, please visit www.egenera.com.