Is one enough - what next for New Zealand's internet connectivity?
- 29 September, 2012 22:00
New Zealand is connected to the internet via a single undersea cable. Unlike a typical point-to-point cable, it runs as a fibre ring, with New Zealand landings at Muriwai and Auckland. If the cable is cut at one point, data travels in the opposite direction and goes the long way around the ring. This diverse routing provides greater resilience in the event of a physical failure.
There's been a lot of debate about the need for another undersea cable. The reasons given are: 1) to eliminate the present supplier monopoly; 2) to mitigate the risk of partial or total service failure; and 3) to cope with the increase in demand for IP services.
The lack of a second cable hasn't been a major issue to date; however, the failure of the Pacific Fibre initiative has highlighted the vulnerability of the current situation. Although nothing has changed, a shift in thinking is already starting to influence decision making across the country and, more importantly, within government. This is the legacy that Pacific Fibre has left in the minds of those considering the future growth and development of the economy.
The impact on cloud computing
The present situation will affect New Zealand in a number of ways, but the greatest impact will be felt in the area of cloud computing.
Cloud computing has come of age in the last few years bringing with it immense benefits and significant cost savings. More and more services are being offered via the cloud and some analysts predict that local storage will become a thing of the past as internet connection speeds increase and cloud storage drops in price.
Cloud technology is entirely dependent on network connectivity to deliver service to the end-user. A single point of failure, such as a national fibre connection to the internet, could hinder private sector take up of cloud services.
As part of the fall-out from Pacific Fibre, it has been reported in the press that the government is restricting cloud choices to ‘onshore only’ for government departments and government provided services.
Interestingly, this policy would also logically apply to Australia-based cloud offerings. It is likely to have a knock-on effect on the decisions taken by the private sector.
The current situation presents organisations with an increased risk if offshore cloud is to be adopted. The risk is low, although it does increase over time as the infrastructure ages.
Offshore versus onshore
The benefits of offshore over onshore cloud are significant. Offshore cloud offerings are typically better, faster and cheaper, for a number of reasons. They benefit from vast economies of scale not available to onshore offerings, and they exploit competitive advantages such as low tax regimes, cheap connectivity and reduced build/run costs.
Driven by end-user demand and competition amongst providers, offshore cloud innovates faster: new features appear sooner. The hosting market already validates this, with differences in price, features, functionality, availability and innovation. A quick comparison of private versus corporate e-mail serves as a further example. A private e-mail account usually comes with several gigabytes of storage and a plethora of features, for little to no end-user cost. A corporate e-mail account typically comes with a few hundred megabytes of storage and a much reduced feature set, costing hundreds of dollars per user per year. More importantly, the majority of private e-mail services used by New Zealanders are hosted offshore, e.g. Gmail, Xtra and Outlook. In the event of a fibre cable failure the situation would be dire for those users.
Offshore cloud services can typically be bought on an uptime/availability basis: the greater the uptime, the higher the price. Differentiating a service across both feature set and availability is unique to cloud. It requires a level of scale, sophistication and management only achievable by big corporations, e.g. Google, Amazon, Apple and Microsoft, to name a few. Thanks to that scale, offshore cloud is managed to the highest level of availability using sophisticated tools, the very latest practices, the most efficient processes and, of course, the most advanced facilities. All of this is designed to ensure service continuity and availability above all else. Because users enforce their service level agreements, providers must invest in resiliency, redundancy and reporting tools. As a consequence, operational and service availability issues are less likely with offshore than with onshore solutions, and an offshore cable cut is much less likely than an onshore operational issue or security breach.
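The uptime tiers that providers price against translate directly into permitted downtime per year, which is the figure buyers actually care about. A minimal sketch of that arithmetic (the tier percentages below are illustrative, not quoted from any particular provider's price list):

```python
# Convert an SLA uptime percentage into the annual downtime it permits.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (ignoring leap years)

def downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime per year allowed by a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

# Illustrative tiers: each extra "nine" cuts allowed downtime tenfold,
# which is why higher-availability tiers command higher prices.
for tier in (99.0, 99.9, 99.99):
    print(f"{tier}% uptime allows {downtime_minutes(tier):.1f} minutes of downtime per year")
```

The tenfold jump in stringency between tiers is what drives the investment in redundancy and reporting described above.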
Despite the very low likelihood of a physical fibre problem with the Southern Cross cable, it is still driving policy and behaviour, much to the detriment of progress in the cloud arena.
Privacy issues regarding offshore cloud offerings have also been largely overstated. A data breach is no more likely offshore than onshore. There is no government policy mandating that the private sector hold data onshore only. At present there is no delineation between what type of data is held where and for what purpose, hence New Zealand-originated data is stored in various locations both domestically and overseas.
Most IP traffic in New Zealand already travels offshore and the government's investment in UFB (Ultra-Fast Broadband) is likely to push this trend even further.
Competitive advantages in all areas of the value chain also drive costs down and keep margins low. As a consequence, offshore cloud centres are appearing in locations not previously considered viable. Google's recent commitment to a centre in Chile highlights the decision-making process behind these choices. Tax breaks, subsidies and other 'normal' incentives are just part of the equation. Factors such as climate, seismic stability, cost-effective electricity supply/generation and, of course, connectivity are increasingly coming into play. Interestingly, Google would be unlikely to build a datacentre in New Zealand because of the internet connectivity issue; Google's centres are in locations with multiple internet connections.
Onshore only policy and the implications
Onshore cloud would have to be run by onshore personnel in order to follow through on the logic of mitigating a failure of the single undersea cable. A permanent and extremely capable onshore presence would always be required even if there was an offshore/onshore ratio to reduce costs. As a consequence onshore cloud is likely to be more expensive at all points in the lifecycle i.e. design, build, operate and change.
Local companies will be unable to follow global trends towards cloud solutions if barriers to adoption exist such as price, reduced feature set and service risk.
International providers of cloud services would have to locate services onshore to meet the requirements of government or large organisations with an onshore only policy. This is highly unlikely given the size of the New Zealand opportunity.
Risks of the present situation
The Southern Cross cable represents a single point of physical failure, although the current cable is a ring with built-in diverse routing. It also represents a single logical point of failure, which is of greater concern but much less well understood.
A cable break undersea can happen for a number of different reasons, but a logical service interruption could also occur. A denial-of-service attack could be carried out against a single service provider, exploiting vulnerabilities that may exist in its security layers and processes. A second cable, run by a different service provider with different infrastructure, security and processes, would offer a degree of reassurance in the face of militant hacking. A denial-of-service attack could have a significant impact over a much longer duration, particularly if, as is common practice, the attack was carried out in bursts to avoid detection of the mechanisms used.
The time to restore a broken submarine cable could be anywhere between 24 and 72 hours depending on the location, depth and severity. Bangladesh suffered a break of the Sea-Me-We-4 submarine cable on 07/06/12, and the entire country was affected.
The cut occurred 60 km off the coast of Singapore. The incident crippled the internet for the 158 million people of Bangladesh, but the impact was felt further afield. Service providers who use the cable for transit scrambled to find alternative capacity via other routes, but for the majority of internet users, normal service took a week to restore.
International providers who use the cable to service parts of South Asia and the Middle East were left considering alternatives, calling into question the economics of buying capacity on a path with no redundancy.
The situation hasn't changed, but what has changed is the mindset. The legacy of the Pacific Fibre initiative has been to bring into the spotlight the vulnerability that exists around a single undersea cable connecting New Zealand to the internet.
Businesses will be faced with a choice: adopt cloud services now, in the present situation, or hold off until local offerings are available. Onshore cloud services will take time to build and to establish themselves in the marketplace.
A good solution that has not yet been brought to the foreground would be to lay another cable between Australia and New Zealand. Apart from being cheaper, it would also carry much less commercial risk for the participants. The routing of such a cable would pose its own challenges, but the rewards to New Zealand would be significant, long lasting and go well beyond the economic benefits brought by the project.
Bradley de Souza is an internationally recognised CIO/CTO who has specialised in change and transformation across industries around the world. Reach him at firstname.lastname@example.org.