Big trade shows like Interop can be confusing. So many vendors, so much noise, so much spin, so little clarity. Searching for technology trends among the tchotchke seekers, spokes-models and aggressive PR reps can feel like a hopeless task.
However, as these shows drag on, people let their defenses down, and by day two or three, you can discover what the real top-of-mind concerns are for CIOs, security professionals, mobility managers and other key IT professionals tasked with moving technology forward in their organisations.
Keep in mind that any one reporter can only cover so much ground at shows like Interop. Topics left out that could have easily made this list include BYOD, social media management, OpenFlow and Big Data.
After dozens of meetings and a slew of informal conversations, here are five key takeaways from Interop 2012 that CIOs should be aware of:
1. Interoperability is making a comeback.
If you've been around for a while, you may remember that Interop was originally called NetWorld+Interop, and that its focus was interoperability and improvements in networking. In the intervening years, key standards have been ironed out and, on the proprietary side, vendor lock-in has been tolerated.
The focus on interoperability was lost.
However, as organisations move to new computing models, old issues like interoperability are becoming current again. This isn't necessarily a step backwards: in the cloud, for instance, vendors have the opportunity to do things better this time around by baking security into key networking devices rather than bolting it on later.
The best example of this I saw at Interop 2012 was the Extreme Networks-Fortinet partnership. Normally, partnership announcements aren't very exciting. They're rarely as important as the partners seem to think. In this instance, though, the two have combined to bring to market a high-performance cloud switch designed for multi-tenant datacentres. What's unique is that Fortinet security (firewall, IPS, AV, content filtering, app control) is built into the switch.
A solution like this removes many of the risks associated with multi-tenant environments, while also giving service providers the capability to offer advanced features as value-added services.
2. Security needs to be throughout the network, and that's no longer empty talk.
Another vendor seeking to wed security and networking is Vyatta. Its new vPlane technology seeks to relieve the traffic bottlenecks that are emerging in virtualised datacentres. Application density and multi-tenancy are creating networking challenges that can't be solved with legacy tools.
Vyatta vPlane is a Layer 3 router forwarding plane that is architecturally separate from the network controller. Leveraging a new fast-path architecture on an Intel Westmere-class system, vPlane, the company contends, can deliver more than 8 million packets per second per core, which Vyatta says is a more than tenfold improvement over the norm. And since vPlane scales linearly with the addition of cores, an entire Westmere system can deliver 35 million packets per second in a single rack unit of datacentre space.
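Taking Vyatta's figures at face value, the linear-scaling claim reduces to simple arithmetic. A quick sketch (both constants are the company's stated numbers, not independent measurements):

```python
PER_CORE_PPS = 8_000_000   # Vyatta's claimed per-core forwarding rate
SYSTEM_PPS = 35_000_000    # Vyatta's claimed full-system rate

def linear_throughput(cores, per_core_pps=PER_CORE_PPS):
    """Under a linear-scaling model, total forwarding rate is
    simply cores x per-core rate."""
    return cores * per_core_pps

# Cores implied by the quoted system figure at the quoted per-core rate
implied_cores = SYSTEM_PPS / PER_CORE_PPS
print(implied_cores)  # 4.375 -- the system figure reflects roughly 4-5 cores
```

In other words, the 35 million pps system figure corresponds to only four or five cores at the quoted per-core rate, so a fully populated Westmere box would have headroom beyond it if the scaling really is linear.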
Those speeds are impressive, but what's newsworthy is that security is again part of the basic feature set: various Layer 3 security functions (firewall, VPN, web filtering, etc.) are built into the fabric of Vyatta vPlane.
"It's no longer sufficient to have security just at the network perimeter," said Scott Sneddon, Vyatta's director of cloud solutions. "Security must be baked into cloud infrastructures. In virtualised and cloud environments, security should be everywhere in the network, not just at the edges."
CloudPassage offers a similar assessment of cloud security. "CIOs must realize that trends like BYOD and cloud mean that single-stop shopping is no longer viable. Innovation is rampant, and smart organizations will seek out best-of-breed solutions," said Rand Wacker of cloud security company CloudPassage. However, no solution can be considered best-of-breed these days if interoperability isn't on its feature checklist.
3. Cloud infrastructure is becoming a commodity, but new services will help providers protect their profits.
Even though many market sectors are still wary of the cloud and are only embracing it for small, non-mission-critical applications, cloud computing is slowly becoming the norm. As cloud adoption continues, basic cloud services, especially cloud capacity and infrastructure, are getting commoditised.
What this means is that service providers will need new revenue streams, and those new revenue streams will come from wrapping services around cloud basics. We've grown used to the "servicization" of everything from CRM apps to security to storage, but complex technologies such as WAN optimization, content delivery networks (CDN) and even unified communications (UC) are now being decoupled from hardware and delivered as services.
Of course, these services push prices down in their market spaces, but they also open up huge new market segments (particularly small business and the mid-market) to technologies that used to be prohibitively expensive.
"The WAN, simply put, has poor characteristics in terms of bandwidth, or latency. This is true as distance increases in order of magnitude between a global workforce (created to be closer to customers) and increasingly centralised data (to rein in IT costs)," said Ajit Gupta, CEO of WAN optimisation as a Service provider Aryaka. "The exchange of mission-critical applications, large files and big data becomes a horrific experience as downloads can take minutes, even hours instead of milliseconds and backup takes days, leaving corporations at risk in the event of a disaster."
Aryaka says it intends to solve that problem by removing WAN optimisation from expensive, dedicated hardware and delivering it as a pay-for-what-you-use service.
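Gupta's latency point follows from a standard back-of-envelope bound: TCP throughput on a long, high-latency path is capped at roughly window size divided by round-trip time, no matter how much raw bandwidth the link has. The window and RTT values below are illustrative assumptions, not Aryaka figures:

```python
# Back-of-envelope: un-tuned TCP throughput over a long-haul WAN is
# bounded by window_size / RTT, regardless of raw link bandwidth.
WINDOW_BYTES = 64 * 1024   # classic 64 KB TCP window (assumed, no window scaling)
RTT_SECONDS = 0.150        # e.g. a transcontinental round trip (assumed)

max_throughput_bps = WINDOW_BYTES * 8 / RTT_SECONDS   # bits per second
file_bits = 1 * 2**30 * 8                             # a 1 GB file
transfer_seconds = file_bits / max_throughput_bps

print(f"{max_throughput_bps / 1e6:.1f} Mbit/s")   # -> 3.5 Mbit/s
print(f"{transfer_seconds / 60:.0f} minutes")     # -> 41 minutes
```

Even on a gigabit link, those assumptions yield about 3.5 Mbit/s per connection and roughly 40 minutes for a single gigabyte, which is exactly the "minutes, even hours" experience Gupta describes and the inefficiency WAN optimisation targets.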
OnApp is offering CDN capabilities as a pay-as-you-go service, hoping to help hosting companies and smaller cloud providers compete against behemoths like Amazon. And Mitel has shifted its UC suite to the cloud and turned it into a service, which means that businesses no longer have to worry about managing complex PBXes.
4. As resources move into the cloud, IT operations are forced to navigate in the dark.
Resources in public clouds are difficult for IT operations to manage. If an application performs poorly, what is the main culprit? Is it an overloaded router? Is it a storage problem? Is network latency the issue?
If IT ops teams are forced to rely on legacy Application Performance Monitoring (APM) tools, they're basically flying blind.
Network visibility is something I heard a lot of chatter about on the show floor, and companies like ExtraHop Networks, which promises to deliver cloud visibility, were swamped with attendees.
"The old model relies on agents," said Jesse Rothstein, ExtraHop's CEO. "That's fine for a development environment, but production environments, especially ones that are built on cloud infrastructures, require a different approach."
To keep up with dynamic, ever-changing virtualised infrastructures, ExtraHop argues that the only approach that can provide real-time visibility is one that relies on information gleaned from the network itself. ExtraHop's system performs full-stream reassembly and full content analysis of network traffic to extract performance and health metrics.
This isn't the only approach to APM I saw at Interop, but it was the one that seemed to get the most attention from attendees.
5. DNS vulnerabilities are becoming too risky to ignore.
Organisations in every industry now depend on their web presence for commerce and everyday business processes and transactions. That dependence, coupled with the ease with which sophisticated cyber-attacks can be launched today, has brought renewed focus to old attacks, such as distributed denial of service (DDoS) attacks.
But it's not just attacks that can take down or slow sites. As organisations continue to shift away from traditional architectures and towards web services, they are starting to bump up against the limitations of legacy DNS management.
"If DNS performance isn't optimised, or worse, a DNS error disrupts service, critical processes and services can become unavailable, often resulting in revenue loss and damage to reputation," said Ben Petro, senior vice president of Verisign's Network Intelligence and Availability group.
Petro argues that cloud-based security providers are best-positioned to tackle both cyber-security and DNS management, in order to help companies keep pace with the dynamic nature of attacks and, at the same time, to ensure web-based operations are always secure and available.
"In the past, security professionals have relied on over-provisioning of bandwidth and firewalls to help prevent attacks, but these methods have proved costly and ineffective," Petro said. "That's why the cloud is now prevailing as the most efficient solution. Cloud-based security enables quick detection and mitigation of attacks before they can reach the network, allowing companies to stay online while eliminating the need to make significant investments in equipment, infrastructure and expertise."
By linking DNS management to network security, and by moving both to the cloud, organisations can save time and money through operational efficiencies, lower support costs and economies of scale.
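To make the failure mode concrete, here is a minimal sketch (not tied to any vendor mentioned here) of an application-side lookup with a hard timeout, so a slow or broken resolver surfaces as a handled error rather than a hung transaction:

```python
import socket
from concurrent.futures import ThreadPoolExecutor, TimeoutError

_pool = ThreadPoolExecutor(max_workers=4)

def resolve(hostname, timeout=2.0):
    """Resolve a hostname with a hard timeout.

    A slow or failing DNS server then degrades into an error the caller
    can act on (retry, serve a cached answer, fail over) instead of a
    request that hangs indefinitely.
    """
    future = _pool.submit(socket.gethostbyname, hostname)
    try:
        return future.result(timeout=timeout)
    except TimeoutError:
        return None  # resolver too slow; the worker thread itself cannot be cancelled
    except socket.gaierror:
        return None  # name does not exist, or resolver error

print(resolve("localhost"))            # typically '127.0.0.1'
print(resolve("nonexistent.invalid"))  # None -- the .invalid TLD never resolves (RFC 2606)
```

Cloud DNS providers are essentially selling the server-side analogue of this defensiveness: anycast capacity, monitoring and failover so the lookup itself rarely becomes the weak link.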
Tying It All Up
If any single thread ties these five items together, it's this: organisations can't avoid the cloud any longer. Vendors are shifting everything from application delivery to security and monitoring into the cloud. It's chaotic and difficult to manage, but the tools are there; you just can't go to a single provider (like Cisco) as you did in the past. Interop is returning to its roots because interoperability is once again a major obstacle impeding progress.