How to save the Internet

Professor Hannu H. Kari of the Helsinki University of Technology is a smart guy, but most people thought he was just being provocative when he predicted, back in 2001, that the Internet would shut down by 2006. "The reason for this will be that proper users' dissatisfaction will have reached such heights by then that some other system will be needed," Kari said, "unless the Internet is improved and made reliable."

Last fall, Kari bolstered his prophecy with statistics. Extrapolating from the growth rates of viruses, worms, spam, phishing and spyware, he concluded that these, combined with "bad people who want to create chaos," would cause the Internet to "collapse!" -- and he stuck to 2006 as the likely time.

Kari holds dozens of patents. He helped invent the technology that enables cell phones to receive data. He's a former head of Mensa Finland. Still, many observers pegged him as an irresponsible doomsayer and, seeing as how he consults for security vendors, a mercenary one at that.

And yet, in the past year, we've witnessed the most disturbingly effective worm yet, Witty, which not only carried a destructive payload but also proved nearly 100 percent effective at attacking the machines it targeted. Paul Stich, CEO of managed security provider Counterpane Internet Security Inc., reports that attempted attacks on his company's customers multiplied from 70,000 in 2003 to 400,000 in 2004, an increase of more than 400 percent. Ed Amoroso, CISO of AT&T Corp., says that of the 2.8 million e-mails sent to his company every day, 2.1 million, or 75 percent, are junk.

The increasing clutter of online junk is driving people off the Internet. In a survey by the Pew Internet and American Life Project, 29 percent of respondents reported reducing their use of e-mail because of spam, and more than three-quarters, 77 percent, labeled the act of being online "unpleasant and annoying." Indeed, in December 2003, the Anti-Phishing Working Group reported that more than 90 unique phishing e-mails had been released in just two months. Less than a year later, in November 2004, it counted 8,459 unique phishing e-mails linking to 1,518 sites.

Kari may have overstepped by naming a specific date for the Internet's demise, but fundamentally, he's right. The trend is clear.

"Look, this is war," says Allan Paller, director of research for The SANS Institute. "Most of all, we need will. You lose a war when you lose will."

So far, the information security complex -- vendors, researchers, developers, users, consultants, the government, you -- has demonstrated remarkably little will to wage this war. Instead, we fight fires, pointing hoses at uncontrolled blazes, sometimes inventing new hoses, but never really dousing the flames and never seeking out the fire's source in order to extinguish it.

That's why we concocted this exercise, trolling the infosecurity community to find Big Ideas on how to fix, or begin to fix, this problem.

Our rules were simple: Suggest any Big Idea that you believe could, in a profound way, improve information security. We asked people to think outside the firewall. Some ideas are presented here as submitted; others we elaborated upon. Those who suggested technological tweaks or proposed generic truths ("educate users") were quickly dismissed.

What was left was an impressive, broad and, sometimes, even fun list of Big Ideas to fix information security. Let's hope some take shape before 2006.

Get all the smart people together and give them lots of money

The best place to start is with a Big Idea to concentrate and organize all the other big ideas -- a Manhattan Project for infosecurity.

Daniel Wolf, director of the Information Assurance (IA) Directorate at the National Security Agency, believes that while good research is taking place in pockets, a massive undertaking to tame this problem ought to be instituted. "It's gaining legs," he says of his Big Idea. "(The Department of Defense) put together a fairly significant working group to look at this."

Such a project would require cooperation among Wolf's IA Directorate (2,700 strong, by the way), DoD, private-sector scientists, academic researchers, foreign partners, some of the national research labs such as Sandia, and the Defense Advanced Research Projects Agency. Wolf wouldn't say how much money he'd like to see go to such a project, but The SANS Institute's Paller throws out US$100 million as a good number.

Of course, the project would encounter challenges different from those faced by the actual Manhattan Project. There, engineers started with a blank sheet of paper and built the bomb from scratch. With information security, a 40-year legacy of poor coding and bad architectures must be negotiated. But then again, the fact that it's hard is what makes it so necessary.

Hire a czar

A surgeon general-like figure for security is not only a Big Idea; it's a popular one. Several folks suggest creating some kind of "government leader" or "public CIO for security," none more vocally than Paul Kurtz, the executive director of the Cyber Security Industry Alliance. "We need more leadership at a higher level of government," he says. At the U.S. Department of Homeland Security, he says, cybersecurity has been buried, and he believes DHS should have an assistant secretary-level person for cybersecurity.

At press time, that proposal had been floated but didn't make it into the intelligence reform bill. Meanwhile, a succession of notable leaders for cybersecurity resigned from their DHS posts -- some suggest because of frustration over the low status of the role within the agency. Congress even explored the possibility of moving government oversight of cybersecurity from DHS to the Office of Management and Budget.

"Somehow, the surgeon general has this special place with us," says Scott Charney, chief security strategist of Microsoft Corp. "We don't have the focal point in security that health care gets with the surgeon general."

One of the surgeon general's best-known successes is found on the side of cigarette packages. The smoking analogy cropped up repeatedly with big thinkers. Once upon a time, society believed that if you chose to inflict harm on yourself by smoking, you were free to do so. The concept of secondhand smoke changed that equation and now smoking is anathema in many public places.

Networks are no different from smoking in the sense that your bad security habits can adversely affect innocent bystanders. Online, in fact, it may be worse, since the secondhand smoke of cyberspace doesn't dissipate with time or space. It debilitates every machine it touches equally, as if everyone were forced to take a drag.

We propose a high-profile surgeon general for information security, who reports to the secretary of DHS. Imagine labels on software like those on cigarettes -- Infosecurity General's Warning: The use of software and hardware that is not certified secure can harm your system and other people's systems, and you may be held liable for those damages.

Wield sticks, dangle carrots

Recently, the U.S. Air Force, mired in patching hell, got what it wanted from Microsoft -- a more secure version of Windows, configured uniformly across the agency. Microsoft agreed to the deal, according to reports, because the Air Force had considered moving to open-source software. The Air Force CIO and security champion John Gilligan was quoted as saying at the time, "We want Microsoft focused not on selling us products but (on enhancing) the Air Force in our mission." He added that he hoped his agency's demands would spill over to other organizations that could take advantage of the secure configuration.

At any rate, Gilligan has a pretty big stick to wield (or carrot to dangle, depending on whether you're an optimist or a pessimist) to get what he wants -- a $500 million contract. But incentives as a Big Idea, to motivate others into better security, can be applied by anyone. Here are some of the incentives-based programs suggested to us:

-- Get a legal opinion. Christofer Hoff, CISO of WesCorp, says that users should require their vendors to have lawyers run software through the assessment mill and churn out a legal opinion on how its security would hold up in a liability case. Watch as the vendors scramble to make sure their software can pass muster.

-- Software Underwriters Laboratory (UL). Why not warehouse those legal opinions or other independent assessments with a UL-like organization? You wouldn't buy a $400 iPod if it hadn't been approved by UL, but you'd buy a $4 million software system with no analogous security assessment?

-- If those Big Ideas take off, then watch as the insurance industry uses the data to adjust premiums. Vendors would instantly devote more resources to building better software, which would result in lower insurance rates on their products.

-- File class-action lawsuits. It may come to this. Keeping with the smoking analogy, all it will take is a sufficient level of outrage and damage before enterprising lawyers -- who've already tried this -- successfully hold vendors accountable for poor software.

Treat end users like the dummies they are

Amoroso of AT&T believes that the fundamental security problem is that during the past decade, and quite unintentionally, the network's intelligence has migrated to the edge. "We're all sys admins," he says. And millions of end users holding sway over their security settings translates to millions of potential dumb configurations, boneheaded double-clicks and unintentional security lapses. Accidents happen, and bad guys take advantage of the fact that not all end users are created equal in terms of security.

After all, Amoroso argues, do you control power distribution around your house, or do you just plug stuff in?

He thinks AT&T can make a ton of money off this idea: Return control to the network providers (like his own company's phone system in the 1970s, he says, a time when Ma Bell controlled everything, including the technology's interface), and let the providers charge you for doing all of the filtering, traffic analytics, worm detection and incident response. "That's my solution," Amoroso says. "Create a service. Make money."

Becky Autry, CIO of the United States Olympic Committee, loves Amoroso's plan. "It's overwhelming; I'm overwhelmed," she sighs. Autry has a network staff of just three to handle IT for three training centers as well as events security. "Smaller organizations just can't get good or dedicated staff to handle a problem that's so large and changing so quickly."

Eliminate all coding errors within two years

Mary Ann Davidson, CSO of Oracle Corp. and champion of the quality coding movement, says she's tired of coders arguing that their jobs are too creative to eliminate errors such as buffer overflows -- that coding's an art, not a science. She applauds ethical hacking, where developers attempt to break software before selling it. Davidson says some schools now divide developer classes in two, a green team for writing code and a red team for breaking it. The application's relative security becomes part of its final grade. "Why isn't that standard development process?" she asks.

Davidson knows that, with billions of lines of legacy code and billions more in development, eliminating all coding errors is quite a lofty goal. But, "We need goals, right?" she says. And if doing that means limiting the freedom and creativity of coders, Davidson says, so be it. "We should be marching toward a realm where it's harder for people to create vulnerabilities. We need a revolution," she says.
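
To make that green-team/red-team split concrete, here is a minimal sketch, in Python, of the red team's half of the exercise: a small parser written by one side, and a crude fuzzing loop that throws malformed input at it and counts anything the parser fails to reject cleanly. The parse_record function, its record format and the pass/fail rule are invented for illustration; they aren't drawn from any curriculum Davidson describes.

import random

def parse_record(raw: bytes) -> dict:
    """Green team: parse b'key=value;key=value' records into a dict."""
    record = {}
    for field in raw.decode("utf-8").split(";"):
        if not field:
            continue
        key, value = field.split("=")   # assumes exactly one '=' per field
        record[key.strip()] = value.strip()
    return record

def red_team_fuzz(rounds=10_000):
    """Red team: feed random bytes to the parser and count every input it
    fails to reject cleanly. Only UnicodeDecodeError is a documented,
    graceful rejection here; any other exception counts as a defect."""
    findings = 0
    for _ in range(rounds):
        blob = bytes(random.getrandbits(8) for _ in range(random.randint(0, 32)))
        try:
            parse_record(blob)
        except UnicodeDecodeError:
            pass            # rejected cleanly
        except Exception:
            findings += 1   # undocumented crash -- the kind of flaw that lowers the grade
    return findings

if __name__ == "__main__":
    print("unexpected failures found by the red team:", red_team_fuzz())

Run against this particular parser, the loop regularly turns up unhandled errors (fields without an '=' sign crash it), which is exactly the sort of result that, in Davidson's scheme, would show up in the application's final grade.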

Pry PCs from their cold, dead hands

Guns are dangerous; therefore, we license them. We give them unique serial numbers and control their distribution. James Whittaker says programmable PCs are dangerous, so why not treat them like guns?

"Let's make all end user devices nonprogrammable," he says. "No one can connect to the Internet on a machine that creates code. If you want a computer to do programming, you would have to be licensed. We could license software companies to purchase programmable machines, which would be completely traceable along with the code created on them."

That would blunt the information security problem -- suddenly, all that intelligence at the edge of the network that Amoroso wants to pull back in isn't just reined in; it's physically stripped out. On the other side, new levels of accountability and liability are created by licensing developers and eliminating anonymity from coding.

Catch some bad guys

Time and again, security types bemoan the light sentences hackers get. If the penalties were harsher, perhaps people wouldn't be so fast to spread their malicious code.

But penalty is not a deterrent; arrest is. Right now, the bad guys know the risk equation is favorable -- that it's extremely unlikely they will be caught. A higher capture rate would dissuade them.

Creating higher capture rates has a lot to do with anonymity on the network -- or, more specifically, removing it. Many of the Big Ideas in this space propose less anonymity -- licensure, for example. Microsoft's Charney wonders what effect automatic traceback packets -- knowing quickly and reliably where data came from -- would have. "It's an astounding thought," he says.

And then, he immediately comes up with the problems it presents. Traceback tells you where, not who. And privacy issues get thorny quickly. "Can you use the highway anonymously?" Charney asks. "No. But you also can't be stopped for no reason. More complicated than that, the Supreme Court has already ruled that you can't force someone to attach their name to political speech if they don't want to. So do you create an anonymous part of the Internet to ensure free speech? And if so, what stops bad guys from just using that?"

Still, if privacy issues could be worked out, and capture rates went up, attempted attacks would go down.
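
For a sense of what traceback could look like in practice, here is a toy Python simulation of probabilistic packet marking, one approach from the research literature in which routers occasionally stamp passing packets with their own identity so that a victim receiving many packets can statistically reconstruct the attack path. The router names, the path and the marking probability are all invented; this is a sketch of the general idea, not the mechanism Charney has in mind.

import random
from collections import Counter

MARK_PROB = 0.2   # chance that a router overwrites the packet's mark

def send_packet(path):
    """Forward one packet along path (attacker side first, victim side last);
    each router overwrites the mark with probability MARK_PROB."""
    mark = None
    for router in path:
        if random.random() < MARK_PROB:
            mark = router
    return mark

def reconstruct(path, packets=100_000):
    """Victim side: tally the marks seen across many packets."""
    counts = Counter()
    for _ in range(packets):
        mark = send_packet(path)
        if mark is not None:
            counts[mark] += 1
    # Routers near the victim overwrite later and so appear most often;
    # sorting by frequency roughly recovers the path from the victim outward.
    return [router for router, _ in counts.most_common()]

if __name__ == "__main__":
    attack_path = ["r1-attacker-edge", "r2", "r3", "r4", "r5-victim-edge"]
    print(reconstruct(attack_path))

Note what the toy makes plain: the victim learns which routers the traffic crossed, not who sat at the keyboard -- traceback tells you where, not who, exactly as Charney warns.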

Call the cybercops

Part of increasing capture rates would have to include better policing. To help this, Bill Boni, CISO of Motorola Inc., has come up with the Big Idea of a cybersecurity version of Interpol. "The problem with existing collaboration on cybercrime is, it's episodic and it ignores the fact that investigation requires the significant participation of the private sector." With a "Cyberpol," you could license private eyes and forensic experts who not only would facilitate the cooperation but also would improve response time, as there already isn't enough law enforcement for cybercrime.

"Every railroad has its own police who don't have to call for backup if you're doing something wrong on their property," Boni says. "In Canada, law enforcement has simply outsourced white-collar crime investigation to licensed private investigators. The Mounties just said, We can't deal with it. You investigate, and if we need to be called in, then bring it to us."

A Cyberpol would facilitate international cooperation on investigations as well. That's key, as many virus writers live and work overseas, under the cover of fuzzy international law and law enforcement agencies with varying appetites for investigating cybercrime.

Unleash the power of XML and metadata

Part of the problem of securing business online is that the risk is often invisible. In the physical world, visual clues exist to help us discern who's a legitimate merchant and who's a crook. We know which neighborhoods to go to and which ones to avoid.

Several people suggest using XML and metadata to tag websites with safety, reputation, past performance and other security ratings to act as signposts for dangerous cyberneighborhoods. A virtual Better Business Bureau could manage the data so that when users visit a website, their computers pull down the XML metadata about that site. The data might tell the browser to go ahead and load the page because this really is a bank's website, its reputation is good, and it uses strong encryption and has appropriate privacy policies. At bad sites, the browser would simply deny the page load, thereby preventing a phishing scam or some spyware from being installed on the user's system.

Setting up that independent managing body to not only create the metadata criteria but to manage it, too, would be a huge job. But it would protect us from our blindness to online warning signs in profound ways.
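
As a rough sketch of how the browser's side of such a scheme might work, consider the Python snippet below. The XML schema, the field names and the score threshold are all invented for illustration -- no such registry or standard exists -- and a real system would also need the metadata to be signed by the managing body rather than taken on faith.

import xml.etree.ElementTree as ET

# Hypothetical metadata a virtual Better Business Bureau might publish for a site.
SAMPLE_METADATA = """
<site-metadata domain="examplebank.com">
  <category>financial</category>
  <reputation-score>92</reputation-score>
  <uses-strong-encryption>true</uses-strong-encryption>
  <privacy-policy-rated>true</privacy-policy-rated>
</site-metadata>
"""

def should_load(xml_text: str, minimum_score: int = 70) -> bool:
    """Return True if the metadata says the site is safe enough to render."""
    root = ET.fromstring(xml_text)
    score = int(root.findtext("reputation-score", default="0"))
    encrypted = root.findtext("uses-strong-encryption", default="false") == "true"
    return score >= minimum_score and encrypted

if __name__ == "__main__":
    print(should_load(SAMPLE_METADATA))   # True here; a low-scoring phishing
                                          # site would get its page load denied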

Dictate what software shouldn't do

Specs rule the development process. They dictate what a new software application should do, yet they rarely spell out what an application shouldn't do -- like run code by itself, allow anonymous access or let bugs destroy data. What if, from now on, all specs documents were required to include antirequirements -- a laundry list of common features, potential unintended consequences and bugs that the application must actively prevent before the product ships?
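
One way to keep antirequirements from being just another paragraph in the spec is to write them as executable negative tests. The sketch below, in Python, expresses a single antirequirement -- the application must never write files outside its upload directory -- as a test against a hypothetical save_upload handler; the function, the paths and the API are invented for illustration (POSIX paths assumed).

import os

UPLOAD_DIR = "/var/app/uploads"

def save_upload(filename: str, data: bytes) -> str:
    """Hypothetical upload handler. The antirequirement: never write
    outside UPLOAD_DIR, no matter how hostile the filename."""
    safe_name = os.path.basename(filename)        # strip any directory components
    target = os.path.join(UPLOAD_DIR, safe_name)
    if not os.path.abspath(target).startswith(UPLOAD_DIR + "/"):
        raise ValueError("path traversal rejected")
    return target    # actually writing `data` is omitted from the sketch

def test_no_path_traversal():
    # Antirequirement: "The application shall NOT write outside UPLOAD_DIR."
    assert save_upload("../../etc/passwd", b"x") == "/var/app/uploads/passwd"

if __name__ == "__main__":
    test_no_path_traversal()
    print("antirequirement holds")

A spec full of such tests gives the phrase "must not" teeth: the product can't ship until every one of them passes.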

Start a virtual Big Dig

In Boston in the late '90s, the main highway through town was rebuilt as a tunnel while the old road remained open. Engineers compared it to open heart surgery on a patient going about his business. It was called The Big Dig.

It disrupted commuters some, took too long to complete, cost far too much, and the new tunnel leaks a bit. Still, as a feat of engineering, it mostly worked. One of the most radical and ambitious Big Ideas is to build a new, secure Internet parallel to the old one and, over time, move everyone over to the new network. A virtual Big Dig, perhaps part of our Manhattan Project.

Let's be clear: Internet2 is probably not this parallel network. Internet pioneer Vint Cerf notes that the point of Internet2 -- an advanced network for the research community that can classify traffic and do other cool things the Internet can't -- is to become the sandbox for researchers that the Internet originally was, before it was consumed by the commercial sector.

Cerf himself has mixed feelings about a new parallel network being developed. "Boy, it's hard to tell how that would work," he says. "We're seeing things like overlays -- protocols and procedures that overlay the existing Internet and do networking in ways different than the Internet does it. Hey, the Internet itself was an overlay of ARPAnet." Gregg Mastoras, a senior security analyst at antivirus vendor Sophos, suggests that we could bifurcate networks so that there's a public network (like today's) and then a business network, for which you would have to register and agree to rules in order to be licensed to use it.

There's no question new public networks would be monumental undertakings. Wolf at the NSA, for example, is part of the Global Information Grid (GIG) project -- essentially the DoD's effort to build a secure network for all of defense and intelligence to share. He gets to build security into this network from the beginning, exactly what would have to happen for a new secure Internet to be built. Version 1 of Wolf's Information Assurance plan for GIG was 3,600 pages and included requirements for 117 technologies in various stages of development.

But if an alternative secure network could be built, it would create a tectonic shift in security and tip the vulnerability scale in favor of the good guys. Even if it leaked a little.

Senior Editor Scott Berinato can be reached at sberinato@cio.com.
