From the College of Natural Sciences

Securing the Cloud

The future of the Internet could look like this: The bulk of the world’s computing is outsourced to “the cloud,” to massive data centers that house tens or even hundreds of thousands of computers. Rather than doing most of the heavy lifting themselves, our PCs, laptops, tablets and smartphones act like terminals, remotely accessing data centers through the Internet while conserving their processing juice for tasks like rendering HD video and generating concert-quality sound.

Three big things need to be figured out for this cloud-based future to emerge. One is how the computers within these data centers should talk to each other. Another is how the data centers should talk to each other within a super-secure cloud core. The third is how the cloud should talk to everyone else, including the big Internet service providers, the local ISPs and the end-of-the-line users (i.e., us).

This last channel, in particular, interests Michael Walfish, an assistant professor of computer science and one of the principal investigators of the NEBULA Project, which was awarded $7.5 million by the National Science Foundation to develop an architecture for making the Internet more cloud-friendly. If we’re going to be trusting so much of our computing lives to the cloud, he believes, we need to develop a more secure model for how information travels.

“A sender should be able to determine the path that information packets should take,” says Walfish. “A receiver should not have to accept traffic that she does not want. An intermediate provider should be able to know where the packet's been and should be able to exercise its policies about the downstream provider that’s going to handle the flow next.”

Walfish’s system for providing such capabilities, which he’s developing with colleagues at Stanford, the Stevens Institute of Technology, and the University of California, Berkeley, is called ICING. It’s a set of protocols that allow every packet of information not only to plot out a path from beginning to end, choosing every provider along the way, but also to establish, as it goes, a chain of provenance that proves to both the intermediaries and the final recipients that it came from where it said it was coming from.
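
To make that idea concrete, here is a toy sketch in Python of a packet that carries its own sender-chosen path along with room for proofs. The structure and field names are our own illustration, not ICING’s actual header format.

# Toy sketch of an ICING-style packet: the sender fixes the full path, and
# each hop leaves room for cryptographic evidence. Illustrative only; the
# field names and layout are assumptions, not ICING's real wire format.
from dataclasses import dataclass
from typing import List

@dataclass
class Hop:
    realm_id: str            # a provider ("realm") the sender chose for the path
    consent: bytes = b""     # evidence that the realm agreed to carry this flow
    provenance: bytes = b""  # stamped by the realm when it forwards the packet

@dataclass
class Packet:
    path: List[Hop]          # the complete end-to-end route, chosen by the sender
    payload: bytes

    def next_realm(self, current: str) -> str:
        """Return the realm that should handle the packet after `current`."""
        ids = [h.realm_id for h in self.path]
        return ids[ids.index(current) + 1]

Under a model like this, a receiver can inspect the path and its provenance stamps before accepting a packet, which is the “should not have to accept traffic that she does not want” property Walfish describes.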

“What we do is take a packet, a unit of data, and we add some fields to the head of the packet,” says Walfish, who in 2009 won an Air Force Young Investigator Award for work related to ICING.

“These fields contain enough cryptographic information to be able to communicate to every realm along the way, and back to the sender, where the packet's been. So when a packet shows up, I know where it’s been. I know whether it obeys the policies of everyone along the path. That property does not exist today.”
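
As a rough model of that stamping process, consider the following sketch, which uses HMAC as a stand-in for ICING’s actual cryptographic construction (an assumption made purely for illustration):

# Rough model of per-realm stamping: each realm that forwards the packet
# leaves a keyed stamp, and any later realm (or the receiver) can verify the
# whole chain. The HMAC construction and the shared keys are assumptions for
# illustration; ICING's real cryptography differs in its details.
import hashlib
import hmac

def make_stamp(packet_id: bytes, realm_key: bytes) -> bytes:
    """A stamp proving that the realm holding realm_key forwarded this packet."""
    return hmac.new(realm_key, packet_id, hashlib.sha256).digest()

def chain_is_valid(packet_id: bytes, stamps: dict, keys: dict) -> bool:
    """Check, for every realm that stamped the packet, that its stamp is genuine."""
    return all(
        hmac.compare_digest(stamps[realm], make_stamp(packet_id, keys[realm]))
        for realm in stamps
    )

# Example: realms A and B stamp a packet; a downstream verifier checks both.
keys = {"A": b"key-of-realm-A", "B": b"key-of-realm-B"}
packet_id = hashlib.sha256(b"packet contents").digest()
stamps = {realm: make_stamp(packet_id, key) for realm, key in keys.items()}
assert chain_is_valid(packet_id, stamps, keys)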

The advantages of such knowledge, says Walfish, should be considerable. Senders, for instance, could contract with intermediate providers for a kind of expressway through the Internet. Recipients would have an easier time sorting their incoming traffic into different levels of priority depending on the routes the packets took.

Perhaps the greatest advantage of adopting a system like ICING, says Walfish, would come in the area of security. Targets of various kinds of Internet attacks, like denial-of-service attacks, would be able to sever traffic from their attackers faster and with much greater precision. Governments would be able to set up channels of communication that pass through only well-vetted and highly trusted service providers. Internet security companies could, from anywhere in the world, inspect your traffic for viruses.

“Right now,” says Walfish, “there are ways to deal with attackers, but they’re crude, and they’re reactive. Once the traffic enters the victim’s network link, you’re hosed. All you can do is shut it all down. It would be like if you had a huge line of people coming into your office, not letting you get work done. You could kick them all out, but you still wouldn't get any work done because you’d spend all your time kicking them out. What you really need is for them to not show up in the first place.”

ICING, says Walfish, would also prevent “IP hijacking,” a kind of attack in which a network provider redirects net traffic by falsely “advertising” that it holds a given IP address or by claiming to offer a more direct route to that address. Such IP hijackings can be globally disruptive. In 2008, for instance, the Pakistani government sought to block videos containing the controversial Danish cartoons that depicted Mohammed. The result was a global shutdown of YouTube for more than an hour. Last year, it’s believed, China Telecom was able to capture 15% of the world’s Internet traffic, for 18 minutes, by falsely claiming to be the source of more than 30,000 IP addresses.

“There are multiple reasons why this wouldn’t happen in ICING,” says Walfish. “First, in ICING, the contents of the advertisement and the name of the advertised destination are tightly bound; lie about one, and the other looks invalid. Second, because packets must respect policy, a packet taking an aberrant path will be detected as such.”
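
The first point, that the advertisement and the destination’s name are tightly bound, can be sketched by assuming names are derived from public keys, a self-certifying naming scheme; the SHA-256 derivation below is our illustrative assumption, not necessarily ICING’s:

# Sketch of a tightly bound advertisement check: if a destination's name is
# derived from its public key, an advertiser that lies about either the name
# or the key produces a pair that fails the check. The SHA-256 derivation is
# an assumption for illustration.
import hashlib

def name_from_key(pubkey: bytes) -> bytes:
    """Derive a self-certifying name from a destination's public key."""
    return hashlib.sha256(pubkey).digest()

def advertisement_is_consistent(advertised_name: bytes, pubkey: bytes) -> bool:
    """Lie about the name or the key, and the pair looks invalid."""
    return name_from_key(pubkey) == advertised_name

# A hijacker advertising someone else's name with its own key is detected:
victim_key = b"victim public key"
hijacker_key = b"hijacker public key"
assert advertisement_is_consistent(name_from_key(victim_key), victim_key)
assert not advertisement_is_consistent(name_from_key(victim_key), hijacker_key)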

ICING and its parent project, NEBULA, make up one of four multi-institutional projects being funded by the National Science Foundation’s Future Internet Architecture (FIA) program. The point of the FIA program, and of the efforts of Walfish and his colleagues, is to step back from the day-to-day challenges of managing the flow of information on the ’net and think more fundamentally about what kind of architecture the Internet should have going forward.

“Where ICING was born, I think,” says Walfish, “was in the realization my teammates and I had that while there was a consensus about what kinds of things needed to change, and there were a lot of proposals to make those changes, all the proposals seemed to be mutually exclusive. They all required the same space in packets. It would be like if your bike was out of date and someone said, oh, you can get this really cool feature if you just replace your front wheel with this wheel, and then someone else came along and said, oh, you can get this other really cool feature, but you have to replace your front wheel with this wheel. Well, you can only have one front wheel. So what we set out to do was to design a much more general-purpose mechanism where you could get all these properties without their conflicting with each other, and that’s what I think we’ve done.”


Comments

Guest - Anonymous on Wednesday, 09 February 2011 11:16

Expressway? So what are the implications of ICING for net neutrality?

Guest - Michael Walfish on Monday, 14 February 2011 16:50

ICING is an architecture and a mechanism; as such, it is agnostic about net neutrality, just as the current Internet architecture is agnostic. In the current architecture, providers can and do give preferential service to flows (or drop them) based on the endpoints of the flow and, in some cases, based on the content of the flow. Under ICING, in contrast, a sender can choose a path through the network that avoids the discriminating provider; conversely, if a sender gets a provider's approval to carry traffic along a path, and then the provider later reneges, the sender has a proof of the approval, which creates a foundation for contracts and legal action.

To answer the original question, both the current Internet architecture and ICING permit a space of policies (though the space is richer under ICING), and network neutrality corresponds to eliminating some of those policies. The authority to eliminate policies this way derives from laws, which can apply to either the current architecture or ICING. Without regulation, what would happen? We don't know, but at least ICING allows more choice for endpoints.
