Roy Fielding, Original Author of the HTTP/1.1 Protocol, Comments on Google's SPDY Protocol

Posted by 李錕 on 2012-06-26

Source: http://www.simple-talk.com/opinion/geek-of-the-week/roy-fielding-geek-of-the-week/

RM: As part of the Let’s Make the Web Faster initiative, Google is experimenting with alternative protocols to help reduce the latency of web pages. One of these experiments is SPDY (an application-layer protocol for transporting content over the web, designed specifically for minimal latency). Can you see SPDY playing a role in addition to HTTP as a next-generation protocol?

RF: SPDY is an ongoing experiment in various protocol designs. It may get to the point where it is a serious alternative to HTTP, but right now SPDY suffers from a myopic view of protocol development. Latency is an important design concern, but the best way to improve latency is to not use the protocol at all. In other words, use caching.
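To make the caching Fielding means concrete, here is a minimal sketch of an origin server emitting cache metadata, using Python's standard http.server. The handler, body, and ETag value are illustrative assumptions, not anything from the interview: a response marked fresh with Cache-Control can be reused by a browser or shared proxy with no request at all, and a stale copy can be revalidated with a cheap 304.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

BODY = b"hello, cacheable world\n"
ETAG = '"v1"'  # opaque version token; any stable hash of BODY would do

class CachingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Conditional request: the client already holds this version,
        # so answer 304 and skip retransmitting the body.
        if self.headers.get("If-None-Match") == ETAG:
            self.send_response(304)
            self.send_header("ETag", ETAG)
            self.end_headers()
            return
        self.send_response(200)
        # Fresh for one hour: during that window a cache that holds this
        # response need not contact the server at all.
        self.send_header("Cache-Control", "public, max-age=3600")
        self.send_header("ETag", ETAG)
        self.send_header("Content-Length", str(len(BODY)))
        self.end_headers()
        self.wfile.write(BODY)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), CachingHandler).serve_forever()
```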

SPDY can only be an improvement over HTTP if it works at least as well as HTTP for layered caching. However, the designers seem more interested in limiting the protocol in other ways, such as requiring encryption and tunnelling through intermediaries. If that continues, then I think SPDY will only be of interest to authenticated services that don't want shared caching and can afford the infrastructure demands of per-client long-term connections (e.g., Google's web applications).
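The "layered caching" at stake is specifically the shared kind: an intermediary that stores one client's response and serves it to the next. The toy sketch below (the class and its simplified max-age parsing are invented for illustration) shows the mechanism; the key point is that it only works if the intermediary can read the messages, which a protocol with mandatory end-to-end encryption prevents.

```python
import time

class SharedCache:
    """Toy shared (proxy-level) HTTP cache keyed by URL."""

    def __init__(self):
        self._store = {}  # url -> (expires_at, response_bytes)

    def get(self, url):
        entry = self._store.get(url)
        if entry and entry[0] > time.time():
            return entry[1]          # hit: the origin is never contacted
        return None                  # miss or expired

    def put(self, url, response_bytes, cache_control):
        # Honor only a simplified "max-age=N" directive; a real cache
        # obeys many more rules (Vary, no-store, authentication, ...).
        for directive in cache_control.split(","):
            directive = directive.strip()
            if directive.startswith("max-age="):
                ttl = int(directive.split("=", 1)[1])
                self._store[url] = (time.time() + ttl, response_bytes)

# One client's response becomes every later client's fast answer:
cache = SharedCache()
cache.put("http://example.com/logo.png", b"<png bytes>", "public, max-age=60")
assert cache.get("http://example.com/logo.png") == b"<png bytes>"
```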

It is certainly likely that something similar to HTTP+SPDY will eventually replace HTTP/1.1 as the primary Web protocol. There is simply too much inefficiency in the HTTP/1.x wire syntax for it to remain the dominant protocol. I started working on one such alternative back in 2001, which I call the waka protocol, but I chose not to use a public standards process to develop it.

RM: Is one of the problems with SPDY that it might hammer servers for resources faster than the current browser protocols, so that servers already operating near capacity will be easily overloaded and need more hardware?

RF: Keeping in mind that SPDY is still very much an experiment, the current design is not amenable to layered services. In other words, it is too hard for intermediaries to look at the messages and quickly determine which server should handle the request, which is something that is essential to all Internet-scale services. I suspect that the Google engineers are already being taught that lesson by their operations folks, so it wouldn't surprise me if the design changed substantially in the near future.
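What "layered services" means in practice: an intermediary that reads just enough of a plaintext HTTP/1.1 request to decide which backend should handle it. Below is a rough sketch under assumed names (the routing table and addresses are invented); with an always-encrypted protocol, those first bytes are ciphertext, and this kind of routing requires terminating TLS at the intermediary instead.

```python
import socket

BACKENDS = {                    # hypothetical routing table
    "static.example.com": ("10.0.0.10", 80),
    "api.example.com":    ("10.0.0.20", 80),
}

def pick_backend(first_bytes):
    """Find the Host header in the request head and map it to a backend."""
    for line in first_bytes.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            value = line.split(b":", 1)[1].strip()
            host = value.split(b":")[0].decode("ascii", "replace")  # drop :port
            return BACKENDS.get(host)
    return None

def serve(listen_addr=("0.0.0.0", 8080)):
    srv = socket.create_server(listen_addr)
    while True:
        client, _ = srv.accept()
        # Peek at the request head without consuming it, so the chosen
        # backend still receives the complete, untouched request.
        head = client.recv(4096, socket.MSG_PEEK)
        backend = pick_backend(head)
        if backend is None:
            client.close()          # unknown host: refuse
            continue
        upstream = socket.create_connection(backend)
        # ... relay bytes between client and upstream (omitted) ...
```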

The following is Roy Fielding's view (http://lists.w3.org/Archives/Public/ietf-http-wg/2012JanMar/0970.html):

I've never considered SSL to be a means of securing the protocol. It does a decent job of hiding the exchange of data from passive observers, but the way that typical user agents handle certificate management lacks what I would consider a secure protocol.

In any case, the notion that every user wants a secure protocol is irrelevant. There are many examples of HTTP use, in practice, for which SSL/TLS is neither desired nor appropriate. Even simple things, like the exchange that Apple devices use to discover network access point logins, cannot work with an assumption of SSL/TLS. Likewise, many uses of HTTP are in kiosks, public schools, libraries, and other areas for which your concern as a user is less important than the organization's responsibility to prevent misuse.
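The Apple exchange Fielding mentions is a captive-portal probe, and it works precisely because it is plain HTTP: the access point can substitute its login page for the expected response, and the client notices the substitution. A simplified sketch follows, using Apple's well-known probe URL (the detection logic here is an illustrative reduction, not Apple's actual implementation):

```python
import urllib.request

# Apple's probe endpoint; the expected canned body contains "Success".
PROBE = "http://captive.apple.com/hotspot-detect.html"

with urllib.request.urlopen(PROBE, timeout=5) as resp:
    body = resp.read()

if b"Success" in body:
    print("direct Internet access")   # got the expected canned page
else:
    print("captive portal detected")  # the network rewrote the response
```

Over TLS, the portal could not rewrite the response without breaking the connection outright, so this discovery mechanism has no equivalent in an encrypted-only protocol.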

There are ways to have both a secure protocol and visibility for intermediaries, but we don't have to agree to any of these "requirements" up front. If the protocol proposals can't stand for themselves, then I have no need for a new protocol.

....Roy

The following is the view of Mike Belshe, author of the SPDY protocol (http://lists.w3.org/Archives/Public/ietf-http-wg/2012JanMar/0971.html):

We're at a crossroads here, which comes down to goals.

On one hand, we have the opportunity to help users have secure access to the web, always.

On the other hand, we can continue to allow websites to be unsecured, creating privacy and security risks for users.

I challenge you to find a single user on the web that wants an unsecured Internet. I don't think these users exist. Everyone wants privacy and security; and most users don't realize they don't have it. I know there are websites which want to minimize their capital expenditure costs, even if it puts their users at risk. We could cater to the websites - or we could cater to the users.

Which is more forward looking? Which road do you want to take?

In all other products, security is not an option. It's a requirement. Users expect it and users need it. How can anyone seriously argue to not even try in our protocols? If not now, when would you argue to start trying? Never?

Two last things.

First, whatever we define today takes years to deploy. CPU costs continue to go down at a remarkable rate. Moore's law over the last decade has definitely made SSL viable on cheap hardware like never before. We should be designing for going forward - where CPU costs continue to shrink. If a website is willing to put its users at risk (which is downright irresponsible if you ask me), let them use HTTP/1.1. The future should be secure.
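Belshe's cost argument is easy to probe empirically. The sketch below times a bare TCP connection against a full TLS handshake to the same host; the host name and sample count are arbitrary assumptions, and the numbers mix CPU work with network round trips, so treat them as rough orders of magnitude rather than a clean CPU benchmark.

```python
import socket
import ssl
import time

HOST, PORT, N = "www.google.com", 443, 20   # any HTTPS host; 20 samples

def tcp_connect():
    with socket.create_connection((HOST, PORT), timeout=5):
        pass

def tls_connect():
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=HOST):
            pass  # the handshake completes inside wrap_socket

for name, fn in (("tcp", tcp_connect), ("tls", tls_connect)):
    start = time.perf_counter()
    for _ in range(N):
        fn()
    avg_ms = (time.perf_counter() - start) / N * 1000
    print(f"{name}: {avg_ms:.1f} ms per connection")
```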

Second, SSL will remain substandard on performance until we make it a requirement. It's a self-fulfilling prophecy that SSL implementations are slow as long as we don't take them seriously.

Mike

PS - if you don't believe me about the importance of security, look to all the major content providers for social activity today. Google and Twitter are already 100% SSL. Facebook and Microsoft are not far behind. Users need security and they need it now. We should stop talking about it as though it's optional.

A better argument would be "is SSL the right way to secure the web?", and not, "should we secure the web?".

I'll leave this as a placeholder for now and come back to explain it later.
