SPeeDYing up the web: one small protocol at a time!


Google recently proposed a new protocol for the web called SPDY that would practically replace HTTP as the underlying application-level protocol that web clients and web servers use to communicate with each other.


Although SPDY still uses much of HTTP, it has a different connection philosophy, header compression, and encodings than plain-vanilla HTTP. The idea is to speed up the web and make it faster, a noble idea indeed, and to do it in such a way that it can be adopted successfully without requiring drastic infrastructure changes. In this post I would like to address why the idea is likely to fail.
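To get a feel for the header-compression part, here is a rough sketch in Python. SPDY compresses header blocks with zlib (the real protocol uses a preset dictionary shared between client and server; plain deflate is used here, and the header sample is invented, just to illustrate how much redundancy typical request headers carry):

```python
import zlib

# An illustrative sample of verbose, repetitive HTTP request headers.
raw_headers = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36\r\n"
    b"Accept: text/html,application/xhtml+xml,application/xml;q=0.9\r\n"
    b"Accept-Encoding: gzip, deflate\r\n"
    b"Accept-Language: en-US,en;q=0.8\r\n"
    b"Cookie: session=abc123; tracking=xyz789\r\n\r\n"
)

# SPDY-style header compression: a single deflate pass already shrinks
# this block noticeably; across many requests on one session (where the
# compressor state is shared) the savings compound further.
compressed = zlib.compress(raw_headers)
print(len(raw_headers), "->", len(compressed), "bytes")
```

Whether those saved bytes translate into user-visible speed, as opposed to extra CPU work on both ends, is exactly the kind of tradeoff the published results do not break out.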

Google claims that in its lab setup SPDY speeds up the downloading of the top 25 websites by up to 55%, but I would like to show why the data they have put up paints a different picture.

Firstly, let us understand broadly how this is supposed to work: user agents (browsers) will need to change to support both SPDY and HTTP. Google has released the code of a SPDY-capable version of Chrome as a first step in this direction.

But it is no good for a client to support SPDY if the servers do not. Thus all, or at least a majority, of servers will have to support SPDY and become SPDY servers. Not everyone (each client and server) will adopt SPDY overnight, so we will live in a world where some clients support SPDY and some only HTTP, and likewise for servers.

But a clean split is not possible: all servers supporting SPDY will also have to support HTTP to take care of legacy user agents, and all SPDY-enabled browsers will also have to support HTTP to interact with, and render content from, legacy servers.

This means that for legacy HTTP traffic things actually become worse. The SPDY client will first initiate a SPDY connection or session, possibly on a different port, and only when refused will it initiate a traditional HTTP connection on port 80. Or perhaps TCP port 80 remains unchanged and the protocol version string alone distinguishes SPDY from traditional HTTP such as HTTP/1.1; but since SPDY is not backward compatible, I foresee at least two requests being made before the client realizes that plain HTTP, not SPDY, must be used.
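The fallback dance above can be sketched as follows. The port numbers and probe logic are my assumptions for illustration, not Chrome's actual behavior; even in this simplified form, a legacy server costs the client an extra failed connection attempt before the ordinary HTTP request can go out:

```python
import socket

def open_connection(host, spdy_port=443, http_port=80, timeout=2.0):
    """Try SPDY first; fall back to plain HTTP if the server refuses.

    spdy_port=443 is an assumption for this sketch; any dedicated
    SPDY port has the same fallback cost for legacy servers.
    """
    try:
        sock = socket.create_connection((host, spdy_port), timeout=timeout)
        # In reality the client would still have to complete a SPDY
        # handshake before knowing the server truly speaks SPDY, so a
        # wrong guess costs at least one extra round trip.
        return sock, "spdy"
    except OSError:
        # Legacy server: retry with a traditional HTTP connection.
        sock = socket.create_connection((host, http_port), timeout=timeout)
        return sock, "http"
```

Note that the failure path is the common path as long as most servers are legacy HTTP servers, which is precisely the transition period the published benchmarks skip over.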

I am stressing this because the results Google has published are for either pure SPDY or pure HTTP; what would give a better picture is a load test under conditions where only a minor share of the traffic is SPDY and most is still HTTP. Load results are also important because they would let us gauge the effect of parallelized requests on the server. The parallel streams within a SPDY session may reduce latency for the client, but if the overhead on servers is significant and leads to higher CPU consumption on the server side, that is going to affect the client experience too under load.
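To make the parallel-streams point concrete, here is a toy model (my own illustration, not SPDY's actual framing) of how several response streams get chopped into frames and interleaved over a single connection. The per-stream chunking and round-robin scheduling shown here is exactly the extra bookkeeping a server must now perform on every connection:

```python
def interleave(streams, frame_size=4):
    """Round-robin frames from several response streams onto one wire.

    streams: dict mapping stream-id -> response bytes.
    Returns a list of (stream_id, frame) tuples in send order.
    """
    # Split each response into fixed-size frames.
    chunked = {
        sid: [data[i:i + frame_size] for i in range(0, len(data), frame_size)]
        for sid, data in streams.items()
    }
    frames = []
    # Take one frame per stream per round until all streams are drained.
    while any(chunked.values()):
        for sid, chunks in chunked.items():
            if chunks:
                frames.append((sid, chunks.pop(0)))
    return frames

# A long response and a short one share the connection: the short one
# completes after the first round instead of waiting its turn, which is
# the client-side latency win -- bought with server-side scheduling work.
frames = interleave({1: b"x" * 12, 2: b"ok"})
```

In HTTP/1.x, by contrast, the short response would be sent only after the long one finished (or on its own connection); the win for the client is real, but so is the added state the server must track.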

SPDYing up the web is not a one-dimensional problem of reducing latencies. There are tradeoffs involved, such as the effect on servers, on network traffic, and so on.

Secondly, they plan to use SSL as the transport, which adds additional overhead and latency. They have not provided results for multi-domain SSL SPDY, which would be the most general case, and I suspect that is because the results were not encouraging.

Choosing SSL and TCP over SCTP seems strange: if they think they can influence all web admins to change their servers, they could perhaps dream bravely of influencing infrastructure providers to support SCTP stacks as well. SSL also has problems with caching (which they acknowledge), and with caching compromised, the speed of the web as experienced by users would again be adversely affected.

Another surprising finding from digging through their results is that the speed improvement when packet loss is assumed to be 0% is only about 10%. As packet loss increases, the SPDY benefit increases. Perhaps the benefit they are seeing comes mostly from the TCP fast retransmit they are using in SPDY.

The practical problem of convincing end users, web server admins, and infrastructure providers to change and adopt something better is definitely there; the bigger problem is to convince skeptics like me that the new protocol will indeed lead to substantial savings in speed and latency. That can be done only by being truly transparent. I know that SPDY is very preliminary, and it is good that Google has opened it to the developer community at large, but reinventing the web's protocols requires much more thinking and due diligence than can be done behind closed doors (even if the closed doors are Google's).

Opening up SPDY is the right thing; creating hype around a 55% increase in speed is perhaps not. Hopefully the hype will push web pundits to think more deeply about how to make the web ‘actually’ faster.

  1. Arun Prabhudesai says

    Hey Sandy, probably the most technical post this blog has seen :) . However, I agree with you – getting SPDY accepted by the masses is going to be extremely difficult. Just take the example of IE6 – even with everyone bashing it and knowing how bad a browser it is, 60% of the traffic on the web still uses IE6.

    Having said that – it's Google, and they have done many things that seemed impossible earlier. Let's wait and watch whether they succeed in this.
