Resources loaded sequentially instead of in parallel with Chrome and SPDY


I have some weird results that I don’t understand.

The test shows that all resources (jpg, webp, css) are loaded one after another, with almost no parallel downloads. That is, they do start simultaneously (same start offset) but then have a long time to first byte. Is the time to first byte waiting on the download of the previous resources, or something else?

I don’t know whether WebPagetest makes use of SPDY. The web server checks whether the client is SPDY-capable and then sets a header for the backend server, which runs mod_pagespeed. If the client is SPDY-enabled, all resources go over one domain; if not, they are sharded across 3 hostnames.
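A rough sketch of that kind of capability check on the frontend Apache, to make the setup concrete. The header name and the "SPDY" environment variable are assumptions for illustration, not the actual config:

```apache
# Hypothetical sketch only: header and environment-variable names
# are assumed, not taken from the poster's real configuration.
<IfModule mod_headers.c>
    # Tell the backend (mod_pagespeed) that this client negotiated SPDY,
    # so resources can be served from a single unsharded domain.
    RequestHeader set X-Client-SPDY "1" env=SPDY
</IfModule>
```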

Disabling mod_pagespeed does not help, same issues. Same applies to removing varnish caching from the chain.

If I test with my client at home, with 10 ms latency, the site loads in about 300 ms total. The WebPagetest runs were done without bandwidth limitations (native connection).

I checked whether it could be a result of latency amplification, i.e. the more latency, the bigger the problem. I checked all equipment for duplex mismatches and none were found.

Some WebPagetest servers I used have a latency of 1 ms to the website, and they also show similar results. Why do these differ from my home situation? Running the test with IE seems to do better (with regard to the waterfall), with the domain sharding in effect.

However, on the sharded setup I see TLS negotiations on a whole lot of resources, not just 3 times (once per sharded hostname), but I guess that is due to the opening of new connections.
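A back-of-the-envelope count shows why the handshakes scale with connections rather than hostnames. All numbers below are illustrative assumptions, not measurements:

```python
# Illustrative numbers only: browsers open several parallel connections
# per hostname, and each NEW connection performs its own TLS handshake,
# so sharding multiplies the handshake count.
conns_per_host = 6     # typical per-hostname connection limit (assumed)
hosts = 3              # sharded hostnames
handshake_rtts = 2     # TLS 1.2 full handshake costs roughly 2 round trips
rtt_ms = 10            # assumed round-trip time to the server

handshakes = conns_per_host * hosts          # 18 handshakes, not 3
per_conn_cost_ms = handshake_rtts * rtt_ms   # ~20 ms per new connection

print(handshakes, per_conn_cost_ms)
```

The handshakes mostly happen in parallel, so the wall-clock cost is closer to one handshake than eighteen, but each one still shows up in the waterfall.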

The results can vary a lot between tests even though nothing changed on the servers.

Any suggestions?

WebPagetest launches the actual browser, so if the browser supports SPDY (Chrome, Firefox, IE 11) then SPDY will be used, and that’s pretty much what this looks like. When you test at home, is it with a clear browser cache (just making sure all of the resources are actually loaded)?

The Dublin location is running on small AWS instances (I believe) and it looks like you’re CPU-constrained. If you test from the “Dulles Thinkpad” location it will run on reasonably modern actual hardware (3rd-gen i5s), though it will have more latency:

Even on the faster hardware it still looks like the individual resources take longer than you’d expect. Having them serialized isn’t unexpected as long as the server isn’t waiting on anything: it will usually write to its local buffers until they fill, and as long as it can keep the pipe full, serialized should be just as fast as parallel/interleaved. Some SPDY implementations will round-robin within a given priority level, but I don’t know what you’re using (is it mod_spdy?).
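A toy model of that point, with made-up sizes and bandwidth: if the pipe is always kept full, the last byte of the last resource arrives at the same moment whether delivery is serialized or interleaved.

```python
# Toy model: three resources over one pipe that is always kept full.
# Sizes and bandwidth are illustrative assumptions.
bandwidth = 1_000_000                  # bytes/sec
sizes = [200_000, 300_000, 500_000]    # three resources

# Serialized: each resource waits for the previous one to finish.
finish = []
sent = 0
for size in sizes:
    sent += size
    finish.append(sent / bandwidth)    # completion time of this resource

# The long waits on later resources are not idle time: the pipe is busy
# the whole while, and the final byte still lands at the same moment an
# interleaved scheme would deliver it.
total_interleaved = sum(sizes) / bandwidth
assert finish[-1] == total_interleaved == 1.0
```

Interleaving only changes which resource’s bytes arrive first, which matters for things like progressive images, not for total page bytes.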

I’m using mod_spdy and mod_pagespeed. The setup is like:

[internet] > firewall > | apache, mod_pagespeed > varnish cache | apache, mod_spdy | mariadb galera multi-master |

mod_pagespeed uses memcached, and many functions on the site get their data from memcached as well, so access to the DB servers is minimal.

Latest test results @

Some improvements from disabling some Apache modules and changing the order in which CSS and JS files load.

To my understanding the green bars are bad: looking at the offset, the client (WebPagetest) wants to download but is waiting, either for other downloads to finish or possibly because of a server configuration issue that delays the start of that transfer.

I would expect a couple of short green bars, then a couple of blue ones below them, as in a number of images downloading simultaneously.

The green bars aren’t necessarily bad in the case of SPDY; that’s more of an issue for HTTP 1.x connections, where only one resource request is in flight at a time. The longer green bars are long simply because other resources are already filling up the pipe; it’s not idle.

From the looks of it, mod_spdy is serializing the responses, so you won’t see them in parallel. That’s not necessarily bad (though for progressive JPEG images it would be nice to have the delivery interleaved).