Content download time relatively high

Hi,

I was looking at http://www.webpagetest.org/result/150904_G3_E6F/2/details/ to improve the start render, and my attention was drawn to the relatively high-looking content download times of the first CSS/JS files, including the external ones.

I was wondering if someone could explain to me what influences these download times (in most cases). Based on my own research, I came to the idea that maybe the initcwnd could be the problem, because the initial packets sent are too small?

(Tuning initcwnd for optimum performance - CDN Planet)

I'd love to see your additions/ideas (for this particular website or in general).

Joyful regards,

Daan

I would look at merging all the JS files into a single file and doing the same for the CSS. You'll get a fairly big win there.

Not only will you reduce the number of HTTP requests required, you'll also get more out of HTTP compression: bigger files typically have a higher compression ratio overall, so the merged files will compress more efficiently to boot.
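For the compression side, a minimal nginx setup might look something like this (a sketch; the values are my assumptions, tune them for your own content):

    # Sketch of nginx gzip settings (values are assumptions)
    gzip on;
    gzip_comp_level 5;       # 1-9; 5 is a common speed/size trade-off
    gzip_min_length 256;     # skip tiny responses where gzip overhead isn't worth it
    gzip_vary on;            # send Vary: Accept-Encoding for intermediate caches
    # text/html is compressed by default; list the merged CSS/JS types explicitly
    gzip_types text/css application/javascript application/json image/svg+xml;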

Given that you’re using nginx over SSL, I recommend that you enable SPDY. It’s only a couple of lines in the config!
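Something along these lines (a sketch, assuming an nginx build with the SPDY module, 1.5.10 or newer for SPDY/3.1; the server name and certificate paths are placeholders):

    server {
        # 'spdy' on the listen line enables SPDY alongside plain HTTPS
        listen 443 ssl spdy;
        server_name example.com;                      # placeholder
        ssl_certificate     /etc/nginx/ssl/site.crt;  # placeholder paths
        ssl_certificate_key /etc/nginx/ssl/site.key;
    }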

If you do that, then it would be a good idea not to load standard files (jQuery, etc.) from third parties, but to serve them from your own site.
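Concretely, that’s just swapping the CDN reference for a local copy, e.g. (the local filename and path are placeholders):

    <!-- before: third party, costs a DNS lookup plus a new TCP/TLS connection -->
    <script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
    <!-- after: rides on your existing (SPDY-multiplexed) connection -->
    <script src="/js/jquery-1.11.3.min.js"></script>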

However, in the end you’re downloading 750KB over 85 files in under 2 seconds on a 5mbit link. A calculator tells me that’s about 6mbit of data (750KB × 8), so at 5mbit/s the transfer alone needs at least ~1.2 seconds; you’re already close to saturating the connection. If you want to see what your server is capable of, use the native connection…

+1 to SPDY (or better yet, HTTP/2). You might need a reverse proxy like H2O in front of nginx, but it will make a world of difference because it eliminates the dead time spent setting up new connections.

Also, spend some time here: https://istlsfastyet.com/

In particular, watch the Velocity video linked at the end of the page (and the slides). There is a ton of tuning that needs to be done to eliminate round trips from the TLS handshake times.
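On the nginx side, the usual suspects are session resumption, OCSP stapling and the TLS record size; a sketch (the values are my assumptions, the istlsfastyet material covers the reasoning):

    # Sketch of nginx TLS tuning (values are assumptions)
    ssl_session_cache shared:SSL:10m;  # resumed sessions skip a full handshake round trip
    ssl_session_timeout 24h;
    ssl_stapling on;                   # OCSP stapling saves the client a separate lookup
    ssl_stapling_verify on;
    resolver 8.8.8.8;                  # needed for stapling; placeholder resolver
    ssl_buffer_size 4k;                # smaller TLS records let the browser parse sooner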

If you look at the bottom of the waterfall, there is a bandwidth utilization chart; while it does eventually ramp up and saturate the link, there is a lot of dead time at the beginning that can be tuned away.
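On the initcwnd question from the first post: on Linux you can raise the initial congestion window per route, e.g. (the gateway and device names below are placeholders, and recent kernels already default to 10, so check before changing anything):

    # Show the current default route, then raise initcwnd/initrwnd on it (run as root)
    ip route show default
    ip route change default via 192.0.2.1 dev eth0 initcwnd 10 initrwnd 10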

As GreenGecko mentions, you’ll also want to move the jQuery UI code to your own server.

Thanks a lot guys! I’ll definitely put these tips into practice! jQuery on our own server + merging seem like good quick wins. I’ll check the TLS best practices as well!

I’m eager to see what the SPDY / HTTP/2 protocols can bring in terms of results. The best practices that come with them sound interesting, to say the least.

The hosting provider of our web design company is actually discouraging us from using HTTP/2 on Apache on their servers (a different project than the WebPagetest above) because of a lack of support. Do you guys know how much truth there is to that in general?

My own searches seem to suggest that HTTP/2 is ready to be implemented if you’re up for it… so I’m not sure they’re giving us the best advice.

Chrome, Firefox, Edge and IE on Windows 10, and Safari 9 all support HTTP/2. Safari 8, IE 10, Chrome and Firefox support SPDY, though it is deprecated and being phased out.

Google, Twitter and Facebook all have it deployed at scale and have been running some flavor of it throughout its development, and yes, it is very ready on the client side.

Server support is a lot spottier, as none of the major web servers support HTTP/2 natively (or if they do, it’s very recent and probably not well shaken out yet). Usually you’ll deploy it as a reverse proxy in front of the web server. The best implementation I’m aware of currently is h2o: https://h2o.examp1e.net/
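To give an idea of the shape of it, an h2o front end that terminates TLS and HTTP/2 and proxies to an nginx instance on a local port might look roughly like this (hostnames, ports and certificate paths are placeholders):

    # Sketch of an h2o reverse proxy in front of nginx (placeholder values)
    listen:
      port: 443
      ssl:
        certificate-file: /etc/h2o/site.crt
        key-file: /etc/h2o/site.key
    hosts:
      "example.com":
        paths:
          "/":
            proxy.reverse.url: "http://127.0.0.1:8080/"  # nginx listening here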