Considering the majority (60%) of our visitors still use browsers with a limit of 2 connections per domain, I've been optimizing for those first. However, I've started testing for the other end (40%) who use IE8, which allows a maximum of 6 connections per domain (for HTTP 1.1).
Normally this should mean faster performance, but each test I've run ends up producing worse results: http://www.webpagetest.org/result/100625_eac89c32602af9c9c73da3eb624216f6/1/details/
If you look at the connection view, it's spaced out like crazy. Why am I getting worse results? Is there a way I can maximize performance for both 2 and 6 connections?
I should add that the desktop version of Pagetest shows what you'd expect: the opposite. IE in IE8 mode (using the Web Developer toolbar) tends to be quite a bit faster than in IE7 mode.
Want some fun? Try Firefox 3.6, which bumped connections up to 16 - YIKES!
IE7 mode in IE8 applies to the rendering engine, not the network connections (to the best of my knowledge).
One thing you might try (if possible) is moving that inline script that comes right after the CSS. I'm not sure whether the browser would start downloading images earlier if it didn't hit that script, but it's worth testing.
Aside from that, the only other suggestion I have (if you feel like getting crazy) would be to deliver the images directly in the CSS (or in an external resource file): MHTML for IE and data URIs for the browsers that support them. It gets more complicated because you have to do per-browser logic, but you'll see some crazy-fast speedups, as all of the images will be downloaded in one request (and without resorting to sprites - regular images). A minimal sketch of the data URI half is below.
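To illustrate, here is a hedged sketch of what a build-step helper for the data URI approach could look like. The file name, selector, and helper name are made up for the example; a real version would also need MIME-type detection and a size check:

```python
import base64

# Hypothetical build-step helper: turn a small image into a CSS rule that
# embeds it as a base64 data URI, so it ships inside the stylesheet instead
# of costing its own HTTP request.
def data_uri_rule(selector, image_path, mime="image/png"):
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    # IE8 caps data URIs at 32KB, so this only makes sense for small images.
    return "%s { background-image: url(data:%s;base64,%s); }" % (
        selector, mime, encoded)

# Example: one embedded rule per thumbnail, all delivered in one stylesheet.
print(data_uri_rule(".thumb-1", "thumb1.png"))
```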
Wow, didn't know that. That's a tad excessive in my opinion.
I could have sworn it also changed the number of concurrent connections. I'll run another test later to be sure.
Which script exactly? The only thing I see is the `lt IE 7` conditional comment - can those cause image downloading to be blocked? Keep in mind the link in the test goes straight to our old, currently live, site. On the old site there is some inline JavaScript right after the CSS, but it has been removed on our new site (the one in that test).
Yeah, I've read up on CSS data embedding, but considering it only works in IE8 (for the IE browser family), it's not worth the extra development effort to create and maintain such a system. I'm sticking with CSS sprites for now.
Or you can google:
“css data embedding”
“css data URI”
…etc
Actually, it's not as well known, but IE supports multi-part documents (MHTML). I'm not sure how far back the support goes, but I know at least IE6, 7 and 8 all support it.
Not data URIs but MHTML (MHT) packages - I haven't heard of a problem with sizes, but you'd probably only want to do it with smaller images anyway (like thumbnails). A sketch of what building such a package could look like is below.
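Since MHTML is just a multipart/related MIME document, Python's standard email module can assemble one. This is a hedged sketch, not a tested IE recipe; the file names and the hosted URL in the comment are assumptions:

```python
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart

# Bundle several small images into one multipart/related (MHTML) package.
package = MIMEMultipart("related")
for name in ("thumb1.png", "thumb2.png"):
    with open(name, "rb") as f:
        part = MIMEImage(f.read(), "png")  # base64-encoded automatically
    # The value after "!" in the CSS reference must match this header, e.g.
    #   background-image: url(mhtml:http://example.com/images.mht!thumb1);
    part.add_header("Content-Location", name.rsplit(".", 1)[0])
    package.attach(part)

with open("images.mht", "wb") as out:
    out.write(package.as_bytes())
```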
Yes, it's a lot of work and not for the queasy (it's one of the more extreme optimizations). There are solutions that can do it for you automatically, but they are all paid solutions to the best of my knowledge. I know there are some solutions coming out soon where you don't have to control the hosting, though, so that could be an option (I prefer controlling everything myself, but that's just me).
According to the documentation, it looks like few browsers support MHTML, and those that do don't have a standardized way of doing so.
Opera works from version 9.0 onwards. Firefox requires an MHT plugin for users to be able to read and write MHTML pages. Safari has not supported MHTML since version 3.1.1. Konqueror has not supported MHTML since 3.5.7. Google Chrome removed the ability to view MHTML files in March 2010. It looks like it works in Internet Explorer 5 onwards. However, IE may have problems saving MHTML files if they contain scripts.
Sorry, I wasn’t clear (and this is why it’s not for the fainthearted)…
To inline images you basically need three code paths, chosen by the user agent string (see the sketch after this list):
1 - No embedding support - Reference the images as usual
2 - Data URI support - Embed appropriate images as data URIs
3 - MHTML support - Embed or reference the images in an MHTML package
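A rough sketch of that branching follows. Real user-agent sniffing is messier than this; the substring checks and version cutoffs here are illustrative assumptions, not production logic:

```python
def embedding_strategy(user_agent):
    """Pick one of the three code paths based on the UA string."""
    ua = user_agent.lower()
    if "msie 8" in ua:
        return "datauri"  # IE8 understands data URIs (32KB limit)
    if "msie" in ua:
        return "mhtml"    # older IE: fall back to an MHTML package
    if any(token in ua for token in ("firefox", "chrome", "safari", "opera")):
        return "datauri"  # non-IE browsers with data URI support
    return "plain"        # path 1: unknown agent, reference images as usual

# e.g. IE7 lacks data URI support, so it gets the MHTML path:
assert embedding_strategy("Mozilla/4.0 (compatible; MSIE 7.0)") == "mhtml"
```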
StrangeLoop has an accelerator appliance that can do it automatically (along with all of the other best practices and a few other extras). They also just released it as a no-install in-the-cloud version: http://www.strangeloopnetworks.com/products/site-optimizer-service/
Looks like ADC is the app-accelerator side of a CDN (Application delivery network - Wikipedia). I had to Google for it, though (which is probably a good indicator that they need to explain a bit more).
The main alternatives in this space that I'm aware of are Aptimize and AcceloWeb.
There are also a bunch of software solutions that handle bits and pieces (like W3 Total Cache for WordPress, etc.), but Strangeloop is the only one I'm aware of that inlines the images. They're also the only cloud-based solution that I know of (Aptimize and AcceloWeb require access to the Apache config itself or a dedicated box).
AcceloWeb does image inlining, using both data URIs and MHTML.
AcceloWeb runs in the cloud as well; while one of its configurations is a dedicated physical machine, there are other, cloud-based configurations too.
I would love to see some examples of the MHTML and data URI concepts.
From my understanding, both of these actually embed images within the main document.
If that is correct, the download time of each image is added to the main document's download time.
So basically the difference would be that the DNS lookup and/or time to first byte would not exist for the images, since they are embedded.
While the main document downloads, it is possible to have the images downloading as well, in parallel.
I am not seeing how the MHTML or data URI concepts would make the page faster, except if the time to first byte or DNS lookup time for the images is huge.
There is actually another advantage to these techniques, and it's bandwidth utilization. While a browser downloads multiple images, you have to take into account "dead" times when the connection is occupied but nothing is being downloaded yet (the time it takes to connect to the server plus the time it takes the server to send the first byte). Also, it takes the browser time to parse the HTML (and it is delayed even longer by blocking objects such as script tags). If you look at the waterfall chart of a typical web page, you'll see many "holes" where the bandwidth is not utilized well.
Combining the images into one big container solves this problem, because while that container is being downloaded the bandwidth is utilized well.
Also, you talk about time to first byte as if it were negligible.
If you take a typical media-rich web page, it has about 100 images (remember that all the background images are images too, and every rounded corner is an image). Assuming you don't use a CDN (and even then, it depends on how distributed the CDN is), you can get network latency of about 100ms when accessing US-based sites from other locations in the US. For international locations it's even worse. Also, the server needs time to process every request. This is relevant even for static images, as the server has a queue of requests, and the busier it is, the more time every request spends in the queue. Thus you can easily add 20-50ms of additional delay for every resource.
Now, if we do the math: 100 [images] * 100 [ms] + 100 [images] * 50 [ms] = 15 sec.
Now, of course some of this happens in parallel, because the browser maintains several simultaneous connections (and we can also talk about primed caches and less content-rich sites), but you can see that the sum of the penalties you pay by not combining images is measured in seconds, and that is a substantial penalty to pay. A rough worked version with parallel connections is below.
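To make the parallelism point concrete, here is the same arithmetic with an assumed connection count; the N = 6 is illustrative only, matching IE8's per-domain limit from the start of this thread:

```latex
\underbrace{100 \cdot 100\,\mathrm{ms}}_{\text{network latency}}
+ \underbrace{100 \cdot 50\,\mathrm{ms}}_{\text{server queueing}}
= 15\,\mathrm{s} \text{ (serial)}
\quad\Longrightarrow\quad
\frac{15\,\mathrm{s}}{N} \;\approx\; \frac{15\,\mathrm{s}}{6} \;\approx\; 2.5\,\mathrm{s}
```

Even under this optimistic best case, seconds of penalty remain, which is the point of combining the images.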
Yeah, what Leonid said (and sorry about the mis-statement on the cloud support - great to hear it is available).
The connection and request times are almost always what makes a site slow. If you go to ~slide 113 in Strangeloop's deck you can see the impact of combining for the Velocity web site (though it included CSS, JS and images in one step): the page load time went down from 8.3 seconds to 3.4 seconds. It is the same reason people use sprites, but data URI and MHTML support is more flexible and works for other images, not just background ones, which makes it really useful for things like product thumbnails.
In the case of MHTML you can reference the images in an external file, so you can group them or put them all in a single external file that can be downloaded in parallel with other activity (or, more importantly, cached). Data URIs aren't quite as flexible, but they can be inlined in the external CSS files.
It's not the first optimization I'd recommend, but once you have tackled the rest of the low-hanging fruit it is one of the optimizations that can still have a large impact on the remaining page load.
Also, before combining and inlining any resources (images among them), you should consider the downsides.
The two major downsides are:
They are difficult to maintain. Every time you change your page (other than small text changes), you might have to regenerate the combined/inlined resource.
They can hurt the efficiency of the browser's cache. Every time you change a single resource in the combined/inlined container, the browser has to download the entire container all over again, instead of just the changed resource.
Quick question, Patrick: did you see anything once you connected to the new site that may be causing IE8 specifically to behave slower than IE7? Any input would be appreciated; I find it annoying that I can't seem to get it running fast for both.