DNS Prefetching: Loading the DNS of the domains needed before they are required

I just read this interesting article in the column ‘Recent Industry Blog Posts’:

This is the first time I have heard that I can put a line with a domain name in the head of the page, and the browser then already looks up the IP address in the background. As a consequence, no time would be needed later for the DNS lookup.
So, this would really make sense …

What I found out is that at least Firefox 3.5+, Chrome 5+ and IE10 support this tag:
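Judging from the browser versions mentioned, this is presumably the standard dns-prefetch link hint (the hostname here is just a placeholder):

```html
<!-- asks the browser to resolve the hostname early; the protocol-relative
     URL works on both http and https pages -->
<link rel="dns-prefetch" href="//example.com">
```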

IE9 supports this tag:
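For IE9 this is presumably the prefetch hint, which IE9 treats as a DNS-only lookup (placeholder hostname again):

```html
<!-- IE9 does not know dns-prefetch, but rel="prefetch" triggers
     an early DNS lookup there -->
<link rel="prefetch" href="//example.com">
```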

This means that I would need these 2 lines to get an earlier DNS lookup for the ca. 40% FF/Chrome users and the ca. 15% IE9 users of our website … and maybe other browsers as well, as I just read that Safari (15% of our users) has had this feature since 2010 (version 5.0.1).

Looking at the test result of our homepage ( http://www.webpagetest.org/result/130118_D_R1R/1/details/ ) I can identify more than 15 domains that could/should be dns-prefetched.

The big question is now:

Would it make sense to use dns-prefetch? Why is nearly nobody using it? According to the article, only ca. 0.5% of the pages checked on Alexa do.

If so, and I put 15 domains in it … this would mean ca. 2,000 characters if I put them in …

… and …

… which are ca. 50 characters each.

Would it have any negative effect with Google if I make the URLs on my website more similar to each other by having the same lines in the head of all pages?
Google says no, but I remember that a while ago this did have an influence.

Or maybe only a mini-version: on our (only) subdomain http://aktienkurs-orderbuch.finanznachrichten.de we put a dns-prefetch for the domain http://www.finanznachrichten.de but nothing else?
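Assuming the two tags discussed above, that mini-version would be a couple of lines in the head of the subdomain’s pages. One way to keep Firefox/Chrome from treating the IE9 line as a full resource prefetch would be an IE conditional comment (a sketch, not tested):

```html
<!-- Firefox 3.5+, Chrome 5+, IE10+, Safari 5.0.1+ -->
<link rel="dns-prefetch" href="//www.finanznachrichten.de">
<!-- IE9 only: rel="prefetch" acts as a DNS hint there -->
<!--[if IE 9]><link rel="prefetch" href="//www.finanznachrichten.de"><![endif]-->
```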

I am keen on knowing your opinion!
Some additional info about the topic:

If you can, you need to cut down the number of domains. Even if the browser prefetches the DNS for them all, you’re still going to have the delay of making the TCP connection and waiting for the TCP window to open wide enough (though some sites will start with a window wide enough for 15KB).

Adding 15 prefetch directives will add 2KB (ish) to the page, but it’s got lots of repeated patterns so it should compress well with gzip.

Perhaps a way to start is to prefetch the DNS for those domains that are needed for page-render to happen and monitor the change in performance, then test whether it’s worth adding any more.

Most (modern) browsers will do a really good job finding host names to pre-resolve in their preload scanner. Where you will tend to see better benefits is if you have any domains that cannot be discovered by looking at the HTML (i.e. any content injected by javascript, or redirects).

Pat, so if I understand correctly, you think that most modern browsers already pre-resolve a domain xyz.com that appears in the source code?
But a dns-prefetch of a domain that is e.g. in a JS file and that will load later might make sense?

Do you use dns-prefetching on your website?

Yes, Firefox, Chrome and IE9+ are all quite good at parsing the HTML itself and finding any domains referenced on the page, so adding an explicit hint won’t really give them any more info. For things like ads that may go through 3-4 different domains, or content that is injected by javascript and that the preload scanner can’t see, you would basically be telling the browser about those domains.

I don’t provide any special hints on my site, though I’m also not doing things like url-encoding small images, etc. I try to go for the best ROI on my time, and some of those micro-optimizations aren’t worth the dev effort (maybe when nginx pagespeed is in beta I might try having some of those automated).

I did some testing today (coincidentally) and found that prefetching did have a measurable effect on each of the browsers that supported it.

I can’t remember which browser I performed this specific test in, but the results for each configuration were the same - the DNS was resolved right after the HTML document completed.

[waterfall screenshot: without prefetch]
[waterfall screenshot: with prefetch]

I am prefetching for ghost.redstorm.com, rainbow.redstorm.com, ajax.googleapis.com, and www.google-analytics.com.
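For those four hostnames, the prefetch block would presumably look something like this (rel value assumed from the earlier discussion):

```html
<!-- resolve all third-party hosts while the HTML is still streaming in -->
<link rel="dns-prefetch" href="//ghost.redstorm.com">
<link rel="dns-prefetch" href="//rainbow.redstorm.com">
<link rel="dns-prefetch" href="//ajax.googleapis.com">
<link rel="dns-prefetch" href="//www.google-analytics.com">
```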

It’s not much of a performance gain, but it is one. And it was easy. :)

Interesting. Those are some pretty painful lookup times when the browser didn’t prefetch (contending with the large downloads going on in parallel). Also looks like the browser didn’t do a good job with the prescanner - I stand corrected, nice work.

Which browser were you testing? Just wondering if I need to go ping the Chrome team to take a look :)

Pat, with which browser would you recommend such a test?
And with which connection setting? Native?

I’d try Chrome, Firefox and IE since they all support it and have different preload scanners. If you get a reasonable gain for any browser that you have significant users on it’s worth looking at.

Any speed should be fine - something like DSL or Cable with some useful latency is better than Native.

pyronite - Can you please specify with which browser these tests were done?

Just a quick note:

The real use case for the dns-prefetch hint isn’t desktop browsers running on good networks. It is smartphone browsers running on poor 3G network connections. 3G can often have atrocious performance characteristics.

Putting a list of hostnames to prefetch at the top of the HTML head allows real-life gains in this case. Take for example the classic placement of script tags, just above the closing /html tag. This last chunk of HTML might arrive up to 200-300 ms later than the chunk at the top of the head.

Thus placing the dns-prefetch hints in the head would, in this specific case, let the browser’s DNS prefetcher start working sooner.
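A sketch of that placement, assuming the link-tag syntax from earlier in the thread and hypothetical hostnames: the hints sit at the very top of the head, while the scripts that actually need those hosts load at the bottom of the body.

```html
<!DOCTYPE html>
<html>
<head>
  <!-- hints first: the browser can start resolving these hosts
       while the rest of the page is still arriving over 3G -->
  <link rel="dns-prefetch" href="//cdn.example.com">
  <link rel="dns-prefetch" href="//stats.example.com">
  <title>Example page</title>
</head>
<body>
  <!-- classic placement: scripts at the bottom, in a chunk of HTML
       that may arrive 200-300 ms after the top of the head -->
  <script src="//cdn.example.com/app.js"></script>
</body>
</html>
```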

You can see it clearly with the Chrome browser.