Is it possible to run an ENTIRE site (not just images, etc.) via a CDN?

Typical CDNs actually keep the TTL at 60 seconds, so you’re actually in better shape than most :slight_smile:

I may be remembering it incorrectly. The TTL may be much shorter. It’s a bit disappointing, but I don’t think there is anything I can do about it at all :frowning:

By the way, Patrick, any idea why some test locations give me an F and others an A for “Cache Static Content”? The headers are identical. :huh:

Looks like it is considering your base HTML as static since you now allow that to be cached by the CDN and the CDN is sending down a “time left” from when they cached it. In the F case the HTML only has a few seconds left but in the A case it has expired and the remaining time is zero (so I consider it dynamic).

It’s an edge case that you really shouldn’t worry about :slight_smile:
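The described grading behaviour can be sketched in a few lines. This is just my reading of the heuristic explained above, not the actual checker's source: the CDN's `Age` header eats into `max-age`, and the grade depends on whether any freshness is left.

```python
# Sketch of the grading heuristic described above (an interpretation,
# not the real implementation): subtract the time the CDN has already
# held the response (Age) from its max-age to get the freshness left.
import re

def remaining_lifetime(cache_control: str, age: int) -> int:
    """Seconds of freshness left: max-age minus time already cached."""
    m = re.search(r"max-age=(\d+)", cache_control)
    max_age = int(m.group(1)) if m else 0
    return max_age - age

def treated_as_static(cache_control: str, age: int) -> bool:
    """If any freshness remains, the response is graded as cacheable
    static content; at zero it is considered dynamic and skipped."""
    return remaining_lifetime(cache_control, age) > 0

# The "F" case: 60s TTL, cached 58s ago -> 2 seconds left, graded static.
print(treated_as_static("public, max-age=60", 58))   # True
# The other case: TTL fully used up -> treated as dynamic, not graded.
print(treated_as_static("public, max-age=60", 60))   # False
```

This would explain why identical origin headers grade differently per location: each edge node has cached the HTML at a different moment, so the remaining lifetime differs.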

That would make sense; however, for some reason, when I re-run the “F” test I again receive an “F”, and it keeps showing “2 seconds” no matter when I run the test.

It’s not an issue for me, I only wanted to let you know so you’re aware of it. It’s likely something no one else is going to experience, so probably not worth investigating. :slight_smile:

Marvin, that seems to work really well!

I’ve been wanting to do this too, eventually, in the same way. You even selected the same anycast DNS provider and CDN (EdgeCast?) that I wanted to use.

And I was wondering: how do you tell your CDN not to cache the HTML for logged-in users?

It looks to me like you are telling the browser to always revalidate pages, and you have the CDN set up so that when the bbloggedin cookie is set to 1, the browser will always get the file from the origin server.

Is that correct?

I think many other sites would use at least another domain for static files, because the cookie would prevent the static files from being cached at the CDN.

But the number of static files on your site is really low, and they should be cached before login, so it should be fine :slight_smile:
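The cookie-based bypass being described could be sketched like this. To be clear, this is not Edgecast's actual rules-engine syntax, just the decision logic, and "bbloggedin" is the cookie name mentioned above:

```python
# Hedged sketch of the cookie rule discussed above: if the bbloggedin
# cookie is set to 1 (a logged-in user), the request should go to the
# origin server instead of being served from the CDN cache.
from http.cookies import SimpleCookie

def serve_from_origin(cookie_header: str) -> bool:
    """Return True when the request must bypass the cache."""
    cookies = SimpleCookie()
    cookies.load(cookie_header)
    morsel = cookies.get("bbloggedin")
    return morsel is not None and morsel.value == "1"

print(serve_from_origin("bbloggedin=1; other=x"))  # True  -> origin
print(serve_from_origin("bbloggedin=0"))           # False -> cached copy
print(serve_from_origin(""))                       # False -> cached copy
```

In a real CDN rules engine the same check would typically be expressed as a match condition on the `Cookie` request header that disables caching for matching requests.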

The above screenshot is of a 1st and 2nd test run using the Sydney, AU test location. It illustrates the issue with the short DNS TTL of my CDN (or all CDNs?). In this case, the long DNS lookup time almost doubles the load time of my home page. I don’t think there is anything I can do, and I don’t think CDN providers can change the TTL. :frowning:

I know the DNS looks bad, but I think you have to remember that the test location in AU is inside a hosting provider’s network. The information about the CDN nameservers isn’t generally cached there; that would be different on an access provider’s network.

The Edgecast nameservers have a TTL of 2 days, the CNAME has a TTL of 1 hour.

If you do 2 tests, with a little over an hour between them, you should see a more realistic view of what a user on an access provider network would see.

The CNAME is probably your pain point. Who are you using for your DNS and do you know if they have a wide server footprint with Anycast for the DNS?

I assume once you get comfortable with running the base page through the CDN you can bump up the CNAME TTL, which will help, but unless your DNS provider has servers in AU it’s probably still going to take a hit every now and then.

Pat, TTL for my www CNAME is currently 86400. I’m using DNS Made Easy (Anycast). But the test results I’m referring to are those where the base HTML file has been cached on the CDN, so, if my understanding is correct, the CDN sends the file without even needing to check with my server (but I may be incorrect here).

In any case, would the blue line in the waterfalls represent the combined DNS lookup time for both (1) the CDN and (2) the origin server? Or would the blue line represent just the DNS lookup time for the CDN?

Lennie, thanks for the explanation. It makes me feel a bit better :-), and also I will have to accept that there are certain things beyond my control.

Yes, it is the combined lookup. The browser requests the www name, and the recursive DNS server will usually return both results in one response (including both the CNAME and the IP address).

A recursive nameserver would need to resolve these things if the cache is completely empty:

At any provider, the entries marked [>] would already be cached. At an access provider, the ones marked [*] would probably also already be cached.

I noticed you only specified 4 out of 6 nameservers at your domain registrar, I don’t think it matters much but it could have some performance impact some of the time.


Maybe you could try the tests I suggested above?

I was thinking maybe you should test 3 times:

  • first time: maybe nothing is cached, and it may take a long time to resolve the DNS name
  • second time, shortly after (hopefully the request is sent to the same nameserver): it should be fast, because everything is already cached
  • third time, a little over an hour later: only the CDN CNAME TTL should have expired, and this is probably the resolve time most users would see
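The logic behind the three tests can be modelled with the TTLs mentioned earlier (nameserver records: 2 days, CDN CNAME: 1 hour). A toy sketch, times in seconds:

```python
# Toy model of the resolver-cache behaviour behind the three-test plan,
# using the TTLs quoted above for the Edgecast setup.
NS_TTL = 2 * 24 * 3600   # nameserver records: 2 days
CNAME_TTL = 3600         # www CNAME: 1 hour

def lookups_needed(seconds_since_first_query: int) -> list:
    """Which records must be re-fetched at a given time after test 1."""
    needed = []
    if seconds_since_first_query >= NS_TTL:
        needed.append("nameservers")
    if seconds_since_first_query >= CNAME_TTL:
        needed.append("cname")
    return needed

print(lookups_needed(60))    # test 2: []         -> everything cached, fast
print(lookups_needed(3700))  # test 3: ['cname']  -> only the CNAME expired
```

So the third test should only pay for re-resolving the CNAME, which is the steady-state cost a typical user would see.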

Lennie, thanks for the explanation. That makes sense that it is a “combined” DNS lookup. Why did I think otherwise? LOL!!!

I’m going to stop agonizing over things I cannot improve. :slight_smile:

Here’s a good place to test DNS lookup times from various cities simultaneously:

http://just-dnslookup.com/index.php?vh=www.laptopgpsworld.com&c=&s=dns+lookup!

The first run shows very good times in about 50% of the locations, and perhaps 30% are very, very slow. On the second run, most of the “slow” ones are fast. I’ll try to test again after about 60 minutes.

Lennie, I forgot to comment on the following:

My registrar allows for only 4 entries :frowning:

Edit: Hmm, perhaps I can email them about it.

Marvin,

I expected that to be the problem. If your registrar doesn’t allow more than 4, you should ask DNS Made Easy which 4 servers you should specify, because I think some of them are in the same network anyway.

BTW, I’m playing around with the Edgecast CDN, and what I was asking before: are you using their HTTP rules engine to force page requests to the origin server based on cookies?

Lennie,

  1. Thanks, I’ll talk to my registrar (and DNS Made Easy, too, if needed)
  2. Yes, I did it based on a cookie

Just to see how they all fare, I tested a number of anycast DNS providers. Basically, they all do better than your hosting provider (which is what you would expect).

But they all suck somewhere, in some country, on some networks. :frowning:

http://www.dnspark.com/
http://www.dnsmadeeasy.com/
http://dyn.com/
http://www.ultradns.com/
http://www.dns.com/
http://www.akamai.com/

None of them got it right everywhere (not even counting deep down in Africa or South America).

Basically, you can pay 30, 40, or 50 dollars a year at dnsmadeeasy, dnspark, or whatever, and you won’t be much worse off than paying that same amount per month. UltraDNS was the winner for today; they did ever so slightly better. The results will probably be slightly different tomorrow.

Lennie, I went based on price. I can afford 30 or 40 or 50 per year, but I cannot justify the same per month, as I only have a single site, and it is just a hobby site.

By the way, my registrar added the two remaining (missing) name servers to the list. I cannot tell the difference, but as you said, it may help occasionally.

Would you care to look at the following DNS “health check”: http://www.intodns.com/laptopgpsworld.com? There’s one item highlighted in red. It is about “Recursive Queries”, and I have no clue whether to worry about it or not.

It really sucks when my entire home page loads in 692 milliseconds, and out of that, 371 ms is just for DNS lookup! More than 50% of page load time is just for DNS lookup!

http://www.webpagetest.org/result/120126_WS_d42cac40d6ebbf967ce0a86a8bf70d68/

Isn’t there really a better way?

Unfortunately you’re starting to get into the guts of the Internet plumbing and there is only so much that you can control. If your CDN uses anycast and is willing to let you use an IP address instead of a CNAME then you can configure your domain as an A record that points to their anycast address directly and give it as long of a TTL as you feel comfortable with. Combine that with a good DNS provider and that is the absolute best you are going to be able to do.
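In zone-file terms, the difference would look roughly like this (a hypothetical sketch: 203.0.113.10 is a documentation-reserved placeholder, not a real Edgecast anycast address, and the CNAME target is the masked one from this thread):

```
; Today: the www lookup returns a CNAME, which costs a second lookup
; against the CDN's nameservers before the browser gets an IP.
www              IN  CNAME  wac.xxxx.edgecastcdn.net.

; With a stable anycast IP from the CDN: one record, a long TTL you
; control, and no second lookup at all.
www    86400     IN  A      203.0.113.10
```

The trade-off is that if the CDN ever changes that IP, you have to update the record yourself, which is why CDNs normally insist on the CNAME.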

There is a lot of work going on elsewhere to try to help though:

  • Google’s DNS resolvers will automatically re-look-up names that are in its cache and about to expire, so it will always have a response available for users. Hopefully other resolvers will do the same, but until then this requires that end-users configure their DNS to use Google’s resolvers (8.8.8.8 and 8.8.4.4, I believe)

  • Browsers are getting really aggressive about pre-fetching DNS and even pre-connecting. This is going to depend on how users get to your site though. In Chrome, users that come back frequently will have your host name resolved soon after starting the browser and may even establish connections while they are typing in the address (if it isn’t a bookmark).

  • If users come in through a search engine, things are improving there as well. Most search results in Google will have rel=prerender set for the first result if it’s likely that the user will click. In Chrome it will actually fully load the page in a hidden tab and swap it in as soon as they click (in your case the page would be fully rendered).

Be the next Google or something and build your own network :wink:

Seriously, building out a network and having servers in all those locations takes time, expertise, and money.

But I’m really surprised by some of the results; I would have expected them to be better. 300ms+ is not great, though it’s not terrible either.

That is the time it takes to send a query from Australia to Europe and get a reply; obviously multiple queries are involved, but still, that is a lot more than I expected.

I don’t remember which one, but one of the DNS providers really sucked on the network in Vancouver, Canada.

Australia I can kind of understand, but even if the DNS provider doesn’t have any nodes in the west of Canada, you would expect it to be better than that, being that close to the US, for example.

Anyway.

There is only one way to improve it:

  • create a beacon which reads navigationTiming (in IE9, Chrome and Firefox)
  • put it in your site
  • find large access providers networks from where DNS-resolution is always slow
  • point it out to your DNS provider, ask them to find a way to do BGP-peering with that network
  • most of the time, wait months for them to build out their network (unless they are already on the right Internet Exchange, as happens a lot in Europe)
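The beacon measurement itself is simple: the browser-side Navigation Timing API exposes `domainLookupStart` and `domainLookupEnd`, and the beacon would report their difference. A minimal sketch of the server-side arithmetic, with made-up sample values:

```python
# Minimal sketch of the beacon idea above: the page POSTs the browser's
# Navigation Timing values, and the server computes DNS time per visit.
# The sample timestamps below are invented for illustration.
def dns_time_ms(timing: dict) -> int:
    """DNS resolution time in milliseconds for one page load."""
    return timing["domainLookupEnd"] - timing["domainLookupStart"]

sample = {"domainLookupStart": 1327572000100,
          "domainLookupEnd": 1327572000471}
print(dns_time_ms(sample))  # 371
```

Bucketing these numbers by the visitor's network (e.g. by resolving the client IP to an AS) is what lets you point your DNS provider at the specific access networks where resolution is consistently slow.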

Pat & Lennie,

Thanks! It’s great to know there are still options to be explored. :wink:

Let’s assume they would let me use an IP instead of a CNAME. I’m not sure how, if at all, this would work in my configuration.

  • Currently my A record at DNS Made Easy points to my dedicated IP at my hosting company
  • www CNAME is pointed to wac.xxxx.edgecastcdn.net.
  • Edgecast pulls from laptopgpsworld.com
  • People visit the URL www.laptopgpsworld.com

If I would use Edgecast’s IP as my A record, what would I point the CDN to pull content from?