Is it possible to run an ENTIRE site (not just images, etc.) via a CDN?

I know some CDN providers offer expensive Dynamic Site Acceleration products that serve entire sites, not just static files. But I wonder if a CDN could be configured so that even the base HTML file is served via the CDN rather than from the origin server. A DNS lookup would be saved, and the connect and download times could potentially also be faster, especially in geographical locations far from the origin server.

Would it not be as simple as specifying my dedicated IP as my origin server, and then setting up a www CNAME (instead of the typical cdn CNAME) as an alias to my CDN account?

I don’t know much about these things, and that’s why I’m most likely overlooking some obvious reason why this wouldn’t work. But I’ve always meant to ask…

It depends on the cacheability of your HTML. If you generate unique content for every user (or even detect logged-in state by cookie) then you can’t, but if the content is the same for all users, even for short periods, then absolutely (and just like you planned).

You would need to add caching headers to your HTML responses for it to be useful, but even short lifetimes (1-5 minutes) can have a big impact for a busy site.
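
Just as a sketch of what I mean (Express and the specific lifetimes here are my own assumptions, not anything from your setup), the origin could send something like:

```typescript
// Sketch only: an Express-style origin sending cacheable HTML (the route,
// lifetimes and render helper are made up for illustration).
import express from "express";

const app = express();

app.get("/", (_req, res) => {
  // Let the CDN edge hold the page for 5 minutes; browsers revalidate after 1.
  res.set("Cache-Control", "public, max-age=60, s-maxage=300");
  res.send(renderHomePage()); // hypothetical render function
});

function renderHomePage(): string {
  return "<html><body>Hello</body></html>";
}

app.listen(3000);
```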

I’ve been talking with Akamai recently and they have a service that is different from whole-site delivery. From my understanding, you do some DNS manipulation that causes a user’s request for a page on your site to be directed to Akamai; they then grab the data from your site (the initial page code, for example) and serve it through an optimized channel on the Akamai network. That may be something worth checking into. Though it does not do caching of the page code, it just optimizes the delivery path.

p.s. On a related note, you can expect a comparison between MaxCDN and Akamai very soon. We’re working on integrating Akamai with our site, and when we do it should be pretty easy to switch back and forth between the two for testing. Should be interesting to see how they compare.

Sounds like DSA. Joshua Bixby (from Strangeloop) had a pretty good writeup on it here: http://www.webperformancetoday.com/2010/09/02/dynamic-and-whole-site-acceleration/

The main “benefit” is that they run the traffic back over tweaked network connections (that have been warmed up) and the front-ends generally have large initial TCP congestion windows (much like Linux 2.6.39). It’s usually pretty freakishly expensive for what it is actually doing but for situations where the content is dynamic there really isn’t much that you can do.

If you don’t mind building vendor-specific functionality, they also have ESI support (Edge Side Includes), where you can basically cache a “template” at the edge and only the dynamic pieces are fetched at load time (Varnish also provides a similar capability).
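
Just to give a rough picture of the idea (the paths here are made up, and exact syntax and setup vary by vendor), an ESI template looks something like this:

```html
<!-- Sketch of an ESI "template" cached at the edge; the fragment path is made up. -->
<html>
  <body>
    <h1>Product page</h1>
    <!-- Only this piece is fetched from the origin (or a shorter-lived cache) per request -->
    <esi:include src="/fragments/cart-status" />
  </body>
</html>
```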

Jarrod, unless there is a very inexpensive reseller of the Akamai service, I can only envy… :slight_smile:

Pat, my site does serve different content to logged-in users, so caching the HTML would likely not be possible, but if everything else would work, I think (I hope) it could still improve the site’s performance. Perhaps this is something I should test one of these days.

Pat, don’t worry, I was only mentioning it to make others aware of its existence. Personally, it is not something I see as too useful for our business. Akamai initially tried to sell it to me for a “mere” $3,000 a month (aka $36,000 a year). We serve customers worldwide, but the benefit isn’t worth the price. For now we’re just sticking with their object delivery.
Marvin, don’t envy us yet! I had them put a clause in the contract so that we can cancel the service within 90 days without issue. I plan on using that time to do plenty of testing to see if it really is worth the higher price. So, the envy will have to wait until the testing :wink:

If the content can’t be cached then you might benefit a bit by avoiding the additional DNS lookups but there will also be a penalty for routing the base page through their edge servers (so the first byte times will be worse).

@jarrod, when you do the eval, make sure to use real user performance data (analytics) and not a backbone synthetic test (Keynote, Gomez, etc).

I’m sure you’re probably aware, but the CDN providers are known to co-locate their edge nodes in the same facilities as the test agents for the testing services so things will look artificially fast unless you are using a last-mile product to do the testing (or using real data).
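
If it helps, this is roughly what collecting the real-user numbers can look like with the Navigation Timing API (just a sketch; the /rum endpoint name is my own invention):

```typescript
// Sketch: collect real-user timings with the Navigation Timing API and beacon
// them back to a made-up /rum endpoint for comparison before/after a CDN change.
window.addEventListener("load", () => {
  const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
  if (!nav) return;

  const metrics = {
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    connect: nav.connectEnd - nav.connectStart,
    ttfb: nav.responseStart - nav.requestStart,
    load: nav.loadEventStart - nav.startTime,
  };

  // Real visitors, real networks - not a backbone agent sitting next to an edge node.
  navigator.sendBeacon("/rum", JSON.stringify(metrics));
});
```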

Yeah, I was going to use various webpagetest.org locations, as well as on-site analytics. The main figure I’ll watch is the time to first byte, as well as consistency across different locations. Our current CDN is actually a bit inconsistent, one of the things that prompted us to look elsewhere.

The change has finally been implemented. Now the entire www sub-domain, not just the static files, is on my CDN. I see a small but measurable improvement in both start render and load times, even in geographically close locations. In some far-away locations, where my DNS lookup times were pathetically slow, the improvement is nothing short of dramatic. For example, in Australia, the best load times for my home page used to be around 1.4 to 1.5 seconds. I’ve now run 10 tests, and they all came in at around 875 milliseconds.

Cool. I had forgotten about the impact of DNS, though your original DNS must have been pretty bad (with no caching). CDNs do usually have a pretty good DNS infrastructure, so it makes sense.

I’ve just completed more tests. In locations close to the origin server (East Coast), the improvement in load time seems to average 50-100 ms. In the rest of the world (West Coast included), it’s about 100-200 ms, more in some cases.

I’m quite happy! :slight_smile:

Pat, I’m actually using Dnsmadeeasy, and I’m not sure why the DNS lookup times have occasionally been so slow.

How did you do:
http://www.laptopgpsworld.com/images/uploads/sprite-combined-20100922.png

Was that something you did manually, or something automatic? Our site is an ecommerce site and every page has 50 product thumbnails by default (it can go up to 500 if the user chooses). I’d like to do spriting like you have there, except that we have a dedicated in-house photography studio that produces hundreds of new photos a day. So a solution would have to be automatic, not just because of the volume of new images but because of the scope of our catalog as well (over 20,000 SKUs). If your solution isn’t automatic, do you have any ideas?

p.s. Congratulations, I’m quite impressed with the speed of your site.

p.s.2. I’d like to clarify that we currently do use spriting, just not at the product-thumbnail level. Since the majority of our HTTP requests come from those, though, that is probably the next best place for me to focus my attention.

Thanks, Jarrod. Unfortunately it was done manually. It’s a good enough solution for my site, but obviously not a solution for your site. My site talks about a limited number of software products, so the pictures are in most cases good for next year, too. Just the version number in the description changes, plus the review is updated.

@jarrod, you might be a lot better off using data URIs for the browsers that support them (Chrome, Safari, IE8+, Firefox) instead of trying to sprite the images independently. You can inject them directly from JavaScript and have a service that returns a dynamic set of them as gzipped JSON, which makes it easy to keep the responses cacheable and to combine several images together.
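
Something along these lines on the server side, just as a sketch (Express, the /img-batch route, the thumbs folder and the 10-image cap are all made-up choices, and you’d want to validate the ids properly):

```typescript
// Sketch of a batch image -> JSON service. Skips ids it can't find.
import express from "express";
import compression from "compression";
import { promises as fs } from "fs";
import path from "path";

const app = express();
app.use(compression()); // gzip the JSON responses

// GET /img-batch?ids=sku123,sku456  ->  { "sku123": "data:image/png;base64,...", ... }
app.get("/img-batch", async (req, res) => {
  const ids = String(req.query.ids ?? "").split(",").filter(Boolean).slice(0, 10);
  const out: Record<string, string> = {};

  for (const id of ids) {
    try {
      const file = await fs.readFile(path.join("thumbs", `${path.basename(id)}.png`));
      out[id] = `data:image/png;base64,${file.toString("base64")}`;
    } catch {
      // skip ids we don't have a thumbnail for
    }
  }

  // The same combination of ids always produces the same body, so it can be cached.
  res.set("Cache-Control", "public, max-age=86400");
  res.json(out);
});

app.listen(3000);
```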

That sounds good, any tutorials or information on doing that? Thanks!

Not that I’m aware of. It looks like Google Image Search uses a similar technique. Regardless of how you go about it, there are going to have to be some trade-offs for cacheability (when you combine multiple files together, it’s that specific combination that you are caching). For category pages this shouldn’t be a problem, but there’s probably a long tail of images for search.

Hand-waving the caching issue for now, the flow would basically look like this:

  • detect that the browser supports data URIs (probably on the server)
  • instead of src attributes on the images, put an id on each img tag (or on a div that the image will be placed inside of)
  • below all of the images in the DOM, include JavaScript that walks the tags and requests the images from the back-end image/JSON service in groups (say 6-10 per request)
  • when each JSON response comes back, dynamically set the src on the image elements to the data URI for the image

For images that you don’t expect to be cached you can actually inject the data URI directly into the response (either inline in the img tag or in javascript at the end of the page).
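
Roughly what the client-side piece could look like (just a sketch of the flow above; the data-src-id attribute, the batch size and the /img-batch endpoint are my own assumptions, and I’m using fetch for brevity rather than anything old-IE friendly):

```typescript
// Client-side sketch: walk the un-sourced thumbnails, fetch their data URIs in
// small batches from a hypothetical /img-batch JSON service, then set src.
const SUPPORTS_DATA_URIS = true; // in practice decided server-side per browser

async function hydrateThumbnails(): Promise<void> {
  if (!SUPPORTS_DATA_URIS) return;

  // Thumbnails are emitted with no src, e.g. <img data-src-id="sku123" width="80" height="80">
  const imgs = Array.from(document.querySelectorAll<HTMLImageElement>("img[data-src-id]"));

  const batchSize = 8; // keep groups small so each JSON response stays cacheable
  for (let i = 0; i < imgs.length; i += batchSize) {
    const batch = imgs.slice(i, i + batchSize);
    const ids = batch.map((img) => img.dataset.srcId).join(",");

    const resp = await fetch(`/img-batch?ids=${encodeURIComponent(ids)}`);
    const uris: Record<string, string> = await resp.json();

    // Set the returned data URIs as the image sources.
    for (const img of batch) {
      const uri = uris[img.dataset.srcId ?? ""];
      if (uri) img.src = uri;
    }
  }
}

void hydrateThumbnails();
```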

Thanks for the help with this question. How can I thank you?

super

A few hours ago, I took this CDN thing one step further. I decided to let the CDN cache even the HTML for guests (my members are still able to see the un-cached version of the forum).
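
In case it’s useful to anyone trying the same thing, the origin-side logic is roughly this (a sketch only; Express and the cookie name are assumptions, and the CDN itself still has to be configured to bypass its cache when that cookie is present):

```typescript
// Sketch: cache HTML for guests, never for logged-in members.
import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

app.use((req, res, next) => {
  if (req.cookies["member_session"]) {
    // Logged-in members always get the live, un-cached page.
    res.set("Cache-Control", "private, no-cache");
  } else {
    // Guests can be served the edge-cached copy for a few minutes.
    res.set("Cache-Control", "public, s-maxage=300");
  }
  next();
});
```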

Now the load time seems to be more consistent across the globe.

Now the “worst” performing thing is the DNS. Sometimes the first test shows a DNS time several times longer than the subsequent tests, but there’s very little I can do about it short of looking for a different CDN provider. My CDN provider keeps the TTL very short; I don’t remember exactly what it is, but if I recall right, the last time I checked it was something like 20 minutes. I’m not sure if it’s typical for CDNs to keep it that short.