One thing I want to do in the long run is move all my static assets to a CDN. There are a lot of different providers and plans out there, so I would like to open this thread up for discussion.
What is your favorite CDN?
How much does your CDN cost?
How many locations worldwide does your CDN company have?
What other information would you like to share about your CDN?
However, I have no personal experience with this company. What do you guys think about this service? Do you know of a more affordable yet effective CDN?
We use Internap (resold via SoftLayer). They have POPs in Atlanta; Boston; Chicago; Dallas; Denver; El Segundo; Houston; Miami; New York; Philadelphia; Phoenix; San Jose; Seattle; Washington, DC; Sydney; Tokyo; Singapore; Hong Kong; Amsterdam; and London, and it costs me $0.20/GB.
Using their “Origin pull” service makes it very convenient for me by removing the hassle of syncing CDN folders…
While Internap suits me better than AWS because they cover more continents, I'm looking for a provider that also has a POP in India. Akamai is a candidate, but I don't like them much because they made me fill out a very detailed sales request form only to send back a one-line message that they don't entertain sales queries from my country (Thailand)…
The big boys are Akamai and Limelight (most of the really large sites will use one or both).
I’m a pretty big fan of MaxCDN (http://www.maxcdn.com/) and use it for WebPagetest. They also do an “origin pull” which makes it really easy to integrate. Intro pricing is $40 for 1 TB for the first year and then $90 for 1 TB/year after that.
OK Pat, how much work is really involved in setting up a CDN?
It looks like I have to go in and remap all of the image URLs and any other content I put on the CDN, is that correct?
If you already serve static content via a separate subdomain/domain then it's very simple to set up “origin pull”; it would take like 5 minutes to set one up… otherwise it will take you some time to remap the static content to a subdomain…
That's it… now if you are not happy with the CDN, changing to another CDN or removing it completely is just a few clicks (adjusting CNAME records)… no need to mess with your HTML later…
Yep, that’s all there is to it. If you’re running custom code then it’s in your hands to figure out (I have a global variable that can be used to specify the CDN path, for example, that is used as part of the paths for all static assets). If you’re on something like WordPress there are plugins that will do it for you.
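To illustrate the global-variable approach, here is a minimal Python sketch; the variable and helper names are hypothetical, not WebPagetest's actual code:

    # Minimal sketch of the "global CDN prefix" idea (names are hypothetical).
    # With CDN_BASE set to "", assets are served from the local site as before.
    CDN_BASE = "http://cdn.example.com"

    def asset_url(path):
        """Build the URL for a static asset, prefixed with the CDN if one is set."""
        path = "/" + path.lstrip("/")
        return CDN_BASE + path if CDN_BASE else path

    # asset_url("images/logo.png") -> "http://cdn.example.com/images/logo.png"

Emptying CDN_BASE (or pointing it at a different provider) then takes the CDN in or out of play without touching any templates.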
If static.example.com happens to ALSO really be served from www.example.com, you could always just set that up as a pull-zone CDN that pulls from www.example.com. All you’ll need to do is change it to a CNAME pointing at the real CDN hostname they give you (and configure it as a known domain on their side). That way you don’t have to change any of the site code at all to try it out, just wait for the DNS changes to propagate.
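If you want to confirm that the DNS change has propagated before declaring the test live, a quick check with the third-party dnspython library might look like this (hostnames are hypothetical):

    # Check that static.example.com is now a CNAME for the CDN hostname.
    # Requires the dnspython package (pip install dnspython).
    import dns.resolver

    for rdata in dns.resolver.resolve("static.example.com", "CNAME"):
        print("static.example.com is a CNAME for", rdata.target)

Once that prints the CDN's hostname, resolvers that have picked up the change will start sending traffic through the CDN.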
I found somebody who actually compared about 24 different CDN services. The results of this test can be found here:
To summarize, the best performing CDN in these tests was CacheFly, with SoftLayer/Internap CDN, GoGrid/Edgecast CDN, and Google Appspot not far behind.
The most cost-effective CDN according to these tests is Google Appspot. They claim it is actually free for up to 1.3 million requests and 1 gigabyte per day.
Google Appspot (App Engine, actually) is not a CDN… it's a distributed web-application-hosting-in-the-cloud thingy… AFAIK they run from 2 datacenters and don't operate caches at edge locations. The only advantage is that for most of the distance… the packets will flow through Google's backbone…
However, even if Google Appspot is not considered a true CDN, their backbone seems to be able to keep up with many true CDN providers. I discussed Google App Engine here:
Nice find, thanks. Looks like I have a few more CDNs to add to the detection logic.
I wonder how much their cache-busting problem skewed the results, but the data is really interesting regardless. I’m trying not to sound like too much of a MaxCDN fanboy, but given they don’t have Asia POPs yet it looks like they did really well (#4 for global performance) and the price can’t be beat (at least for small/average sites like this one). I went to look at the CacheFly pricing and it’s obviously not targeted at me ($99/month for 256 GB as the cheapest plan).
It’s also interesting that Google performed that well even though the data isn’t pushed to the edge. It probably helps a fair bit that the connections are terminated at the edge, which shaves some of the time from the requests.
I’m a little surprised Limelight wasn’t part of the test but beggars can’t be choosers - great data.
I see the price for 1,000 GB of bandwidth is on sale for $39.95. What happens if I do not use the 1,000 GB in a single month? Does the remainder carry over to the next month, and so on?
Once the 1,000 GB of bandwidth is used up, can I purchase additional GBs for 3.9 cents/GB, or does it rise to 10 cents/GB? The $99 that is crossed out has me a little confused.
I have my own server with over a TB of storage. Can I use my own server for storage, or am I required to use their servers?
I was looking over the terms of service. I saw a part that says my overall bandwidth usage cannot be less than the equivalent of 8 Mb/s. Does this mean my bandwidth has to be at least 2,494 GB per month in order to use your services? What happens if my bandwidth does not reach this amount? If my math on the minimum bandwidth usage is wrong, please correct me.
“(d) Customer’s overall monthly bandwidth usage is not less than or the equivalent of 8 Mb/s.”
I found on Wikipedia that 1 Mb/s equals 125,000 bytes per second, so 8 of those would be 1,000,000 bytes per second.
1,000,000 bytes/s × 60 s/min × 60 min/hr × 24 hr/day × 31 days/month ÷ 1,073,741,824 bytes/GB ≈ 2,494 GB/month
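If anyone wants to double-check that arithmetic, a few lines of Python give the same figure:

    # Sanity check: 8 Mb/s sustained over a 31-day month, converted to GB (GiB).
    bytes_per_second = 8 * 125_000          # 8 Mb/s = 1,000,000 bytes/s
    seconds_per_month = 60 * 60 * 24 * 31   # 31-day month
    print(round(bytes_per_second * seconds_per_month / 2**30))  # -> 2494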
Is there a way for customers to monitor their bandwidth usage through your service?
Would I need to make any changes code-wise in order to get your services working with my website? Is there a guide for installation so I can see how difficult it would be to get this working?
On another note, I found answers to some questions of mine that other people might want to know about.
I’ll answer what I know as a customer but probably best to contact them for authoritative answers. I’ll also see if I can get someone over here to answer on the forums.
The 1TB is good for a year so it carries from month to month until the year is up, then you need to re-up (wasn’t clear to me initially either).
It looks like it’s 10c/GB after you use up the initial bulk purchase. I don’t think you have an option to re-up for another TB at the reduced price.
You can use your own (which is how I recommend doing it, because it is easier). Use them in an origin-pull configuration (set up pull zones), which basically runs them as a reverse caching proxy for your web site. When an asset is requested from an edge server, if it doesn't have it in its cache it goes back to your server to fetch it (pretty much like the services layered on top of Google's App Engine that you have been playing with).
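To make that cache-miss path concrete, here is a toy Python sketch of what an origin-pull edge server does; this is just the general idea, not any vendor's actual implementation:

    # Toy sketch of origin-pull ("pull zone") behaviour at one edge server.
    import urllib.request

    ORIGIN = "http://www.example.com"  # hypothetical origin server
    cache = {}                         # in-memory stand-in for the edge cache

    def serve(path):
        if path not in cache:
            # Cache miss: fetch the asset from the origin and keep a copy.
            with urllib.request.urlopen(ORIGIN + path) as response:
                cache[path] = response.read()
        return cache[path]

After the first request for a given path, every later request hitting that edge is served from its local copy.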
This one I can’t answer but my usage is WAY under that so I’d be surprised if that was the case.
They have a control panel where you can see bandwidth (aggregate and by POP), balance remaining, the cache hit ratio and a breakdown of the assets that are being served the most. I’m sure there’s more there as well but it’s more than enough to keep an eye on your utilization.
The only code change is to reference the static assets using a fully qualified path pointing to the CDN, but since you already have images* set up it could be a pretty trivial change. The steps are basically:
1 - Set up a pull zone to mirror your origin server (I set up cdn.webpagetest.org to completely mirror www.webpagetest.org but you can point it to a specific path as well).
2 - Set up a CNAME that points to the host name they give you (optional, but it looks cleaner this way)
3 - Reference your static objects using the CDN path
That’s it. You could even just change your images* CNAMEs to point to their pull zone and do it without any code changes. As soon as the DNS change propagates it would be live (it might be a good idea to use different domains, though, so you can make the code change on the back end quickly and not have to wait for DNS).
I guess that is a downside if you do not use it all in the first year. I am assuming that if you made the purchase on July 1st you could go until July 1st of the next year and it won't cut you off on January 1st. I know Google is crawling the site, downloading about 1.5 GB per day, so that is 547.5 GB a year just for that. I love their webmaster tools.
I would also be surprised if this is the case. Hopefully they will clarify this in their response. I sent them an email Sunday morning.
It is amazing how much cheaper MaxCDN is compared to other services. I just got this quote from EdgeCast:
200GB - $350 per month
Advanced Reporting - Included
2GB of Storage – Free
Setup - $250
*overages are 50 cents per GB
I just got off the phone with MaxCDN support and here is what I learned.
The 8 Mb/s minimum bandwidth rate applies only to larger clients who enter special contracts with them; it does not apply to the pay-as-you-go option.
You have a maximum of 1 year to use the 1,000 GB of bandwidth, and then you have to refill it. I double-checked this, Patrick, to be sure.
After you make the initial purchase, you have to refill at 10 cents/GB if you are on the pay-as-you-go plan. They said that if you have something like 5 TB/month of bandwidth it is possible to enter into a contract with them at the reduced 3.9 cents/GB rate. That is for much larger clients, obviously.
Also, unless the Googlebot is downloading your static assets, it is unlikely it will be using your CDN bandwidth.
Yeah, the pricing is an order of magnitude cheaper than the competition, which is the main reason I'm such a fan (well, and the fact that it works, because cheap-but-broken doesn't count). I'm amazed at what some of the other vendors charge; they're the only ones I have found with an offering for smaller sites.
I have a few more questions about CDNs, especially MaxCDN:
For these questions, let’s say you have the user’s computer ( A ), the origin server ( B ), and the closest edge server ( C ).
If A is closer to B than to C, does C let A know to download content from B instead of C even though it might be cached at C?
If the answer to the previous question is no, then I suppose performance gains could be achieved if B takes this into consideration code-wise. B would have to consider A's location, the distance between A and B, and the distance between A and C.
Once content is cached at C, does it get distributed to the other edge servers so it is cached everywhere?
A performance suggestion I found on the internet was to have images1.mydomain.com, images2.mydomain.com, etc. to allow parallel downloads. With CDNs, would it be better to have all content going through one subdomain even though that may limit parallel downloads? I can see caching being better with just one subdomain. Any thoughts about this?
Let’s say B has a huge resource on C. What stops a user from creating a script to keep redownloading the content to use up bandwidth? I could see this costing the company a lot of money. Is there a way to prevent this from happening?
No, if you are referencing the object from the CDN it can ONLY be served by their edge servers.
You probably don’t want to go there. Figuring out where “A” is is a VERY difficult problem. You WILL get it wrong more often than right.
Depends on the CDN, but AFAIK with MaxCDN each edge node is independent; content is cached on first access at each node separately.
With MaxCDN, a pull zone supports multiple CNAMEs and the cache will be shared across them; they explicitly support sharding for that reason. Even without that support, as long as you are consistent about which files are served from which subdomain, everything will be cached regardless.
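The "be consistent" part is easy to get right with a deterministic mapping from file name to shard, e.g. this Python sketch (hostnames hypothetical):

    # Deterministically map each asset to one of N shard subdomains so the
    # same file is always requested from the same host.
    import zlib

    SHARDS = ["http://images1.mydomain.com", "http://images2.mydomain.com"]

    def sharded_url(path):
        index = zlib.crc32(path.encode("utf-8")) % len(SHARDS)
        return SHARDS[index] + "/" + path.lstrip("/")

    # sharded_url("logo.png") returns the same host every time for "logo.png".

Because the hash of the path never changes, a given file always lands on the same subdomain, so each edge caches it under one URL only.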
Nope - you have this same problem with direct hosting if you pay for bandwidth also.