I use MaxMind IP location software on my server to pinpoint the user’s location.
Their free version is not as precise as the paid version, but it is still pretty good regardless.
It is pretty neat to deliver content to a user based on location.
My server is based in California, and I am sure they have an edge location somewhere around there, so I doubt I would personally see a great performance benefit.
However, I suppose that would be a way to save bandwidth. If the user is within a certain distance from the server, I could use the original server to serve the content instead of the CDN.
It would be neat to figure out where the distance between the user and the origin server equals the distance between the user and the closest edge server in terms of performance.
If I ever try to figure this out (which I might, because I am a major math geek), I might program a function in ColdFusion and make a tutorial people can use.
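If anyone wants a head start on the math, here is a minimal sketch (in Python rather than ColdFusion, just to show the idea; the California coordinates and function names are mine, not from any library): compute the great-circle distance with the haversine formula, then serve from the origin whenever the user is closer to it than to the nearest edge.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

ORIGIN = (37.77, -122.42)  # hypothetical origin server in California

def serve_from_origin(user_lat, user_lon, edge_lat, edge_lon):
    """True when the user is closer to the origin than to the nearest
    CDN edge, i.e. the origin should serve the content directly."""
    to_origin = haversine_km(user_lat, user_lon, *ORIGIN)
    to_edge = haversine_km(user_lat, user_lon, edge_lat, edge_lon)
    return to_origin <= to_edge
```

Distance is only a proxy for performance, of course; RTT would be the better tie-breaker if you can measure it.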
I use MaxMind for the data table under the waterfalls as well, and it's usually pretty good, but sometimes it is WAY off (all Google PoPs claim to be in California, for example, because that is where their IP space is assigned). It's maybe less of an issue for end-user connections, but it's probably not worth the lookup time.
One of the things I wanted to add to the Linux TCP layer was a way to get the current RTT estimate, which might give you a guess as to how close the user is to your server. That would require that the connections be terminated at the server itself and that the information be exposed up the stack.
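For what it's worth, Linux does expose the kernel's smoothed RTT for a connected socket through the TCP_INFO socket option. A rough, Linux-only sketch in Python (the byte offset assumes the long-standing struct tcp_info layout, so treat it as illustrative rather than portable):

```python
import socket
import struct

TCP_INFO = getattr(socket, "TCP_INFO", 11)  # option value 11 on Linux

def smoothed_rtt_us(sock: socket.socket) -> int:
    """Read the kernel's smoothed RTT estimate (microseconds) for a
    connected TCP socket via getsockopt(TCP_INFO)."""
    info = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 104)
    # tcpi_rtt sits at byte offset 68: seven u8 fields plus one pad
    # byte, then fifteen u32 fields precede it in struct tcp_info.
    (rtt,) = struct.unpack_from("I", info, 68)
    return rtt

# Usage sketch: open a connection ourselves and read its RTT.
s = socket.create_connection(("example.com", 80))
print(smoothed_rtt_us(s), "microseconds")
s.close()
```

That only works when the TCP connection terminates on your own box, which is exactly the constraint mentioned above.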
@ Patrick: Thank you for answering all the questions. I really appreciate it! I will check the forum more often from now on.
@ Travis: Our HTTP Referrer Protection allows you to prevent other websites from hotlinking your content. When you blacklist a domain or an IP, it will no longer be able to access your files.
If you have any further questions, please do not hesitate to call 877-MAXCDN1 or email our support at support@maxcdn.com
I exchanged some emails with them, and to refill you must buy 1 TB again for $99 … there is no option to pay for just what you consume at 9.9¢/GB.
That’s my understanding … even so, it’s a great price for a great service.
[quote=“green-watch.org, post:17, topic:197”]
You have a maximum of 1 year to use the 1,000 GB bandwidth and then you have to refill it. I double checked this Patrick to be sure.
After you make the initial purchase, you have to refill with 10 cents / GB if you are on the pay as you go plan.[/quote]
A: It depends on the CDN, but AFAIK with MaxCDN each edge node is independent, and content is cached on first access at each edge node separately.
This is a large drawback, depending on site configuration.
In my case, a large site: 4,000+ pages, 90,000 images, plus CSS and JS.
I found MaxCDN unworkable because I get DEEP visits (only 10% to the home page, 90% spread over 2,000 other pages daily).
The “first visit” hit was enormous.
It would be fantastic if we could get an Origin Pull → PoP Push type of service: automated PoP storage across edge servers.
To my knowledge, only Origin Pull or PoP Push is available today.
PoP Push, using rsync or FTP, is too cumbersome for many small customers.
First one to market with this should find willing customers.
(MaxCDN… you listening?)
Even with an Origin Pull → PoP Push you would still be hurting unless you got a lot of visits to the same 2,000 pages, right? The first person to hit a given page would still pay the CDN-population penalty, so it would only benefit someone in another region who needs the same static resources.
Figuring out how to get PoP Push working would make the first visit fast for everybody.
Right you are about the "first visit" hit, yes… but in my case I was seeing repeat requests for the same resource from New York, Chicago, Los Angeles, Melbourne, Germany, etc. for a LARGE number of files (each node has a separate cache).
(Yes, I had expires set properly based on file type, and DNS was working well.)
Origin Pull → PoP Push would have alleviated about 50% of the traffic being pulled, if not more.
Although you are correct that on SOME pages there would be little benefit.
In my case, and I'm sure others', it's a matter of handling resource speeds rather than visitor loads.
Using FTP or rsync, I think, is too cumbersome.
But with Origin Pull → PoP Push, I could easily devote a few hours to having "sitesucker"-like software traverse my site automatically and preload all images to the PoP Push storage (a rough sketch of such a preloader is below), and I'd be set for a month with all images preloaded and primed.
From there, the site would basically take care of itself. (Well, at least until all resources expire, eh? Then it's preload time again.)
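To illustrate the preloader idea, a minimal cache-warming sketch in Python (the CDN hostname and asset paths are made up; note it only warms whichever edge node is nearest to the machine running it):

```python
import urllib.request

CDN_HOST = "cdn.example.com"  # hypothetical CDN hostname for the site

def warm(paths):
    """Request each asset once through the CDN so the edge pulls it
    from the origin and caches it before real visitors arrive."""
    for path in paths:
        url = f"https://{CDN_HOST}{path}"
        try:
            with urllib.request.urlopen(url) as resp:
                resp.read()  # drain the body so the transfer completes
                print(resp.status, url)
        except OSError as err:
            print("FAILED", url, err)

# Usage: feed it the full asset list, e.g. scraped from a sitemap.
warm(["/images/logo.png", "/css/site.css", "/js/site.js"])
```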
To repeat, the problem I see is feeding multiple edge servers the same resources, yet I prefer the Origin Pull method.
Just call me an optimistic cake-eater who wants his cake too.
Is that true? I don't see it on the dashboard in the account I once created, but I do see it in other places in the account.
I did a quick test and it didn’t seem to download anything from Hong Kong. But I could get the cache inspector to download something from the origin server to the Hong Kong server.
When I first tested MaxCDN, there was no APAC coverage. Later they contacted me to let me know they were expanding in that area. It requires an additional subscription fee as far as I recall; that is likely the reason you don't see it in your control panel.
Signed up to MaxCDN; great company.
Ran tests overnight with Origin PULL. Yup, no go for my site: the first-time hit is too large. Too many files, too many pages, too much overhead.
Then I considered PUSH, until I read that there is a possible lag of 24 hours before files become available. I'm aware that is the worst-case scenario… but it still stopped me in my tracks.
Scenario: 20 posts are added to the site and go live immediately on my server, but the graphics may not be available for xx hours?
I go back to wanting a simple Origin PULL → PUSH system.
I guess I'll have to use a simple web hosting service for my graphics files. I think I can set up a sync system fairly easily, but I must know the files will be available immediately.
Perhaps I should clarify. By Origin Pull → Push I envision the following.
The CDN server is the same for both.
Origin Pull for any files NOT on the PUSH server (i.e., on a request for a file it doesn't have, get it from the origin; otherwise serve the stored file). This would mean the base CDN pull server acts as your storage AND PUSH server, and syncs to the edge servers (see the sketch below).
In that case, one could prime the PUSH servers with ALL current files, and pulls would ONLY be done for new pages/images, etc., as they are added.
Given that some systems allow LONG expires (up to a year), this would solve the problem of priming a large site and alleviate the long lag on first visits experienced using Origin Pull.
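In code terms, what I'm describing is basically a pull-through store. A minimal sketch in Python (the origin URL and storage directory are placeholders, and a real implementation would also honor expiry headers):

```python
import os
import urllib.request

ORIGIN = "https://origin.example.com"  # hypothetical origin host
STORE = "/var/cdn/store"               # local push/storage directory

def serve(path: str) -> bytes:
    """Serve from the push store if the file is there; otherwise
    fetch it once from the origin and keep a copy for next time."""
    local = os.path.join(STORE, path.lstrip("/"))
    if not os.path.exists(local):
        os.makedirs(os.path.dirname(local), exist_ok=True)
        with urllib.request.urlopen(ORIGIN + path) as resp:
            data = resp.read()
        with open(local, "wb") as f:
            f.write(data)  # subsequent requests hit the stored copy
    with open(local, "rb") as f:
        return f.read()
```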
I think if you set your max-age/expires long, let's say 1 year, for your images, CSS, and JS, over time you would reach better than a 90% cache-hit ratio.
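For reference, here's one way to send such headers from an origin; a sketch using Python's built-in http.server (the extension list and port are arbitrary):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

class LongCacheHandler(SimpleHTTPRequestHandler):
    """Serve static files with a one-year cache lifetime so CDN edges
    and browsers may keep them until they expire."""

    def end_headers(self):
        if self.path.endswith((".css", ".js", ".png", ".jpg", ".gif")):
            # 31536000 seconds = 365 days
            self.send_header("Cache-Control", "public, max-age=31536000")
        super().end_headers()

HTTPServer(("", 8080), LongCacheHandler).serve_forever()
```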
Basically, most CDNs have a very low receive window and make a new connection to your server for each request, which makes a cache MISS expensive.
If you anticipate a lot of cache MISSes, then I suggest using Fastly; AFAIK they do connection pooling and some protocol-level optimizations. A cache miss seemed not to have much overhead compared to going direct to the origin…
In the past, I had an account for about 4-5 months.
For this latest test, just 24 hours: 12 for a pull testing cycle and 4 for push. In my case, PUSH has a lag of a minimum of 2 or more hours to a LIVE file, so it will NOT work for my site. I require LIVE.
In both timelines I had long expires set to 1 month, both in their panel settings and via HTTP headers from my origin server. In both timelines, I saw repeated requests for the same files from the CDN pull bot hitting my server.
I asked specifically whether the CDN pull respects all cache headers and will CACHE and store files according to the cache settings.
The answer is YES, it is supposed to… but it was never mentioned how MANY files per customer would get stored.
My guess is that a cache purge is used in order to satisfy many customers; I just seem to get purged often, due to my style of site and visitor pattern.
I see a cache-hit ratio of about 70% on average.
Since I had run a test by PRELOADING ALL images at least ONCE to the CDN (by viewing them in Opera, for example), I would have expected 99% if the expires and storage settings were respected.
Instead I keep seeing requests for the same images every so often during the day.
The strange thing is, I can feed a direct visitor at least 4 images per second off my slow site… yet the CDN bot seems to take its time, about 2 seconds for each image it grabs for CDN storage. Perhaps thinking time between requests? The result, obviously, is a really laggy experience for the visitor.
My logs indicated repeated calls for image files… and huge lags for visitors overall, even when the images should have been in the CDN cache for hours.
IF the cache and storage really DID hold files for the specified expires period… I’d be very happy. I could easily set up a pre-loader to prime that cache baby… but it doesn’t stick.
I had been through all these tests months ago when I first used MaxCDN… and could not resolve it. I see the same now. You don't need more than 12 hours in the account to test.
I have gone to simple web hosting now, using a remote server simply as an image server while running the site from my original office machine. This has turned out much more efficient for me than a CDN.
YMWV (your mileage WILL vary).
cheers.
Vince.
[hr]
Hey, thanks, I was just reading that page the other day while doing testing. I'm on a Mac… so I don't need tweaks to initcwnd, do I? (No idea how to do that.)
But no… if expires are respected, and storage follows expires, and I pre-load images by priming the CDN for guests… I would NOT expect a lot of cache MISSes. In fact, during testing I would expect 0% cache MISSes if primed.
Thanks for the tip on fastly… will keep in mind on next growth spurt here.
Cheers.
Vince, was the 70% cache-hit ratio achieved during the 24-hour test, or at the end of the 4-5 month test?
Each site is different, but in my experience I'd expect about 70% in the first day(s); after a week or two it should start approaching 90% or better.
Both… short- and long-term.
If only fixed assets are used… I would have expected it to remain well above 95% at all times (the CDN is not used for dynamic elements).
Why it would EVER drop lower than that, I still wonder.
For me it can only be explained by cache-purge cycles much, much shorter than the set expires.