First Byte Time

Is there a reason for, or something I can do to prevent, my First Byte Time always recording an “F”? When I retest it comes back as an “A”, but I am curious why the “F” shows up on each first run. Here are two tests run within seconds of each other for the same site:

http://www.webpagetest.org/result/110828_XP_1EBZR/
http://www.webpagetest.org/result/110828_D2_1EBZS/

Any issues here, or am I just being paranoid :)? Thanks in advance for your insight!

Looks like a case of back-end caching (and no, you’re not being paranoid). The first test looks like it hit the system when the page cache (or database caches) did not have the data cached, so it had to build up the cache (hitting disk, etc.). The second test benefited from the first test having caused the server to cache that work, so it came back much faster.
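If you want to see the cold-versus-warm effect outside of WebPagetest, a small script that hits the same page twice in a row will usually show it. This is only a rough sketch (the URL is a placeholder, and the timing includes DNS and connection setup, not just server think time):

```python
# Rough first-byte timing check: request the same URL twice and compare.
# The first request usually pays the cost of repopulating back-end caches;
# the second one should come back much faster.
import time
import requests

URL = "http://www.example.com/"  # placeholder: substitute your own page

def time_to_first_byte(url):
    start = time.time()
    # stream=True makes requests return once headers arrive, before the
    # body is downloaded; reading one byte waits for the body to start.
    with requests.get(url, stream=True) as resp:
        resp.raw.read(1)
        return time.time() - start

cold = time_to_first_byte(URL)   # likely slow: caches are empty
warm = time_to_first_byte(URL)   # likely fast: page/database caches are hot
print("cold: %.3fs  warm: %.3fs" % (cold, warm))
```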

To “really” fix the performance you would likely have to move off of your current hosting provider to one that either offers dedicated hardware or is a shared host that is tuned for performance.

If I remember right, you’re on WordPress. If so, it might be worthwhile to check these guys out: http://wpengine.com/ - They are a shared WordPress host, but one focused heavily on performance (SSDs for the storage and database, etc.).

I actually have dedicated hosting with WiredTree and I am really happy with them. I run memcached through W3 Total Cache (W3TC), and this first byte score of an “F” never happened until I added Cloudflare and started running my DNS through their system instead of pointing the DNS directly at my server at WiredTree.

I know that if I remove Cloudflare this will fix the problem, but I really want the added security they provide in keeping my site safer. Any thoughts on that? Also, what do you think of Cloudflare, or is there something better?

Ah, didn’t realize you were routing through Cloudflare. You should ping them to make them aware of the first byte time problem. It’s probably caches on their side that are getting repopulated, but they shouldn’t block the page for as long as they are (at least from the looks of it). I’m generally a big fan of their service and the work they are doing.

Have you checked your Google Webmaster Tools dashboard to see if it has impacted the crawl stats at all (particularly “time spent downloading a page” and “pages crawled per day”)?

I appreciate that advice, and I didn’t think about Webmaster Tools. The stats seem normal to me, I think, but I need to learn more about analyzing Webmaster Tools. Funnily enough, I had just bought a book about it, but I haven’t started reading it yet. I will get into it as soon as I finish reading “In the Plex”, which is awesome so far.

These are my stats; does anything stand out as bad? http://screencast.com/t/GePoLl42lbcQ

Thank you for such great insight!

The “time spent downloading a page” looks high. The spike in early August is insane, but the 1.9-second average is really high as well. That metric is the time it takes Googlebot to download the HTML for the pages it crawls (just the base page, so it’s a good proxy for first byte times).

For reference, the stats for WebPagetest (which is a somewhat simpler site, though it is mostly the bot crawling the forums, which do involve quite a bit of database access) are in the 100 ms range.

So how do I control this on my end? I feel like I have my site pretty well optimized. Could it be the 3rd-party code that puts the properties on the pages of my site? That is pretty much out of my control, I think.

Is it safe to assume that the 3rd-party code makes back-end requests to their service and populates the HTML before sending it down (i.e. it’s not done by JavaScript in the browser)? If so, then you’re mostly a victim of their performance (though it’s worth talking to them about it).

Something like New Relic ( http://newrelic.com/ ) is really helpful for figuring out exactly where the time is coming from.

If it is 3rd-party code that calls out to an external service, then there are a couple of things you can do:

  • If the service makes more than one HTTP request to their servers for each page, then putting your server as close to theirs as possible will save you a lot of time (each request probably takes at least 2 round trips to their server, so if it is on the other coast those round trips can add up quickly).

  • You can run a reverse proxy (like Apache Traffic Server) on your system, in between your app and their service. This will let you use keep-alives to their service over a small pool of connections (potentially cutting the round trips in half), and you can force queries to be cached for a period of time if the back-end data doesn’t change that frequently (even a few minutes will help if the data gets requested a lot). A rough sketch of the same idea in application code follows this list.
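To make the second point concrete, here is roughly what keep-alive reuse plus short-lived caching looks like if the third-party calls are made from your own code. It is only a sketch with placeholder names and URLs; in practice the reverse proxy (or a shared cache like memcached) is the better place for this, since an in-process dict is per-worker and unbounded:

```python
# Sketch: reuse a small pool of keep-alive connections to the third-party
# service and cache its responses for a few minutes, so most page views
# avoid the extra round trips entirely.
import time
import requests

session = requests.Session()   # keeps connections to the service alive and reuses them
CACHE_TTL = 300                # seconds; tune to how fresh the listing data must be
_cache = {}                    # url -> (expires_at, body)

def fetch_listing(url):
    """Fetch a third-party URL, reusing connections and a short-lived cache."""
    now = time.time()
    hit = _cache.get(url)
    if hit and hit[0] > now:
        return hit[1]                      # cache hit: zero round trips to their servers
    resp = session.get(url, timeout=5)     # reused connection skips the TCP handshake
    resp.raise_for_status()
    _cache[url] = (now + CACHE_TTL, resp.text)
    return resp.text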

Thanks for your time. I will look at your suggestions and see what I can do on my end.