Time To First Byte is longer after upgrading to a new server

Hi,

I have a WordPress blog available at this URL.

Until yesterday it was hosted on a server with only 2 GB of RAM and a slow CPU. I’ve migrated the blog to a new server with an 8-core Intel CPU and 8 GB of fast RAM.

Guess what? TTFB was 200 ms on the old server, and now it is over 800 ms. Here’s the link to the new results: http://www.webpagetest.org/result/140905_20_PRX/

I use W3 Total Cache for WordPress with APC enabled. I use KeepAlive. I even use CloudFlare as a CDN. I get a grade A on YSlow, and my blog seems pretty fast to me.

So I just can’t explain the grade F for TTFB. Could anyone give me a clue?

Thanks :slight_smile:

Is the database on the same server? Same web stack (Apache vs. nginx, PHP config, APC, etc.)? Same disk architecture? Were you using CloudFlare before as well?

Unfortunately, from the outside we can’t usually tell what goes into your TTFB.

If you just added CloudFlare, that could be skewing the picture, as the connections go to their server first and then back to yours (and a lot gets hidden in the green part).

Same web stack, with the database on the same server as the website.

different disk architecture (sdd before, sat3 now)

Yes, I was using CloudFlare before.

The TTFB grade ranges from A to F. I could get an F, run the test again, and get an A. If I use this curl command:
curl -s -o /dev/null -w "Connect: %{time_connect} TTFB: %{time_starttransfer} Total time: %{time_total}\n" http://mywebsite

then I get consistent results (~200 ms).
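
For completeness, here’s a quick loop to take several samples in a row (http://mywebsite is still a placeholder for the real URL):

# Take 10 TTFB samples in a row to check for variance
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "Connect: %{time_connect} TTFB: %{time_starttransfer}\n" http://mywebsite
done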

I won’t bother with WebPageTest any longer because I don’t think it’s always accurate, and my website is very fast anyway.

Thanks for helping!

By sdd do you mean ssd, and is sat3 a SATA 3 HDD (SATA is just an interface)? If so, that is a HUGE FREAKING difference, far more than the CPU and RAM (unless your entire app and database fit in RAM). SSDs are orders of magnitude faster than spinning disks, particularly for random I/O, which is the bulk of what web servers and databases do. SSDs can do upwards of 90,000 random operations per second, while disks can do somewhere around 100.
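
If you want to put numbers on that for your own disks, a quick fio random-read run reports IOPS directly (the job parameters here are just illustrative; adjust size and runtime to taste):

# Measure 4k random-read IOPS on the current volume (illustrative parameters)
fio --name=randread --rw=randread --bs=4k --size=256m --runtime=30 \
    --ioengine=libaio --iodepth=32 --direct=1

On an SSD you should see tens of thousands of IOPS; on a spinning disk, somewhere in the low hundreds.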

Yes, sorry, lots of typos in my previous post :dodgy:

It was an SSD; now it’s an HDD with a SATA 3 interface. However, I now have 8 GB of RAM vs. 2 GB before, and my whole website is cached in RAM, so the HDD should not have any negative impact. In fact, I find my website much faster on the new server (the opposite would have been quite upsetting, considering I’m paying twice as much!)

At a guess, then, it’s time to cache your database in memory too. https://launchpad.net/mysql-tuning-primer/trunk/1.6-r1/+download/tuning-primer.sh is a very good starting point.
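
If it’s MySQL with InnoDB (an assumption on my part), a couple of quick checks will show whether the buffer pool already holds the working set:

# Fetch and run the tuning script
wget https://launchpad.net/mysql-tuning-primer/trunk/1.6-r1/+download/tuning-primer.sh
sh tuning-primer.sh

# Compare buffer-pool disk reads (misses) against total read requests
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"

If Innodb_buffer_pool_reads keeps climbing relative to Innodb_buffer_pool_read_requests, the database is still going to disk.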

In addition, ensure you’ve got a decent opcode cacher installed if it’s a PHP site.
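
A quick sanity check that an opcode cache is actually loaded (this checks the CLI SAPI; confirm with phpinfo() for the web SAPI):

php -m | grep -i apc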

Finally, I’d throw CloudFlare away if you’re trying to tune your own site. Their vagaries will make it even harder to fix stuff. GeoIP tells me it’s serving from the US, and you’re testing from Paris, but that’s never certain. Can you tell I’m not a fan?

Yes, the database is cached in RAM too. I use APC for PHP.

I’ve already turned CloudFlare off when testing. For now, I think CloudFlare is awesome, especially for preventing DoS attacks.

If the performance itself is fast, then don’t worry about the grades specifically. The First Byte time grade doesn’t work well for CDNs because of how the grading logic and the CDN architecture interact.

To pick a target time, the algorithm looks at the socket connect time and assumes that is the RTT to the server. It then allows 100 ms on top of that for server processing to get an A, and drops a letter grade for every additional 100 ms.
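
In rough terms, the logic is something like this (a bash sketch of the rule as described, not WebPageTest’s actual source):

# A if TTFB <= connect + 100 ms; one letter grade lower per extra 100 ms
ttfb_grade() {
  local connect_ms=$1 ttfb_ms=$2
  local over=$(( ttfb_ms - connect_ms - 100 ))   # ms beyond the A target
  (( over < 0 )) && over=0
  local bucket=$(( (over + 99) / 100 ))          # round up to whole 100 ms steps
  (( bucket > 5 )) && bucket=5
  local grades=(A B C D E F)
  echo "${grades[$bucket]}"
}

ttfb_grade 50 140   # A: within 100 ms of the 50 ms connect time
ttfb_grade 50 260   # C: 110 ms over the A target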

When you connect through a CDN, the socket connect time for the base page is the RTT to their edge, which is much faster, so the estimate will already be off. Additionally, they will usually still need to establish a connection to your origin server (and maybe do a DNS lookup), and that all happens in the request TTFB (green bar) behind the scenes before your server ever sees the request.
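
One way to see the origin’s TTFB with the CDN out of the path is to point curl straight at the origin IP while keeping the Host header intact (mywebsite and ORIGIN_IP are placeholders):

# Hit the origin directly, bypassing the CDN edge
curl -s -o /dev/null -w "Connect: %{time_connect} TTFB: %{time_starttransfer}\n" \
     --resolve mywebsite:80:ORIGIN_IP http://mywebsite/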

There might be some tuning you can do with DNS TTLs and the like to make sure CloudFlare doesn’t have to resolve your origin very often (I’m assuming they do it on demand as needed; they could well be doing something smarter).
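
Checking what TTL is currently being served for the origin record is a one-liner (origin.mywebsite is a placeholder for however your origin is actually named):

# The second field of each answer line is the remaining TTL in seconds
dig +noall +answer origin.mywebsite A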