Base Line Time to First Byte

First of all, I think I am addicted to optimizing my sites, and thank you for the tool and all the information you have provided. I have been studying Time to First Byte with my Drupal sites and can see how it can be a function of server tuning, MySQL tuning, or heck, even the server software itself. The chemist in me had to keep going, so I built a static HTML site on my Virtual Private Server to see if there was a “baseline” Time to First Byte. This static site with one favicon shows the following timings:

DNS Lookup: 54 ms
Initial Connection: 89 ms
Time to First Byte: 109 ms

Is this Time to First Byte tunable or is it just the best the server can do?

There is no database or php. What can it be a function of?

Do the speed settings of the test affect this in any way?

Thanks for the response, just trying to get a fundamental understanding.

The initial connection time basically measures one network round trip, so it sets the floor: the first-byte time will never be faster than that. The difference between the connection time and the first-byte time is essentially the time the web server takes to process the request; to improve that you’ll need to tune the web server (or switch to a different one). The rest is network round-trip time, and the only way to speed that up is with a CDN.
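Reading the numbers the way the paragraph above does, the decomposition is simple arithmetic. A minimal sketch (variable names are mine, figures are from the question):

```python
# Figures from the question (a WebPagetest run against the static site).
dns_ms = 54       # DNS lookup
connect_ms = 89   # TCP handshake, roughly one network round trip
ttfb_ms = 109     # time to first byte

# The handshake approximates the round-trip floor; what remains after
# subtracting it from the first-byte time is the web server's processing
# time, which is the only piece server tuning can shrink.
server_ms = ttfb_ms - connect_ms
print(server_ms)  # 20
```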

That said, 20 ms is pretty respectable for that difference; I’m not sure I’d focus on it. You can try Nginx to see if it can respond faster. Also make sure the static files are reasonably fast to access (a memory cache or SSDs are great for this). If they’re on a SAN or even magnetic media, that can cost a few extra milliseconds per access.

The latency setting in the test configuration definitely impacts the times. The 50 ms default latency for the default connectivity profile is added directly to the network round-trip time. You can eliminate it from your testing, but it’s there because that’s a reasonable last-mile RTT for your users.
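As a rough sketch of how that shaping enters the measured numbers (everything except the 50 ms default is a hypothetical value I made up for illustration):

```python
# WebPagetest's default connectivity profile adds 50 ms of simulated
# last-mile latency to each network round trip.
shaped_latency_ms = 50   # default connectivity latency (from the docs)
server_rtt_ms = 39       # hypothetical unshaped RTT from agent to server

# The TCP handshake takes one round trip, so the measured connection
# time is roughly the real RTT plus the shaped last-mile latency.
connect_ms = server_rtt_ms + shaped_latency_ms
print(connect_ms)  # 89
```

Dropping the latency setting to zero would remove the shaped 50 ms from every round trip, but the resulting numbers would no longer reflect what a typical last-mile user sees.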