As you can see, the TTFB is 82-85ms for static files, which seems quite high to me considering the network latency is only 7ms.
Is there anything I can do to bring this down? My server runs Ubuntu 14.10 with Apache 2.4.12 (event MPM).
HostnameLookups is turned off, and MMAP and sendfile are turned on.
I’ve also enabled the disk cache, but I’m not sure if it’s working properly.
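For anyone comparing notes, those settings usually look something like this in an Apache 2.4 config. This is only a sketch assuming a stock Ubuntu layout with mod_cache and mod_cache_disk enabled; the cache path and depth values are illustrative, not tuned recommendations:

```apache
# Sketch, not a drop-in config (e.g. /etc/apache2/conf-enabled/tuning.conf)
HostnameLookups Off      # avoid a reverse DNS lookup per request
EnableMMAP On            # memory-map static files where possible
EnableSendfile On        # let the kernel send files without copying to userspace

# Disk cache (requires: a2enmod cache cache_disk)
CacheRoot /var/cache/apache2/mod_cache_disk
CacheEnable disk /
CacheDirLevels 2
CacheDirLength 1
CacheHeader on           # adds an X-Cache response header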
In the 2nd test you can see that when running a PHP script the TTFB increases by about 20ms over the 82-85ms. I suppose this is only a small increase, which is to be expected since the page has to be created dynamically?
Is there anything I can do to optimize the TTFB for dynamic files?
I’m already using FPM/FastCGI (UNIX sockets for the connection) and have the latest PHP 5.6 with the opcache.
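For what it’s worth, these are the OPcache settings I’d double-check first in php.ini; the values here are illustrative starting points for a PHP 5.6 setup, not tuned recommendations:

```ini
; php.ini -- sketch; tune to your workload
opcache.enable=1
opcache.memory_consumption=128       ; MB of shared opcode memory
opcache.max_accelerated_files=10000  ; raise if you have many scripts
opcache.validate_timestamps=1        ; set to 0 in production and reload FPM on deploy
opcache.revalidate_freq=60           ; seconds between timestamp checks
```

You can confirm the cache is actually hitting by checking `opcache_get_status()` from a PHP script.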
The network latency looks to be higher than 7ms. I usually use the socket connect time as a good proxy for the RTT, and it looks like that was about the same as the rest of the request time (splitting the TTFB roughly in half).
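One quick way to check this yourself is curl’s timing variables: `time_connect` is roughly one RTT, and whatever remains of `time_starttransfer` is server processing plus the first byte’s transfer. A sketch with made-up sample numbers (replace them with real output from the commented curl command):

```shell
# Sample timings in seconds, as printed by e.g.:
#   curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer}\n' http://your-server/file.css
connect=0.042        # ~1 RTT (TCP handshake)
starttransfer=0.085  # TTFB as curl sees it

# Server-side share of the TTFB = starttransfer - connect
awk -v c="$connect" -v t="$starttransfer" \
    'BEGIN { printf "rtt ~= %.3fs, server ~= %.3fs\n", c, t - c }'
```

If the connect time alone is near half your TTFB, the network, not the server, is where the milliseconds are going.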
The trick I use to speed up the network part is to set up a CDN in front of the website. Besides caching the static files and offloading all that traffic from your server, CDNs usually have many points of presence, some of them likely closer to your users than your server is, so users may see shorter TCP connection setup times than when connecting directly to your server. The edges maintain long-lived connections to the origin, so requests from the CDN edge to your server go over already-open connections, with no setup penalty.
This helps a lot with the TCP connection part, and although it slightly impacts the TTFB since you add an extra hop, overall it should feel faster. But don’t take this for granted, you should definitely measure the before/after with real user monitoring tools like NewRelic.
On the application front, you can try some performance optimizations inside your application’s architecture: caching the objects you need most often, database indexes (if you have a DB), profiling the application, etc., but this depends a lot on your application.
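To make the index point concrete, here is the kind of thing I mean; the table and column names are hypothetical, and `EXPLAIN` is how you verify the index is actually used:

```sql
-- Hypothetical: posts are frequently looked up by author, newest first
CREATE INDEX idx_posts_author_date ON posts (author_id, created_at);

-- Check the query plan now uses the index instead of a full table scan
EXPLAIN SELECT * FROM posts
WHERE author_id = 42
ORDER BY created_at DESC
LIMIT 10;
```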
NewRelic is also excellent at this and I can only recommend it.
I’ve found that my static file serving is now very quick after upgrading to Ubuntu 15.04 and running some new tests from a different server using the native connection type.
I’m looking for ways to make PHP faster now because it has a TTFB which is over 10x higher, despite using opcode caching…
Which suite do you recommend for NewRelic?
They have several products but I’m not sure which one to use as a system admin and website manager.
We don’t develop any of the web software ourselves, so I’m more interested in back-end tuning options and in finding out which third-party plugins and such are slow.
That said, it is still somewhat geared towards developers and sysadmins who know how the app works and how to tune it. You may be able to figure out which plugins are causing slow behavior, but it doesn’t automatically deconstruct WordPress (you’d have to figure out which plugins are making the calls, and it may even need instrumentation added).
CDNs only really affect first-time visits, since correctly tuned servers send Expires headers that let browsers cache all objects except dynamic content (usually only text/html).
CDNs add complexity + cost for very little real effect on correctly tuned servers.
CDNs are usually the first thing I strip out of client sites when they contract me to speed up their sites.
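For reference, the expires-header tuning mentioned above is typically done with mod_expires, roughly like this (a sketch only; requires `a2enmod expires`, and the max-ages should match your release cycle):

```apache
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType text/css               "access plus 1 month"
    ExpiresByType image/png              "access plus 1 month"
    ExpiresByType application/javascript "access plus 1 month"
    # Dynamic HTML gets revalidated on every visit
    ExpiresByType text/html              "access plus 0 seconds"
</IfModule>
```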
You’re seeing sub-second load time for your site, so you’re doing better than the majority of sites on the net.
After you have an entire CMS Stack running - Apache/PHP/MariaDB/WordPress - then you might find some value going through my Udemy course about fixing WPT scores.
The course is “WordPress Site Analysis Masterclass” + check the free preview on decoding + fixing WPT TTFB.
The big item here is correct caching at the PHP + WordPress level, then for high traffic sites get into MariaDB (MySQL that works) storage engine selection + tuning.
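To give a flavor of what “storage engine selection + tuning” means in practice, the first knob for InnoDB is usually the buffer pool. These values are illustrative; size them to your working set and available RAM:

```ini
# /etc/mysql/my.cnf (or a conf.d/ snippet) -- illustrative sketch
[mysqld]
default-storage-engine = InnoDB
innodb_buffer_pool_size = 1G    # ideally large enough to hold the hot data set
innodb_log_file_size    = 256M
query_cache_type        = 0     # the query cache often hurts under concurrency
```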
I’m currently watching your tutorial and noticed you set nodiratime and noatime. (I’m not 100% sure, since the video isn’t in HD, so it’s a bit of guesswork as to what the text says.)
noatime already includes nodiratime, as stated on: fstab - ArchWiki
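In other words, the nodiratime flag is redundant; in /etc/fstab it’s enough to write (the UUID here is a placeholder):

```
# /etc/fstab -- noatime already implies nodiratime on current kernels
UUID=xxxx-xxxx  /  ext4  defaults,noatime  0  1
```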
Like gijs007 I’m striving to reduce TTFB except gijs007 is more clued up than me with this optimisation stuff and I got lost after the second post to this thread!
If there is one thing I have learnt recently, it is that going for the cheapest hosting package, even with a host with generally great reviews, is not a good idea, especially when the site is a key component of its business. I have worked hard to improve the grades in WPT, but the TTFB continues to vary wildly, and in practice hitting any of the pages on the website often seems to take an age to load. Hence I have concluded the shared hosting is largely to blame, although I’m sure there are still settings which could be played around with to wring out a little more speed beyond using a cache plugin, enabling Gzip, etc.
However, given my rather limited knowledge, would it not be easier to upgrade to a VPS hosting package? If so, how would you rank the following hosting setups for positive impact on TTFB and overall speed, assuming all other variables such as uptime, service, etc. being equal:
a VPS plan with a host that are very local to me but use standard discs
a VPS plan with a host that are still in the UK although with their servers perhaps 100 miles from me but use faster SSD drives
a WordPress optimised host (not sure if these are shared/VPS or if disc/SSD or locality still remain important with this option?)
a shared hosting plan with the datacenter in Amsterdam using standard discs, just like how the website is being hosted currently
If it’s worth noting, the audience for the website is only local/regional rather than inter/national.
There is no direct correlation between server type/performance and TTFB. One can make some assumptions of course, but there are many factors at play, each of which is specific to your unique script, database size and configuration, theme (not so much for TTFB), dynamism of content, audience location, PHP configuration, OpCode caching and so on.
That said, rule of thumb:
When picking a server, choose RAM over SSD every time; it’s vastly faster.
Shared hosting can be made very fast with the right page and object caching combined with your CDN setup. It takes some time to tune, but the results can be very impressive. CDNs offer many advantages, not just a distribution of locations, so even with your local audience you may find superior performance using a CDN optimally.
Again though, it all depends on the site’s unique setup including amount of traffic and type of traffic.
For some reason, Udemy creates a LoFi version of all videos + defaults to using the LoFi version.
[hr]
I usually set nodiratime also, for self documentation.
You are correct: recent kernels treat noatime as implying nodiratime, so setting noatime alone is enough.