Hi All I hope someone can help me.
I am a tree surgeon and have built my website myself so please go gentle
My website is hosted by Vidahost, who are very helpful. I had a slow TTFB of 1.4s so I migrated my site to their cloud. However my TTFB is now almost 2.5s, does anyone have any suggestions? (It was only migrated this morning, so maybe I should leave it for a bit?)
I have attached the .csv file, is this helpful?
thanks,
martin
I guess the slow TTFB is related to Wordpress generating the webpage on the fly each time someone visits your website. So the speed of your site is primarily dependent on [1] the build quality of the template you are using (or Wordpress in general) and [2] the hardware (processing power to build the webpage) Vidahost is using.
[1] the template loads 23+ CSS and JS files. Some can probably be eliminated. If you do have some technical skills maybe you can edit the template and remove unused CSS and JS files.
Downloading all those files separately also has a performance impact. You can concatenate them into one file (I guess WP has some plugins for that), or use Vidahost's Let's Encrypt SSL support and go HTTPS + HTTP/2 (if Vidahost supports this), which handles downloading multiple files more efficiently.
[2] Find yourself a Wordpress caching plugin so the pages don't have to be generated every time someone visits a page. Your TTFB will be (blazing) fast.
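To picture what such a caching plugin does under the hood, here is a rough PHP sketch (the cache path and the one-hour lifetime are just assumptions of mine, not taken from any particular plugin; real plugins also handle cache invalidation on updates, logged-in users, and so on):

```php
<?php
// Minimal sketch of what a WordPress page-caching plugin does (hypothetical
// cache path, not a drop-in plugin): serve a saved HTML copy if one exists,
// otherwise let WordPress build the page and save the output for next time.

$cacheDir  = __DIR__ . '/wp-content/cache/pages';                 // assumed location
$cacheFile = $cacheDir . '/' . md5($_SERVER['REQUEST_URI']) . '.html';

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < 3600) {
    // Cache hit: no PHP templating or MySQL queries run, so TTFB drops to
    // roughly static-file speed.
    readfile($cacheFile);
    exit;
}

// Cache miss: buffer whatever WordPress generates and store it for next time.
ob_start(function (string $html) use ($cacheDir, $cacheFile): string {
    if (!is_dir($cacheDir)) {
        mkdir($cacheDir, 0755, true);
    }
    file_put_contents($cacheFile, $html);
    return $html;
});
```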
http://www.webpagetest.org/result/170504_77_11Z5 - asset #1 is a bit slow to serve, at nearly 2 seconds.
And this is still very fast compared with most client sites I work on.
A WebPageTest run on one of the sites I host shows how fast asset #1 can serve, for comparison.
- Removing files will help some + won't affect the time to serve asset #1.
Since the first visit is slow to serve asset #1 + subsequent visits are fast, this suggests:
Your WordPress caching may or may not be working; you'll have to test this + see.
Same with your PHP OPcache. You'll have to test to ensure it's working correctly + has enough memory to work in all cases.
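If it helps, a rough PHP sketch for checking both (the URL is a placeholder for your own front page; run the script on the server itself):

```php
<?php
// 1. Is OPcache loaded, enabled, and does it still have memory headroom?
if (function_exists('opcache_get_status')) {
    $status = opcache_get_status(false);
    printf("OPcache enabled: %s\n", ($status && $status['opcache_enabled']) ? 'yes' : 'no');
    if ($status) {
        printf("OPcache memory used/free: %d / %d bytes\n",
            $status['memory_usage']['used_memory'],
            $status['memory_usage']['free_memory']);
    }
} else {
    echo "OPcache extension is not loaded\n";
}

// 2. Is the page cache doing anything? Two back-to-back requests:
//    the second should show a noticeably lower TTFB if caching works.
foreach ([1, 2] as $attempt) {
    $ch = curl_init('https://example.com/');            // placeholder URL
    curl_setopt_array($ch, [CURLOPT_RETURNTRANSFER => true]);
    curl_exec($ch);
    printf("Request %d TTFB: %.3f s\n", $attempt,
        curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME));
    curl_close($ch);
}
```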
- Your hosting provider has Keep Alive turned off, so request they fix this.
If they say no, switch hosting. You can Google why Keep Alive is essential.
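One quick way to check Keep Alive yourself before contacting them (a sketch only, with a placeholder URL; the WebPageTest grade card remains the definitive check):

```php
<?php
// Make a single HTTP/1.1 request and look at the Connection response header.
// "Connection: close" means the server drops the connection after every
// request, forcing a new TCP handshake per asset.

$ch = curl_init('http://example.com/');                  // placeholder URL
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true,
    CURLOPT_NOBODY         => true,
    CURLOPT_HTTP_VERSION   => CURL_HTTP_VERSION_1_1,
]);
$headers = curl_exec($ch);
curl_close($ch);

if (stripos($headers, 'Connection: keep-alive') !== false) {
    echo "Keep-Alive appears to be enabled\n";
} elseif (stripos($headers, 'Connection: close') !== false) {
    echo "Server closes the connection after each request\n";
} else {
    // HTTP/1.1 defaults to keep-alive when no Connection header is sent,
    // so fall back to the WebPageTest grade to be sure.
    echo "No explicit Connection header; check the WebPageTest grade\n";
}
```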
- If your WordPress + PHP caching is correct, then next tuning will target Apache + MySQL/MariaDB.
Since your site appears to be a virtual site running alongside many other sites, your site's speed + stability (the ability to serve fast under load) will be affected by other sites on this machine.
If your siteâs generating small profits, just leave it as is.
If your siteâs generating large profits, switch to WordPress optimized hosting.
- Get rid of all http://myzone.96agsbqcsnyrel6wl.maxcdn-edge.com references.
CDNs tend to slow down well tuned sites.
In this case, maxcdn is serving assets very slowly.
Look at the time required to serve asset #3 off maxcdn: 500ms (so half a second).
Look at the time required to serve the same file off one of my servers. I just copied this file from maxcdn to one of my servers for a quick speed test.
http://www.webpagetest.org/result/170504_RF_12QQ shows the difference.
So maxcdn == 500ms.
My server == 236ms.
If you apply this time reduction across all your assets, you can see that moving to hosting tuned for WordPress sites will make a huge difference.
And keep in mind, when you're serving static files like the .css file I chose, you're really testing Linux filesystem tuning rather than anything to do with WordPress.
If the underlying Linux filesystem tuning is slow, then all assets will tend to serve slowly.
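If you want to repeat that CDN-vs-origin comparison without WebPageTest, something like this works (both URLs are placeholders; point them at the CDN copy + the origin copy of the same file, and run it a few times, since a single sample is noisy):

```php
<?php
// Time one static asset from two different hosts and print TTFB + total time.

function timeAsset(string $url): array {
    $ch = curl_init($url);
    curl_setopt_array($ch, [CURLOPT_RETURNTRANSFER => true]);
    curl_exec($ch);
    $result = [
        'ttfb'  => curl_getinfo($ch, CURLINFO_STARTTRANSFER_TIME),
        'total' => curl_getinfo($ch, CURLINFO_TOTAL_TIME),
    ];
    curl_close($ch);
    return $result;
}

// Placeholder URLs - substitute the CDN and origin copies of the same file.
$targets = [
    'CDN'    => 'https://cdn.example.com/some-asset.css',
    'Origin' => 'https://www.example.com/some-asset.css',
];

foreach ($targets as $label => $url) {
    $t = timeAsset($url);
    printf("%-6s TTFB %.0f ms, total %.0f ms\n", $label, $t['ttfb'] * 1000, $t['total'] * 1000);
}
```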
I'd like to nuance this statement about CDNs.
A CDN can certainly help make your site faster, especially when you have visitors from all over the world. The CDN will then serve your assets from an edge node closer to the user than your hosting.
In your example your self-hosted asset is indeed faster if you test US → US. treeiup's asset is hosted in the UK → US, so you can expect a longer TTFB.
If I were to test your self-hosted asset from the EU I would also get a longer TTFB.
If you have a CDN, the assets would then be cached in the EU and served more quickly to subsequent users.
But if your users are mainly located in the UK and your hosting is too, it won't make that much of a difference.
Best approach to all site tooling, including CDNs, is to understand what problem a specific technology tends to address.
If you think about CDNs + the entire "Edge Server Proposition", all an "Edge Server" can possibly do is reduce the latency of connections.
If you're using HTTP/2, then all assets multiplex over HTTP/2, provided the HTTP/2 config is correct + Keepalive is enabled + working correctly. WPT has a special report-card slot just for Keepalive, which is a good indicator of whether Keepalive is working.
This means all you can realistically save is a few milliseconds on each of the handful of connections a browser opens (most major browsers currently open around six connections per host over HTTP/1.1).
This only applies to the first visit.
On subsequent visits, for correctly tooled sites, only the HTML component is served, as all other assets (.css + .js + common images) should be cached from the first visit. If there are other assets, then you might save the same few milliseconds on each of the remaining connections.
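Whether repeat visits really skip those assets depends on the caching headers each asset is sent with; a rough header check looks like this (the URL is a placeholder for one of your own .css or .js files):

```php
<?php
// Fetch only the response headers of a static asset and print the ones that
// control browser caching. Missing Cache-Control/Expires means the browser
// re-requests the file on every visit, so origin latency matters much more.

$ch = curl_init('https://example.com/wp-content/themes/example/style.css');   // placeholder
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true,
    CURLOPT_NOBODY         => true,
]);
$headers = curl_exec($ch);
curl_close($ch);

foreach (['Cache-Control', 'Expires', 'ETag', 'Last-Modified'] as $name) {
    if (preg_match('/^' . $name . ':\s*(.+)$/mi', $headers, $m)) {
        echo $name . ': ' . trim($m[1]) . "\n";
    }
}
```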
Considering the headaches CDNs cause (what you see and what your visitors see tend to differ), CDNs tend to be a debugging nightmare.
Especially when you have high traffic + the CDN slows down or glitches out... and they do, a lot of the time.
The problem is trying to debug CDN-related issues when conversions drop or zero out for no apparent reason.
Rather than using a CDN, it's better to tune your LAMP stack until your site is blazing fast.
When I take on new clients, one of the first activities I go through is removing all cruft - CDN + Proxy (NGINX, Varnish, Squid, etc.) + load balancers + DOS/DDOS hardware mitigators.
All this can be done far better at the LAMP level. Better meaning setups that are stable + can be debugged by mere mortals when conversions circle the drain.
I understand that HTTP/2 removes a lot of RTT latency with multiplexing, but while multiple assets can be sent and received over the same connection, the download time (travelling distance) will still play a (little) role here, I think.
But you are absolutely right that using a CDN isn't the whole solution, and not having one isn't the biggest problem for slow sites by far. And for small, local sites it's probably not necessary. LAMP tuning, good caching (headers), good coding, etc. will gain much more.
PS: I'm not trying to argue just for the sake of it, I just think it's an interesting discussion.