TTFB & Page speed issue

I have activated Cloudflare for all my domains, but the "Effective use of CDN" check gives an X for all of them when I choose Mumbai, India as the test location. It works fine for other Asian locations such as Singapore.

advertisingphotography.in

Singapore: [WebPageTest test result]

Mumbai: [WebPageTest test result]

thesparklingwedding.com

Singapore: [WebPageTest test result]

Mumbai: [WebPageTest test result]

I have included test reports for two domains, but all of my domains give the same result: the CDN is not working properly for India, and all my sites have suddenly become slow.

Google's PageSpeed checker tells me to reduce server response time, but I can't find the issue behind it. I am using SiteGround's GoGeek hosting.

To test server speed, I installed a basic theme and optimised that site, but I am still getting an F for TTFB, and Google PageSpeed still asks me to reduce server response time.

How can I reduce server response time? This is the biggest issue for me right now. Please help.

Detecting Cloudflare in Mumbai is a bug on my side. The Mumbai agent is running the newer cross-platform agent, and it looks like it is not detecting the CDN-specific headers correctly on an HTTP/2 connection. I’ll have it fixed shortly.

As far as first byte times go, the actual server response time looks pretty good, but if the waterfalls are accurate, the initial DNS lookups are pretty slow. I’d recommend enabling packet captures (tcpdump), which are available in the advanced settings, and re-running the tests. Then look at the timings for the DNS lookup in the packet capture and make sure they really are that slow.
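If you want to separate DNS time from server think-time yourself, curl’s timing variables give a quick breakdown. This is a generic sketch, not part of anyone’s test setup above; substitute your own site’s URL — the `file://` fallback is only there so the sketch runs anywhere:

```shell
#!/bin/sh
# Break one request into phases with curl (all times in seconds).
# Pass your site URL as $1; the file:// fallback keeps the sketch self-contained.
url="${1:-file:///etc/hostname}"
curl -s -o /dev/null -w 'dns_lookup: %{time_namelookup}\nconnect:    %{time_connect}\nttfb:       %{time_starttransfer}\ntotal:      %{time_total}\n' "$url"
```

A large gap between `connect` and `ttfb` points at the server/application; a large `dns_lookup` points at the resolver.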

Given how much Cloudflare touts their DNS performance I’d ping their support team with the test data and packet captures and have them investigate: https://blog.cloudflare.com/cloudflare-fastest-free-dns-among-fastest-dns/

The CDN check fix should be live in the next hour or so, but as I was testing I saw a very different TTFB problem that looks like it comes from the application code: [WebPageTest test result]

A CDN can’t help with that 2.5 seconds; that is going to come down to the application itself and the hosting (mostly the hosting).

Since it looks like the page is WordPress, you can try one of the caching plugins, which can help hide the TTFB issues, but actually solving them will probably come down to database performance.

You’re likely misunderstanding the point of CDNs.

As @pmeenan said, there’s nothing your CDN can do to fix your HTML asset speed.

[WebPageTest test result] shows this at 2.5 secs.

Steps to fix:

  1. Dump CloudFlare

  2. Dump NGINX (don’t even think about it)

  3. Set up a LAMP stack using embedded PHP, likely a package like libapache2-mod-php7.1 for Ubuntu + Debian.

  4. Tune your LAMP stack, which includes tuning SSL.

CloudFlare fails to set Strict-Transport-Security. Fixing this allows HTTP/2 to start faster on subsequent page visits.
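For reference, if you terminate TLS on your own Apache (as in the LAMP setup above), HSTS is a one-liner with mod_headers. A sketch; the max-age value is a common choice, not a requirement:

```apache
# Requires mod_headers (a2enmod headers).
# Send HSTS on HTTPS responses so returning visitors skip the HTTP round trip.
<VirtualHost *:443>
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
</VirtualHost>
```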

[WebPageTest test result] is a client site I host. This is a good example of the speed you’re looking for when serving the HTML asset - 158ms vs. your 2.5s.

CDNs should never cache HTML assets, as they can change; a correctly tooled CDN leaves your HTML component alone, so all visitors see any changes instantly.

This means only LAMP tuning + site tooling determine your HTML asset generation + serving speed.

Since you’re running WordPress, the process for tuning sites is well defined… Just find a tuning guide + get started.

BTW, a correct tuning guide will start at the bottom of the LAMP stack, so a correct/useful guide will start with items like…

  1. use ext4 file system + mount options noatime, dioread_nolock.

  2. move /tmp off disk into memory (tmpfs)

  3. verify your PHP Opcache runtime stats (I target 50% free memory keys + memory segments)

  4. periodically run mysqltuner + mysql-tuning-primer + fix whatever they say to fix

  5. enable SAVEQUERIES in your wp-config.php file + check Query Monitor plugin output each time you make a code change + avoid themes/plugins producing excessive database thrash, like the Marketer’s Delight theme (122 SELECTs to generate a page) or ACF (which can generate 100s-1000s of SELECTs if incorrectly used).
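For item 3, a hedged starting point for the Opcache settings in php.ini. The sizes are illustrative assumptions, not tuned values; verify against the runtime stats (`opcache_get_status()` in PHP) and adjust until you hit the free-memory targets above:

```ini
; php.ini -- illustrative Opcache starting point, not tuned values
opcache.enable=1
opcache.memory_consumption=192      ; MB; grow until ~50% of memory stays free
opcache.max_accelerated_files=10000 ; raise until key slots stay ~50% free
opcache.validate_timestamps=1
```

For item 5, the switch is `define( 'SAVEQUERIES', true );` in wp-config.php; Query Monitor then shows the per-page query counts mentioned above.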

So tune your LAMP stack + tune your WordPress code, then look at the external sites you’re referencing, which are also adding a huge amount of time to your page renders.

If you must access external sites, do so via JavaScript after your page-load or document-ready event has fired (which one depends on what you’re trying to accomplish).

In general I’d agree with tuning the stack, but most of the other initial steps carry big giant “it depends” warnings, and a lot of it is going to depend on the hosting (i.e. if it is running on a cheap shared hosting provider, then step one is probably going to be to change the hosting).

  1. Cloudflare - disabling it is a good idea while you work on the underlying performance, so you aren’t also debugging the CDN interactions. Once the app is working well, it could be worth turning it back on to serve the static assets from the edge, particularly if you have a global audience or are hosting somewhere other than where the audience is.

2/3) Dump NGINX? The site may not be on nginx (Cloudflare uses it), but if it were, I wouldn’t necessarily recommend moving off of it. Nginx + php-fpm is easily the fastest config I have seen for serving PHP, and it scales WAY better.

Apache with mod_php is stuck running in prefork mode, which means each connecting client gets a dedicated Apache process for the life of the connection (even if it is idle). It is really easy to run out of processes really quickly.

I run a few high-volume PHP sites on Nginx (webpagetest and httparchive.org most notably), with both hitting over 500 requests per second and still serving responses in under 10ms.
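For reference, the Nginx + php-fpm pairing described above is wired together with a `fastcgi_pass`. A minimal sketch; the socket path and document root are assumptions that vary by distro and PHP version:

```nginx
# Minimal Nginx -> php-fpm handoff (paths are distro-dependent assumptions)
server {
    listen 80;
    root /var/www/html;
    index index.php;

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```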

Yup

Nope.

Nope. Run in FPM mode for both better resource use and simpler monitoring.

Well, LEMP, but yup. Headers, too…

There’s very little point in tuning your site if you’re proxying it through cloudflare. You’ll not see any change.

There’s something a bit weird about this, as nothing then happens until 0.5s, and the start render time is about 3 seconds. Even though you’re using HTTP/2, I’d still cut down on the number of resources: 116 is a bit large… combine CSS, and only load the fonts you actually use.

Well the most important part of that is caching, which is only peripherally covered by tuning.

Nope. The whole point is to use the disks as little as possible, so any choice of file system will make only a trivial difference.

Nope. PHP doesn’t necessarily use /tmp at all. You need the caching and session resources to be in tmpfs ( and of course you need enough spare memory to run it… bugger all use if it’s in swap ).
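If you do move PHP’s scratch space into memory, it’s the session and cache paths you point at a tmpfs mount, e.g. in php.ini. The path here is an illustrative assumption:

```ini
; php.ini -- point session files at a tmpfs mount (illustrative path)
session.save_handler = files
session.save_path = "/dev/shm/php-sessions"
; the directory must exist and be writable by the php-fpm user
```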

Well, more exactly, constantly monitor them so you can proactively modify required resources.

It would help if you understood the information passed back to you… for example, is your query cache scaling properly?

Too simplistic. I have plenty of WP ( and, even worse, Magento ) servers running at thousands of qps and delivering acceptable performance. It’s all down to properly caching MySQL.

You’ve missed off a few things from your list:

  • choice of database: Percona ( MariaDB uses it ) or Oracle; version ( 5.5, 5.6, 5.7 ); MyISAM vs. InnoDB; replication?; effect of backup strategy…
  • post processing: pagespeed for example?
  • static zipping of compressible resources
  • use of lazy load and fpc plugins
  • correctly sizing your server ( memory for database, cpu power for PHP )

But right at the top, if you’re seriously looking for performance, you have to monitor your server, and the resources it uses. This allows you to be proactive with your changes, and more importantly to know how your platform is doing.

Only testing will tell the truth.

For example with NGINX vs. libapache2-mod-php vs. FPM.

Test all for speed + balance gains vs. complexity of maintenance.

Last time I tested Apache-2.4.25 + libapache2-mod-php against latest NGINX, Apache throughput was roughly 30% faster in raw reqs/sec.

FPM adds some performance gain + a mass of complexity.

Being able to install libapache2-mod-php + have PHP work fast… well… very nice…

If you’ve correctly tuned Opcache, once any .php file is hit the first time + caching begins, performance at high traffic levels becomes close between raw Apache + FPM.
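The hit rate being discussed is just hits / (hits + misses) from the Opcache status. A trivial sketch with made-up numbers (read your real counts from `opcache_get_status()` in PHP):

```shell
# Opcache hit rate = hits / (hits + misses).
# The counts below are invented for illustration.
hits=6930
misses=70
rate=$(( 100 * hits / (hits + misses) ))
echo "opcache hit rate: ${rate}%"
```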

I manage 1000s of sites with 0 staff, so my requirements are speed + simplicity + works every time.

MariaDB was an item I missed. You’ll be truly astounded at the performance increase of replacing MySQL with MariaDB.

I’d say primary WordPress speed revolves around caching.

I just setup a test site today with https://wptest.io data to do some speed testing.

Here are some curious numbers.

10 reqs/sec - in an LXD container with all default LAMP config files, including MariaDB with its default config.

70.18 reqs/sec - Opcache enabled with 90%+ free keys + 90% free segments, so 99% cache hit rate.

242.39 reqs/sec - WP Super Cache enabled with defaults.

3452.29 reqs/sec - WP Super Cache mod_rewrite enabled + all recommended settings enabled.

Here’s the challenge. Each site’s work load is different, so you have to test everything.

Even WP Super Cache. Never just turn it on + imagine it’s helping. Always test to ensure you have a performance gain.

Where do I start?

  • if you run PHP in FPM mode, then you are able to tune php separately from Apache. Not only does this simplify your configuration ( your web server needs completely different resources to php ), but it allows you to get far more out of your server. If you think this is a more complex solution, then you’re doing it wrong.
  • nginx will always out-perform apache on equivalent hardware when the web server is the ultimate bottleneck. As that bottleneck is usually the raw CPU power needed for PHP in the majority of cases, it’s the simplicity of the configuration that wins the day. Plus, of course, the cr@p often added to .htaccess by developers doesn’t work… an inadvertent win there.
  • there really isn’t much of a performance increase between the Oracle and Percona ( as used by MariaDB ) InnoDB engines. It’s just that by default the Oracle one is set up to use 8MB of memory… or at least it was last time I used it. Percona is faster - up to 50% usually - once everything is configured properly. I use it both because of that gain, and to keep away from Larry… win/win.

As with all servers, you do need to tune them to the max before adding any kind of caching - after all, the caches will be empty at some point, and the server has to survive that period to get to its ‘normal’ state.

You’ve found an infrastructure that works for you. Personally, I spend all of my spare time trying to improve what I’ve got. That’s why I shifted away from the bloat of apache web server years ago, which required a lot of re-learning after the 15 or so years I’d been using it: right back to the 1.3 days. My current mix has also been able to handle the real pain over the last year or two of managing servers running php 5.3, 5.4, 5.5 and 5.6 concurrently due to CMS constraints, securely and in a performant manner.

The most important tool that I use is munin. Monitoring is king. Without it, the only way you can ‘manage’ thousands of sites with no staff is to massively over-spec, then set them adrift.

As an afterthought, what difference does running WP Super Cache on tmpfs make? I find it quite good with nginx… here’s a random one: [WebPageTest test result]. Obviously it does matter that there are enough resources to make this worthwhile.

In most cases, debating whether to use Apache or Nginx for scalability reasons is like debating whether to drive your single kid to school in a minibus or a bus. Most sites never hit the capacity limits of their web servers, especially when a CDN is being used.

When you do it for a living, that’s not really true. Your comments on the use of CDNs are only partly true when using any common CMS - WordPress, Drupal, SilverStripe… the reasons being:

  • a CDN can handle the (possibly heavy) lifting of static resources. They’re really good at that, and can reduce volume traffic on your network. It is unlikely that you’d notice a difference in web server load unless you’re desperately short of memory or network capacity.
  • a CDN makes no ( maybe trivial ) difference to the speed of generation and delivery of the HTML skeleton that all of these resources fit into, as this requires server side processing, and database access to generate. Only proper sizing and tuning of your server ( and code optimisation ) will directly affect that.

( yes, I know this is an oversimplification but there you go… )

It is unlikely that anyone using a CDN would be in the category of trivial users that you describe, who would probably be best off using shared resources and have little need for optimisation or WPT. For them, just installing an FPC plugin ( one that can compress/combine CSS, JS and HTML, as they all shrink by much more than a factor of 5 - as opposed to images, which just don’t! ) is probably the best you can do.

I do it for a living, and that’s exactly what I see. I think you’re mixing the web server with whatever backend is responsible for generating dynamic content (PHP, Java, Scala, C#, etc.). Those should be considered separate services, just as the database server is considered a separate service from, for example, PHP. When you measure the performance and scalability of the web server and you mix those two, you’re not really measuring the web server but part of the application stack. In most cases the bottleneck is in the content generators, not in the web server itself. Hence my comment on comparing the performance of Apache and Nginx.

CloudFlare is free, and many paid CDN plans are as low as $20/mo. These days most of my (potential) clients already use a CDN when they approach me, even if they’re a very small e-commerce site.

Pardon me for being picky (I suspect you know this and oversimplified your comment), but an FPC’s role is to cache dynamic content, not to combine JS/CSS (which these days is an anti-pattern anyway).

My expertise is in open source servers, specifically PHP-based CMSes. In this environment, it is almost never necessary to use more than a single server to host a website, and database + PHP mesh extremely well, one requiring CPU and the other memory to perform well. There really is no problem measuring the performance of each service - especially when each is kept separate, as with nginx, php-fpm, and mysql. Any decent monitoring package will let you identify bottlenecks in the internals of each, and obviously the performance gain of grabbing database results directly from memory instead of via a TCP stack can be significant ( especially with websites that may make hundreds of DB queries per page ).

I can’t comment on the internals of windows servers, as I stopped using them a decade ago.

CloudFlare may be free as in $$, but it’s Google’s definition of free, which means you’re part of the product. Technically the free tiers are terrible: they have to proxy the HTML, as they take everything over, which adds to the TTFB - you canna change the laws of physics. Most other setups can provide a pretty decent service with expiry headers.

Even with HTTP/2 there is still extra overhead in delivering hundreds of resources. Reducing them ( and, even better, storing them on the server pre-compressed, along with decent expiry headers ) is always a good idea, as long as the combined files are regularly used across the site. As there are FPC plugins which also offer these services, it’s worth trialling them to see what works for your site / infrastructure.
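Storing pre-compressed copies with long expiry headers looks roughly like this in Nginx. A sketch only; `gzip_static` needs the module compiled in (stock distro packages include it), and the file types and expiry are illustrative:

```nginx
# Serve foo.css.gz instead of compressing foo.css on every request,
# and let clients cache static assets for a month (values illustrative).
location ~* \.(css|js|svg)$ {
    gzip_static on;     # picks up pre-built .gz files, e.g. made with `gzip -k9 foo.css`
    expires 30d;
    add_header Cache-Control "public";
}
```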

Anyway, you’re allowed your opinion, just as I’m allowed mine.

The only way to know for sure the effect of a CDN is…

  1. Tune your LAMP stack so your site can produce a minimum of 1000 reqs/second natively (no CDN). My preference is much higher.

  2. Then add a CDN + see the effect on site speed.

Only testing tells the tale.

[quote=“GreenGecko, post:8, topic:10376”]
The most important tool that I use is munin. Monitoring is king. Without it, the only way you can ‘manage’ thousands of sites with no staff is to massively over-spec, then set them adrift.[/quote]

I am curious to know if there’s a special way you use munin. Do you set it up on every server? I don’t have a team, either.

Nothing particularly special, but it’s quite extended, monitoring the internals of the web server, php, caching, database ( as well as the usual subjects ) to hopefully get a jump on problems… be proactive.

I have a script that configures this, including adding the above extras, providing access to my central server, etc.

[quote=“GreenGecko, post:15, topic:10376”]
Nothing particularly special, but it’s quite extended, monitoring the internals of the web server, php, caching, database ( as well as the usual subjects ) to hopefully get a jump on problems… be proactive. I have a script that configures this, including adding the above extras, providing access to my central server, etc.[/quote]

Thanks for the details. Server monitoring tools are a dime a dozen (munin, nagios, icinga, zabbix, etc.), so it’s good to know munin does the trick. I am only managing about 20 servers now, but am already feeling stretched.

And, sorry for taking the thread off-topic.