Hi,
I need help. My website has a Time To First Byte of 10 seconds. It's on Joomla 3.
https://www.webpagetest.org/result/170320_RW_13MA/
Not sure what is causing this… bad PHP, bad SQL, the server?
Any lead will be highly appreciated.
Have a look at the things you did not score an A on here:
https://gtmetrix.com/reports/topcafirms.in/KsqN0HOE
Hi there,
I can see a couple of reasons for it.
If you look at the full waterfall chart from WebPageTest, you will see there are many large images on the front page. That increases the DOM size of the page; you need to reduce that.
Another point: you are using a third-party cache plugin. Try disabling it and re-testing.
Try a full page cache (FPC); it will surely decrease the page load time.
BUT the most important part is to debug and profile the code. Joomla already ships with a profiler, or you can use a standard profiling tool like Blackfire or Xdebug.
In our experience, most of the time the issue happens due to bad code.
You can also use a CDN for static content: http://cloudkul.com/blog/role-cdn-decrease-ttfb/
I hope this helps. If you still have any query, please comment. Thanks
TTFB relates primarily to LAMP tuning + network.
Here's what I look for when debugging this type of problem.
Network: Ensure packet loss is near 0%.
Network: Ensure the connection isn't saturated. I use 10Gig connections for sites I host, so this is never a problem. It takes a lot of sustained traffic to approach 10Gig connection saturation.
Filesystem: Do i/o testing to ensure raw disks are running well. Put the following in your shell startup file (usually .bashrc or .zshrc) + run ddspeed.
alias ddspeed='cd ~ && dd if=/dev/zero bs=1024k of=tstfile count=4096 && rm -f tstfile'
Just ran this on a box I'm still tuning + got 1.6 GB/s throughput, which is sufficient. After tuning, this will likely triple or quadruple.
I use - ab -l -k -t 30 -n 1000000 -c 5 $url - for these tests, which runs up to 1,000,000 requests at a concurrency of 5 simultaneous connections + stops after 30 seconds.
What you're looking for is 0 failures + several thousand requests/second of throughput.
So for https://foo.com/machine.txt (a small static file) I expect 25K-75K+ reqs/sec, as this only tests the Apache config.
And for https://foo.com/ (the full page) I target 3,000+ reqs/sec for WordPress with a well-crafted theme, like TwentySeventeen, or 1,000 reqs/sec for poorly coded themes. Some themes are so poorly coded that no amount of tuning will help + the site must be retooled around a performance-targeted theme.
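When skimming ab output, only two lines matter for the targets above: Failed requests and Requests per second. A minimal sketch that pulls both out of an ab report (the sample output embedded below is illustrative, not from a real run against this site):

```shell
#!/bin/sh
# Sample ab report — made-up numbers for illustration only.
ab_output='Concurrency Level:      5
Time taken for tests:   30.002 seconds
Complete requests:      94213
Failed requests:        0
Requests per second:    3140.23 [#/sec] (mean)'

# Pull out the failure count and the throughput figure.
failed=$(printf '%s\n' "$ab_output" | awk '/^Failed requests:/ {print $3}')
rps=$(printf '%s\n' "$ab_output" | awk '/^Requests per second:/ {print $4}')

echo "failed=$failed rps=$rps"
```

In a real run you'd pipe `ab ... 2>&1` straight into the awk filters instead of using a canned string.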
Apache: For high traffic sites, say 250K uniques/hour, likely you'll have to disable logging of successfully served image + CSS + JS + font files. This unloads the disk, as logging i/o becomes a factor around this level of traffic.
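One common way to do this in Apache is a SetEnvIf/CustomLog pair; the extension list and log path below are assumptions to adapt to your vhost:

```apache
# Sketch: tag requests for static assets, then exclude them from the access log.
SetEnvIf Request_URI "\.(gif|jpe?g|png|css|js|woff2?|ttf)$" static_asset
CustomLog /var/log/apache2/access.log combined env=!static_asset
```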
Database: Replace MySQL with MariaDB for a substantial performance boost.
Database: Convert any MyISAM tables to InnoDB, even FTS (Full Text Search) tables, as InnoDB in all recent MariaDB versions supports FTS.
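Finding the remaining MyISAM tables and converting them can be sketched like this (the schema and table names here are hypothetical, not taken from the actual site):

```sql
-- List any tables still on MyISAM ('joomla_db' is an assumed schema name)
SELECT table_name
  FROM information_schema.tables
 WHERE table_schema = 'joomla_db'
   AND engine = 'MyISAM';

-- Convert each one in place (example table name)
ALTER TABLE jos_content ENGINE = InnoDB;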
PHP + Database: Move /tmp off disk into memory using /etc/fstab + tmpfs, which moves all database temp table creation + PHP session file creation off disk into memory.
High-traffic sites with logins, like membership sites, run like greased lightning after this change, which takes about 30 seconds to make.
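The change can be sketched as one tmpfs line in /etc/fstab (the 2g size cap is an assumption; size it to your RAM):

```
# /etc/fstab — tmpfs-backed /tmp; size=2g is an assumed cap, adjust to taste
tmpfs  /tmp  tmpfs  defaults,noatime,size=2g  0  0
```

Then `mount -a` (or a reboot) activates it.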
SSL: Test your site with SSL Labs, e.g. https://www.ssllabs.com/ssltest/analyze.html?d=davidfavor.com
You're looking for an A+ score + 100/95/100/100. I allow 95 for Protocol Support (allow TLSv1.0) as this provides Android-4.X OS support, which covers roughly 35%+ of all mobile devices (depending on what stats you use).
Currently all these links return 200 + content.
http://TopCAfirms.in
http://www.TopCAfirms.in
https://TopCAfirms.in
https://www.TopCAfirms.in
Google used to allow this. No longer.
This means your site will be hit with a duplicate content penalty for every page, which will put you over the threshold where individual page penalties are promoted to an entire-site penalty.
If you're running 100% paid traffic, no problem.
If you're looking for SEO juice (high SERPs), then best fix this.
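A standard fix is a 301 redirect to one canonical scheme + host in the vhost or .htaccess; https://www.topcafirms.in is assumed here as the canonical choice, pick whichever you prefer:

```apache
# Sketch: send every non-canonical scheme/host combination to one URL.
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.topcafirms.in/$1 [R=301,L]
```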
Apache: Enable HTTP/2 by adding `Protocols h2 h2c http/1.1` to your config.
This is essential if you'd like to qualify for Google's new push toward real-time content indexing.
If you refer to https://www.webpagetest.org/result/170320_RW_13MA, the above steps 1-9 will fix waterfall items #1-#3 + #18 + #24 + #25, which will…
Drop your TTFB to near one second.
Implementing steps #10-#13 will likely take you to subsecond speed, as HTTP/2 pipelines requests + usually gives a substantial speed increase. HTTP/1.1 + SSL slows sites down; HTTP/2 speeds them up.