First Byte Time “FBT” findings and research results

First and foremost, I hope this helps others; I find it very important to share knowledge that helps people improve their websites.

I have been scratching my head trying to figure out why I was receiving an “F” rating on www.webpagetest.org for First Byte Time (“FBT”). After some research and additional testing on www.shoeshow.com, I have come to the following conclusions (a rough way to time the stages yourself is sketched after the list below):

Reasons why First Byte Time takes so long

  1. The number of requests/responses needed to build the page
  2. How much dynamic content is on the page (i.e. how many database calls go back and forth for data)
  3. The overall size of the page
  4. The CPU and memory of the hosting server - the more powerful the server, the faster the processing
  5. Whether you are on a VM or a dedicated server
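
To see where that time actually goes before the first byte arrives, here is a rough sketch using only Python's standard library; the host name is just the example from this thread, and a proper tool like WPT will obviously give far more detail:

```python
import socket
import ssl
import time

# Rough split of where the time goes before the first byte arrives.
# The host is just the example discussed in this thread.
HOST = "www.shoeshow.com"

t0 = time.time()
ip = socket.getaddrinfo(HOST, 443)[0][4][0]              # DNS lookup
t_dns = time.time()

sock = socket.create_connection((ip, 443), timeout=10)   # TCP connect
t_connect = time.time()

tls = ssl.create_default_context().wrap_socket(sock, server_hostname=HOST)
t_tls = time.time()

request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
tls.sendall(request.encode())
tls.recv(1)                    # block until the very first response byte
t_first_byte = time.time()
tls.close()

print(f"DNS lookup : {(t_dns - t0) * 1000:7.1f} ms")
print(f"TCP connect: {(t_connect - t_dns) * 1000:7.1f} ms")
print(f"TLS setup  : {(t_tls - t_connect) * 1000:7.1f} ms")
print(f"Wait (TTFB): {(t_first_byte - t_tls) * 1000:7.1f} ms")
```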

My test sample was the home page of www.shoeshow.com. I got an “F” for FBT every time because this page has a lot of dynamic content, and unless I dissect and rebuild it, it will continue to receive an “F” even though the overall page loads in under 2 seconds. However, if I go to ShowStopper Rewards Program | SHOE SHOW MEGA, it gets a “B” because there is very little dynamic content or programming on that page and not many images. Finally, when I run https://www.shoeshow.com/404.aspx we get an “A” because there are no images and no back-end code.

My overall opinion is that FBT is important, but it can also be a false positive.

OK, you’ve identified the problems, but haven’t really addressed how to improve stuff…

There are two major avenues you need to investigate to improve the TTFB performance of your site:

a) Server configuration
b) Code quality

The basic mantra for tuning a server is to keep disk access to a minimum, as it’s really, really slow. This means using memory wherever possible: database caches work well (because the vast majority of your accesses are read-only), heavily used files can be placed on tmpfs partitions, and so on. Make sure your server-side processing is cached as much as possible - an opcode cache such as APC (for older versions of PHP) can make a big difference. If your server has plenty of spare memory, it will be used to buffer disk I/O, which will also help.
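
As a loose illustration of the “keep read-mostly data in memory” point (not anything specific to the site being discussed), here is a minimal read-through cache sketch in Python; `fetch_product_from_db` and the TTL value are hypothetical stand-ins:

```python
import time

# Minimal in-memory read-through cache sketch; fetch_product_from_db() is a
# hypothetical stand-in for a real (slow) database query.
_CACHE = {}          # product_id -> (expires_at, value)
_TTL_SECONDS = 300   # arbitrary example TTL

def fetch_product_from_db(product_id):
    time.sleep(0.05)                       # pretend this hits the database
    return {"id": product_id, "name": "example product"}

def get_product(product_id):
    """Serve from memory when possible; hit the database only on a miss."""
    now = time.time()
    hit = _CACHE.get(product_id)
    if hit and hit[0] > now:
        return hit[1]                      # cache hit: no disk/DB access
    value = fetch_product_from_db(product_id)
    _CACHE[product_id] = (now + _TTL_SECONDS, value)
    return value

if __name__ == "__main__":
    get_product(42)    # slow: goes to the "database"
    get_product(42)    # fast: served from memory
```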

The only real way to identify bottlenecks is to monitor the server in production use: find one, fix it, and there will (always) be another to identify. Eventually you’ll get to a point where only a hardware upgrade will help. That’s when you’ve won - for now. Don’t forget things will change as database tables fill, the site changes, and so on.

Code quality is something that is falling all the time, IMO. Not only is code often poorly written, leading to poor performance, but in this plugin age, one plugin from Joe Bloggs may cripple another written by Bill Smith, which is really annoying. The best plan is to use a test site, remove all the plugins, and profile performance as they are added back in. Who cares if it looks a mess while you’re doing it - nobody is looking!

You normally find that the core CMS is clean and fast - this is the stuff that’s written by people who fully understand the codebase. Plugins, on the other hand, have far less quality control!

I put forth much of this same information some time back and was lambasted, even though I was able to prove it.

I’ve also found, and it is especially true with your site, shoeshow dot com, that the TARGET first byte time is just ridiculous, and this skews the grade. The TARGET for that site is 87ms. That’s unattainable and nonsensical.

I find this really low TARGET FBT happens on every site that uses a CDN such as Cloudflare or Akamai, which I see the subject site does use. I believe it is something in the testing: if it detects a CDN, it assigns the low target because there is some assumption that the CDN improves performance, so it tries to grade you on a negative curve. The “false positive” you mention.

If you use a CDN you pretty much have to dismiss poor FBT grades because the TARGET FBT you get is just silly.

And yes, if you have a bloated page (such as this 2.6-megabyte one) you’re going to have a poor-performing page. It doesn’t load in 2 seconds - it might for you because of your caching - but in tests, no.

http://www.webpagetest.org/result/150725_PG_4FV/

You can shave 1,529.9 KB (that’s 1.5 megabytes) off this bloated page and greatly improve performance just by optimizing your JPEG images.

[quote]Compress Images: 32/100

2,255.4 KB total in images, target size = 725.4 KB - potential savings = 1,529.9 KB

FAILED - (547.5 KB, compressed = 117.5 KB - savings of 430.0 KB) - https://www.shoeshow.com/FS/COVER/1462/03_sd_skx_gorun_vortex_0720.jpg
FAILED - (441.5 KB, compressed = 85.3 KB - savings of 356.1 KB) - https://www.shoeshow.com/FS/COVER/1461/02_sd_converse_allstarstatic_0720.jpg
FAILED - (441.8 KB, compressed = 91.7 KB - savings of 350.1 KB) - https://www.shoeshow.com/FS/COVER/1460/01_ss_newbalance_wl574_0720.jpg
FAILED - (321.4 KB, compressed = 94.4 KB - savings of 227.0 KB) - https://www.shoeshow.com/FS/COVER/1467/ss_978x573_showyourstyle_0724.jpg
WARNING - (150.5 KB, compressed = 105.0 KB - savings of 45.5 KB) - https://www.shoeshow.com/FS/COVER/1453/03_lilybloom_handbags_0706.jpg
FAILED - (61.4 KB, compressed = 23.5 KB - savings of 37.9 KB) - https://www.shoeshow.com/framework/images/2015/splash/splash_0715_539x361.jpg
WARNING - (150.8 KB, compressed = 114.1 KB - savings of 36.8 KB) - https://www.shoeshow.com/FS/COVER/1459/62619_ShoeDept_Homepage_978x573.jpg
WARNING - (104.9 KB, compressed = 84.1 KB - savings of 20.7 KB) - https://www.shoeshow.com/FS/COVER/1457/04_frenchtoast_brand_0706.jpg
FAILED - (9.7 KB, compressed = 2.5 KB - savings of 7.2 KB) - https://www.shoeshow.com/FS/Products/133057/4/133057_images_01.jpg
FAILED - (8.3 KB, compressed = 2.2 KB - savings of 6.1 KB) - https://www.shoeshow.com/FS/Products/133026/4/133026_images_01.jpg
FAILED - (6.5 KB, compressed = 1.6 KB - savings of 4.8 KB) - https://www.shoeshow.com/FS/Products/133029/4/133029_images_01.jpg
FAILED - (5.8 KB, compressed = 1.8 KB - savings of 4.0 KB) - https://www.shoeshow.com/FS/Products/175336/4/175336_images_01.jpg
FAILED - (5.3 KB, compressed = 1.7 KB - savings of 3.7 KB) - https://www.shoeshow.com/FS/Products/778319/4/778319_images_01.jpg[/quote]And in this it is demonstrated that CDNs do not improve performance: they do not optimize your site, they merely deliver your un-optimized content, and all the bloat, from a location ostensibly nearer to the user than your host server is. It’s not a magic bullet for performance. Cutting the bloat is.

You should get off Akamai’s nameservers, then run a series of tests without it, and be shocked at the difference. Then optimize these images and lose the 1.5 megabytes of unnecessary bloat.
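
If someone did have access to the source images, a batch re-compress is only a few lines with Python’s Pillow library. This is just a sketch: the directory names are made up, and quality 85 is an assumption about roughly the level an image-compression check like WPT’s expects.

```python
from pathlib import Path
from PIL import Image   # pip install Pillow

SOURCE_DIR = Path("images/original")    # hypothetical folders
OUTPUT_DIR = Path("images/optimized")
QUALITY = 85                            # assumed "good enough" JPEG quality

OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

for src in SOURCE_DIR.glob("*.jpg"):
    dst = OUTPUT_DIR / src.name
    with Image.open(src) as im:
        # Re-encode at a sane quality; optimize/progressive usually shrink further.
        im.save(dst, "JPEG", quality=QUALITY, optimize=True, progressive=True)
    saved_kb = (src.stat().st_size - dst.stat().st_size) / 1024
    print(f"{src.name}: saved {saved_kb:.1f} KB")
```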

Anton, is there any compression software that you recommend? I am on a Windows OS. I have used Shrink-O-Matic. Please also understand that I have expressed this issue to our Visual Team, and we are in the process of getting them to start running all images through https://imageoptim.com/ (they are on Macs). Thanks for the feedback.

[quote=“moojjoo, post:4, topic:9482”]
Anton, is there any compression software that you recommend? I am on a Windows OS. I have used Shrink-O-Matic. Please also understand that I have expressed this issue to our Visual Team, and we are in the process of getting them to start running all images through https://imageoptim.com/ (they are on Macs). Thanks for the feedback.
[/quote]WPT actually does all the compression for us. Click on “View All Images” HERE - the link is directly under the bottom of the waterfall. From there you click on “Analyze JPEG” and it gives you three versions of the image.

Anton, you the MAN — I did not know that. And KNOWING IS HALF-THE-BATTLE — G.I. JOE… LOL.


Anton, why would I be getting 0 now? Is it because the images are cached?

[quote=“moojjoo, post:6, topic:9482”]
Anton, you the MAN — I did not know that. And KNOWING IS HALF-THE-BATTLE — G.I. JOE… LOL.
[/quote]

[quote=“moojjoo, post:7, topic:9482”]
Anton, why would I be getting 0 now? Is it because the images are cached?
[/quote]Not sure what you’re asking. I am still seeing the F grade for your images, and still seeing over 2.8 megabytes of pageload.

http://www.webpagetest.org/result/150728_SH_12G5/

I keep stressing that the size of the images makes absolutely no difference to the TTFB. They are just payload, whereas the TTFB is concerned with the delivery of the HTML framework that all these images are placed in.

In addition, CloudFlare IS NOT A CDN. It is a proxy server, and confusing the two will always lead to complications. I see that you’re not using it, BTW - just commenting for those who don’t see the distinction.

I would add a rule at the web-server level to shortcut the redirect from shoeshow.com to www.shoeshow.com; this is slow when done in application code.
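
A quick way to see what that apex redirect is actually costing is to time the first response from each host; a rough standard-library sketch (plain HTTP on port 80 is assumed here; an HTTPS entry point would use `HTTPSConnection` instead):

```python
import time
import http.client

def time_first_response(host, path="/"):
    """Time a single request and report any redirect it returns."""
    start = time.time()
    conn = http.client.HTTPConnection(host, 80, timeout=10)
    conn.request("GET", path)
    resp = conn.getresponse()
    elapsed_ms = (time.time() - start) * 1000
    location = resp.getheader("Location")   # set when the response is a redirect
    conn.close()
    return resp.status, location, elapsed_ms

for host in ("shoeshow.com", "www.shoeshow.com"):
    status, location, ms = time_first_response(host)
    print(f"{host}: {status} {location or ''} in {ms:.0f} ms")
```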

Apart from that I can’t comment much, as I know nothing of tuning websites on a Windows server, or what CMS the site is written in. For something like this, I’d use Magento CE…

[quote=“GreenGecko, post:9, topic:9482”]
I keep stressing that the size of the images makes absolutely no difference to the TTFB. They are just payload, whereas the TTFB is concerned with the delivery of the HTML framework that all these images are placed in.
[/quote]So then, with everything else the same on a site that has poor FBT, when we test a page with no images and not a lot of “payload” (maybe 200 KB or so), we magically get a greatly improved FBT. Every time.

No one is saying the size of images is related to FBT. Bloat is a separate consideration, but it is a much more important one than FBT.[quote]In addition, CloudFlare IS NOT A CDN. [/quote]Tell Patrick that. WPT scores CF as a CDN, and it is considered a CDN in the context of the testing. For sites using Cloudflare, when you look in the test results to see what CDNs are in use, you see “CDNs used: Cloudflare.” Additionally, when you look up CDN most anywhere, CF is listed.

I think it is okay to call it a CDN if Patrick and the rest of the world do.

Not sure how many times I need to demonstrate that getting rid of excess bloat helps the FBT as well as the rest of the performance measurements. I’ve only done it for dozens of sites.

Psst. I do not have permission to adjust the images, can only stress the need to reduce them :-/


[hr]
Getting strange results — webpagetest.org is reporting https://www.shoeshow.com/FS/COVER/1469/62619_ShoeDept_Homepage_978x573.jpg

FAILED - (260.4 KB, compressed = 114.1 KB - savings of 146.3 KB) - https://www.shoeshow.com/FS/COVER/1469/62619_ShoeDept_Homepage_978x573.jpg

However, when I hit the page in Chrome Developer Tools - https://www.shoeshow.com/FS/COVER/1469/62619_ShoeDept_Homepage_978x573.jpg - and save it locally, I am getting 129 KB vs. the reported 260.4 KB.

What is up with webpagetest.org?

[quote=“moojjoo, post:11, topic:9482”]
Psst. I do not have permission to adjust the images, can only stress the need to reduce them :-/
[/quote]Makes it really difficult to do any meaningful or efficacious site optimization.

[quote]However, when I hit the page in Chrome Developer Tools[/quote]Applying the scientific method means we use only one set of test tools and parameters as a control. I don’t use any testing tool at all other than WebPageTest, because it uses real-world connections and real-world browsers, and those are what I am interested in satisfying.

Cloudflare is a CDN by any definition. They also do other things but they operate in a pull mode, cache static resources and serve them directly from globally distributed edge nodes.

As far as the target TTFB being unrealistic for CDNs - yes, it’s unfortunate, and I am considering a few options to make it better, but when the base page is served through a CDN it throws off a bunch of the calculations. From the client side, in order to estimate the server processing time you need to remove the network round-trip time (usually a function of the distance to the server). Unfortunately, with a CDN the round-trip time is to the CDN edge and not the origin server (additionally, if the CDN does not maintain a persistent connection to the origin it may be hiding a DNS lookup and socket connect in the “server time” as well). WPT targets 100ms of server processing time when setting the target (which is totally achievable). In the case of connecting to a CDN I may just put in a fixed 250ms estimated server RTT, which is enough to allow the origin to be on the other side of the world and still provide a reasonable target.
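
To make that arithmetic concrete, here is a rough sketch of the reasoning described above; the constants mirror the numbers in the post, but the functions are illustrative and not WPT’s actual implementation:

```python
# Rough sketch of the reasoning above - not WPT's actual code.
PROCESSING_BUDGET_MS = 100   # the post's "totally achievable" server budget
CDN_RTT_FALLBACK_MS = 250    # proposed fixed origin-RTT estimate behind a CDN

def estimated_server_time_ms(measured_ttfb_ms, measured_rtt_ms):
    """Client-side estimate of how long the server spent building the page."""
    return max(0, measured_ttfb_ms - measured_rtt_ms)

def target_ttfb_ms(measured_rtt_ms, served_via_cdn):
    """A 'reasonable' first-byte target: allowed round trip plus a processing budget."""
    rtt = CDN_RTT_FALLBACK_MS if served_via_cdn else measured_rtt_ms
    return rtt + PROCESSING_BUDGET_MS

# Behind a CDN the measured RTT is only to the edge, so the naive subtraction
# blames the origin for the whole edge-to-origin trip:
print(estimated_server_time_ms(measured_ttfb_ms=600, measured_rtt_ms=20))  # 580
print(target_ttfb_ms(measured_rtt_ms=20, served_via_cdn=True))             # 350
print(target_ttfb_ms(measured_rtt_ms=80, served_via_cdn=False))            # 180
```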

As to images impacting TTFB, in theory and on a well configured server it should have zero impact since the back-end doesn’t know or care about the images when serving the html (in most cases). If the server is bandwidth constrained, out of clients to handle responses or otherwise not configured well then it’s possible that the image requests for other users is using server resources and slowing down the tested TTFB. If the application logic for the base page actually opens and parses the images before serving the html then that could also cause an impact. In either case the root cause should be addressed rather than optimizing the images to improve TTFB. The images should absolutely still be fixed because that will impact the user experience, your bandwidth, etc - it just shouldn’t be targeted for TTFB optimizations.

[quote=“pmeenan, post:13, topic:9482”]
Cloudflare is a CDN by any definition. They also do other things but they operate in a pull mode, cache static resources and serve them directly from globally distributed edge nodes.

As far as the target TTFB being unrealistic for CDNs - yes, it’s unfortunate, and I am considering a few options to make it better, but when the base page is served through a CDN it throws off a bunch of the calculations. From the client side, in order to estimate the server processing time you need to remove the network round-trip time (usually a function of the distance to the server). Unfortunately, with a CDN the round-trip time is to the CDN edge and not the origin server (additionally, if the CDN does not maintain a persistent connection to the origin it may be hiding a DNS lookup and socket connect in the “server time” as well). WPT targets 100ms of server processing time when setting the target (which is totally achievable). In the case of connecting to a CDN I may just put in a fixed 250ms estimated server RTT, which is enough to allow the origin to be on the other side of the world and still provide a reasonable target.[/quote]Thanks for taking the time to consider this; it sounds like a reasonable fix. “Achievable”, though, as regards 100ms - almost anything is “achievable” depending on what cost and resources you throw at it. IMO, 100ms isn’t reasonably achievable for the average, lower-skilled site owner.

[quote]As to images impacting TTFB, in theory and on a well configured server it should have zero impact since the back-end doesn’t know or care about the images when serving the html (in most cases). If the server is bandwidth constrained, out of clients to handle responses or otherwise not configured well then it’s possible that the image requests for other users is using server resources and slowing down the tested TTFB. If the application logic for the base page actually opens and parses the images before serving the html then that could also cause an impact. In either case the root cause should be addressed rather than optimizing the images to improve TTFB. The images should absolutely still be fixed because that will impact the user experience, your bandwidth, etc - it just shouldn’t be targeted for TTFB optimizations.
[/quote]I only know what I’ve seen over the last few years of fixing performance issues for many people, which is: with no changes other than eliminating bloat - which is almost always mainly in the images - TTFB improves. I can only speculate and theorize that at some point during the handshake the server “tells” the browser how many KB are coming, and for whatever reason, if it is a big number, this delays the negotiation. Sounds dumb, I admit, but as I said, I am at a loss to explain the results and observations otherwise. Just like I was only speculating about CDNs causing a really low target FBT time - which, as it turns out, you have verified is true.[quote]it just shouldn’t be targeted for TTFB optimizations.[/quote]And I don’t target it for that reason; I have only noticed that getting rid of the bloat has the happy side effect of helping TTFB. I go after the bloat first because it is among the easiest fixes and best performance-boosting things a site owner can do, not to target TTFB improvement.

All I do for people is try to help them based on what has worked every time it is tried and I try to keep it limited to things I know the average site owner can do. Honestly, I don’t go much beyond that because I know I am not qualified to do so. I can give people the basic stuff, the common sense stuff, stuff that always at least helps - but I pretty much stop there.

OK, I look at a waterfall of a website, and the x-axis is time. Are we all agreed on that???

So how does the delivery of a static resource, which happens after the HTML framework has already been delivered, directly affect that delivery without the use of a Tardis?

As Patrick says, it is possible to affect it indirectly, either by weird processing or by bandwidth constraints (possibly?), but these are predominantly infrastructure resource/configuration/design problems and need to be addressed as such. Banging on blindly about content won’t make them go away; you need to monitor, analyse and address them.

Whilst it is possible that adaptive network traffic techniques may throttle a large page, the HTTP header only advertises the size of the HTML page itself, which merely contains pointers to the other content. If you’re commenting on these things, Anton, you really should have a grasp of the basics…

I still contend that CloudFlare is primarily a proxy server. It takes over your DNS and redirects ALL traffic via its network (which, as I’ve also said before, still runs on the same old internet as the rest of us at the moment!), not just static content. As such, any request for dynamic content will need to be proxied back to the original server before delivery, or served out of date (which of course can’t happen, as Bill Smith would get Fred Blogg’s content </massive oversimplification>).

Maybe I’m a traditionalist, as I see no point in using CDNs for anything other than static content, to relieve the bandwidth resources on the primary server, leaving it more able to deliver the page HTML directly to the browser. CF is a simple implementation for non-technical users which does far more than just that.

Personally, because I work primarily with eCommerce solutions, I have to work on TTFB optimisations, as every tenth of a second counts. Because of this, I shudder at the use of proxy-server solutions, but make heavy use of CDNs. I also take great care in the placement of servers close to the target audience.

( Let’s see you get close to http://www.webpagetest.org/result/150804_RV_1D06/ with CloudFlare - the catalog display page is traditionally the slowest page on a Magento site - this is the standard sample data provided by Magento, so don’t comment on the image size please! It’s running on a cheap blade server in Sydney (load average: 1.84, 1.41, 1.31), as is the ‘CDN’ - which is just cookie-free access to the same server. Alternatively, my homepage - Drupal on the same server - http://www.webpagetest.org/result/150804_JA_1D6K/ ).

[quote=“GreenGecko, post:15, topic:9482”]
( Let’s see you get close to http://www.webpagetest.org/result/150804_RV_1D06/ with CloudFlare
[/quote]Not sure what you’re asking here. None of my own sites, nor any of the dozens of sites I have optimized for people, still uses CF or any other “CDN”, if they ever did. And they all get straight-A grades.

As Patrick confirmed, anyone using a CDN likely won’t be getting a B for FBT, due to WPT “grading on a curve” for the target FBT, making every site that is on a CDN fail this test. He said he would be fixing that, and when he does, I can see B grades and maybe even A grades being possible for “CDN” users.

Well, this is a Magento demo site. What CMSes have you optimised, and where are the examples? You’re coming over as talking a good fight at the moment.

FWIW, routing dynamic traffic through a CDN can also make sense as long as the CDN is good at it. Akamai calls it DSA, but most have offerings in that space.

It can reduce the connection set-up time between the users and the edge but that only works well if they maintain long-lived connections back to the origin or otherwise route the requests through their network and egress close to the origin. It can also help with slow start in that configuration.

Some of the CDNs will do more dynamic edge-serving with something like Edge-Side Includes (ESI), or flush a static initial part of the page to get the browser started while the origin does the heavy work of generating the full page.
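
As a loose illustration of the “flush a static initial part of the page” idea (not tied to any particular CDN or CMS), here is a minimal WSGI sketch in Python; the page content and the slow render step are made up:

```python
import time
from wsgiref.simple_server import make_server

# The content and the slow render step are made up for illustration.
STATIC_HEAD = b"<!doctype html><html><head><title>Store</title></head><body>"

def slow_render_body():
    time.sleep(0.5)            # stand-in for heavy page generation at the origin
    return b"<h1>Catalog</h1></body></html>"

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    yield STATIC_HEAD          # the browser can start parsing this right away
    yield slow_render_body()   # the rest follows once the back end is done

if __name__ == "__main__":
    make_server("127.0.0.1", 8000, app).serve_forever()
```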

Where it REALLY pays off is in the move to HTTP/2. Serving the static assets over the already-established connection for the base page can be a huge win.

So a reduction in the client<>edge latency can outweigh the additional proxy overhead to the origin server? Maybe it’s because I’m used to being 200ms from most people that this seems unlikely to benefit my clients, but I’m far too old to believe I’m right!

What I have seen from my own plays with SPDY on nginx is that it does dramatically change the shape of the waterfall, for sure. There is still plenty of scope for abusing decent design fundamentals, like delivering far too many files, but maybe it’ll stop this nasty habit of delivering x from here, y from there, and so on… how are you supposed to guarantee performance like that?

Depends on what the proxy overhead to the origin is. If the edge nodes are essentially in the network path (not taking the traffic significantly out of the way) then at worst it is a no-op (if a new connection is established from the edge back to the origin).

Client <-> edge <-------> origin

It’s not uncommon for it to look more like:

Client <-> client-edge <-------> origin-edge <-> origin

Where the CDN routes the request most of the way through an already warm and established connection and it comes out close to the origin.

The reality is a lot more complicated but it also depends on how efficiently the CDN is operating.

As far as SPDY and nginx go, I don’t think nginx implemented priorities in its SPDY implementation, which was a performance killer. Instead of returning the most important resources first it would just shove them all down the pipe. Hopefully the HTTP/2 implementations behave better. I do know that the H2O proxy “does the right thing” with priorities.