Possible changes to grading

[Edit] The changes discussed below are now live

FYI, I’m contemplating some changes to the grades from WebPagetest and wanted to see how people felt about them:

CDN - Change this from a grade to a yes/no (and probably green/yellow) since CDNs don’t make sense for all sites. The big red F has been hard for people to ignore, even for sites that serve a small market where all users are geo-located close to their servers.

JS/CSS Combine - Change the grading for this to account for browsers not blocking on JS download and (usually) using 6 concurrent connections. I’m considering changing it to only start deducting points after 5 files loaded before start render if they are on the same domain as the base page (to allow the base page to still be downloading), or 6 on a separate domain, and then deducting a grade for every 6 files after that (any mix of JS and CSS). IE 6/7 will still suck pretty badly, but it’s not nearly as much of a problem for the more modern browsers.
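Roughly, the deduction might look like this (just a sketch: the 5/6-file allowances and the grade-per-6-files step are the ones proposed above; the function name and the exact counting are illustrative, not the actual implementation):

```python
def combine_grade(pre_render_js_css_domains, base_domain):
    """Grade JS/CSS combining from the domains of JS/CSS files loaded before start render."""
    same = sum(1 for d in pre_render_js_css_domains if d == base_domain)
    other = len(pre_render_js_css_domains) - same

    # Allow 5 files on the base page's own domain (so the base page can still be
    # downloading) and 6 on a separate domain before deducting anything.
    over = max(0, same - 5) + max(0, other - 6)

    # Then lose one letter grade for every (started) group of 6 extra files.
    grades = ["A", "B", "C", "D", "F"]
    return grades[min(-(-over // 6), len(grades) - 1)]
```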

First Byte Time - I’ve seen WAY more back-end problems than I would like (usually CMS systems like WordPress without caching implemented) and those are getting a complete pass currently. I’m considering adding a new grade for the first byte time, using the socket connect time as a baseline (since that is the native RTT). I’m thinking RTT + 100ms = A and then deducting a letter grade for every 100ms after that. Any redirects would pretty much trip this automatically.

Thoughts?

-Pat

I’m with you on CDN and First Byte Time. Can you describe more of what the grading algorithm would look like for JS/CSS combine? Even with browsers that don’t block and support concurrent connections, combining files makes sense, IMO.

+1 for the CDN

JS/CSS Combine. This is not that simple.
a) The current check is for JS/CSS files in the HEAD only, so it fails on those ASP.NET pages with lots of JS files at the top of the BODY, which hurts rendering just as much.
b) Modern browsers do a lookahead when bumping into blocking JS files loaded with script src=file.js. This means the browser will look further into the HTML and start downloading resources if a connection is available. But the browser will not continue rendering! So yes, in IE6/7 it’s worse because these don’t do the lookahead and scripts block downloading, but even in modern browsers the rendering is blocked until that JS file has been loaded, parsed and executed.

+1 for First Byte Time.

Actually, the current JS/CSS check is for anything that loads pre-render, so even files at the top of the body should be detected.

Optimally it would be zero JS before render, but it may be a bit too soon to be pushing that hard. With the concurrent loading behavior, there should be no difference between 1 and 6, right? That’s basically the change I’m suggesting, but instead of treating CSS and JS as different pools of allowed requests, lump them all together and put a limit on how many you can load pre-start render.

Another option would be to just detect the blocking activity explicitly and flag that but the logic around that might be more difficult. That way conditional styles or inline script would also be caught.

@obiwankimberly

Combining files may make sense (though Bryan had a good example of where separate files would actually be faster because of slow start) but collapsing down to 1 is not as critical as it used to be. Additionally, if you can get better re-use across your site by having 2 or 3 of each then you might actually be able to deliver smaller files and do incremental updates (a site-wide file, a template-specific one and a page-specific one).

I don’t want to just make changes for the sake of changing things though (particularly to the main grades). The grades have always been the “these ALWAYS apply and fix these before you even consider looking at anything else” issues and I think the JS/CSS combining is losing some of that certainty.

Another thing Pat re: the JS/CSS combine.
If the browser is using the 6 connections to fetch CSS and JS files, it cannot fetch other assets. That is another reason to do combining.

  • 1x JS
  • 1x CSS

Those start loading while the HTML is still coming in, so the browser has 3 connections left for fetching other assets, probably images. Once the HTML is in - and the JS and CSS are still loading - another connection becomes available for fetching a resource.

But … you may not always want to combine all CSS and all JS into one file, especially if you are putting that big JS file in the HEAD.

Re: the JS files at top of BODY … I’ve often seen the grade not being an F, so probably those JS files were high up in the BODY but not blocking rendering entirely. From what I remember, nothing really was rendered, so the UX was bad, but yeah … WPT can’t know that.

Now how to create a good rule for this?

BTW, I totally agree on this:

“The grades have always been the “these ALWAYS apply and fix these before you even consider looking at anything else” issues and I think the JS/CSS combining is losing some of that certainty.”

Maybe it makes more sense to just demote the combine check to be part of the optimization details score card (with warning icons on the resources that should be looked at) and remove it from the main grades (putting the first byte check in instead)? Particularly given that it’s not all that clear-cut for all sites.

+1 for that, because it is indeed not clear-cut for all sites/pages.

But

-1 for that, because I think in many cases combining will help.
I often see that moving the JS out of the HEAD and/or loading it in a non-blocking way is hard for devs - too little knowledge & experience to figure out how to do that in a way that works well cross-browser, meaning it takes a lot of time … resulting in it not getting done. The only good thing left to do: combine.

Hmm … maybe you can indeed take it out of the main grades, put it in the Perf Optim list, and put a few pointers in there (try using a script loader like LABjs to load multiple files in a non-blocking way while preserving exec order and combining with inline scripts, etc.).

Two things I would like to add Re: JS/CSS:

First, combining these two, when they are up there in the rendering path, is a huge, and sometimes simple, gain. Our former site would probably serve very well as a “worst practice example” - the “CNN of JS/CSS concatenation”. When this portal moved under my responsibility, we had something like 30 CSS files and 15 JS files in the HEAD. And the impact IS huge. So I would still see it as a main grade.

Second, we see different behaviour on at least IE7 and IE8 regarding JS included via script tag in the body of the base page. I wrote something about my observations on my blog. The point is: I get different grades for this check if I test the page with IE7 than when testing with IE8, and that is somewhat confusing. (The IE8 engine seems to look ahead and pull the download of external JS scripts in the body forward, which results in a worse grade.)

I am hesitating a little bit to take it out of the main grades (as important as I still think it is), because a valid algorithm for this beast doesn’t seem too obvious off the top of our heads yet.

Kind regards,
Markus

P.S.: @Pat: Just watched your Lightning Demo video from Velocity on my laptop together with Volker Hochstein. Felt like home :slight_smile: And hats off for all the new stuff.

CDN: Much better idea. For non-US sites, CDNs are expensive and rare.

JS/CSS: Agree

First Byte: Agree

CDN :heart:

JS Combine :heart:

First Byte : On the fence.

Time to first byte is not always in the control of a webmaster (it is, but practically only for advanced users, not for your average WordPress blog owner). Giving a grade will cause people to switch hosts or become extreme in their changes, often needlessly.

There is an awesome WordPress plugin called ‘Debug Queries’ that tells you just how SQL-intensive your site is. Other plugins like W3 Total Cache can cache database calls, and ‘Debug Queries’ will show your pages going from 8 (up to 50 calls on some blogs) down to 2-3 database calls. Needless to say, at 2-3 calls the time to first byte improves and the % of page load time swings to the code side. I like to see sites spending 97% of their page load time on the code side and 3% or less on database calls.

Can webpagetest detect database optimization and incorporate it? If so I’d :heart:.

Given W3TC, Supercache and a lot of other options, your typical WordPress user has a lot of tools at their disposal - maybe with some questions, but even just getting the basics set up can be a huge win :slight_smile: Given that NOTHING can load until the first byte comes back, it’s absolutely critical - the key is figuring out where to set the thresholds while still being reasonable. At the end of the day, it doesn’t matter if it’s database or back-end API calls; if it’s slow, it’s slow.

Totally agree on first byte time - it’s so critical that if it’s not under your control you should go change that to make sure that it is :-). It would definitely be nice to have a visual indication if you are having TTFB issues.

The CDN change is now live and I’m working on the first byte logic (at which point the JS/CSS will be demoted). Right now I’m thinking that the grading for the first byte will look something like this:

Base time = 3 * RTT to server (allow for DNS + socket connect + HTTP request)
If TTFB <= base + 100ms then it gets an A
and then deduct a grade for every 100ms beyond that

SSL would also be accounted for, and the extra round trips would be factored in for an HTTPS request.

That way the connection latency will not be a factor and it will purely be a measurement of back-end processing time (<= 100ms = really good). It will also penalize for redirects.

The end result for the default DSL connection profile would be:

TTFB <= 250ms = A
250ms - 350ms = B
350ms - 450ms = C
450ms - 550ms = D
> 550ms = F

The actual thresholds will be slightly more generous because these times don’t include the additional latency to get to the actual server, which will be included in the grading (using the socket connect time).
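A minimal sketch of that calculation, assuming the RTT is taken from the socket connect time and ~50ms RTT for the default DSL profile (the helper name and exact grade boundaries are illustrative, not the actual implementation):

```python
import math

def first_byte_grade(ttfb_ms, rtt_ms):
    """rtt_ms is taken from the socket connect time (the native RTT)."""
    base = 3 * rtt_ms      # allow for DNS lookup + socket connect + HTTP request
    target = base + 100    # up to 100ms of back-end time still earns an A

    over = max(0, ttfb_ms - target)
    grades = ["A", "B", "C", "D", "F"]
    # Deduct one letter grade for every (started) 100ms beyond the target.
    return grades[min(math.ceil(over / 100), len(grades) - 1)]

# Default DSL profile (~50ms RTT): base = 150ms, so anything <= 250ms gets an A.
print(first_byte_grade(240, 50))  # A
print(first_byte_grade(400, 50))  # C
```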

OK, the changes are live (and retroactive to existing tests). Feedback appreciated.

Boo I have another B to fix now >.<

Which is annoying, as my CMS records the time it takes to generate a page, and that puts it at 70ms. Now I need to find out where the other 140ms are coming from.

At least there’s no more big red F staring me in the face!

@Kye, do you have a test result? I just want to make sure the math on my side is correct since it’s pretty new.

http://www.webpagetest.org/result/110628_5K_Y5JR/ -B

http://www.webpagetest.org/result/110628_JQ_Y6BH/ -D

http://www.webpagetest.org/result/110628_2D_Y6HZ/ -B

Trouble with FBT is it varies a lot with each and every request.

And one more thing to add:
See Slides 26-31 and 59 on this deck:
https://docs.google.com/present/view?id=dp9zbfp_52c9g976c4

So I still think a grade on concatenation of JS/CSS in the HEAD would be relevant. :slight_smile:

Can you make the doc public for reading (or share the key points on the slides)?

I’m not arguing that it’s not important, just that it’s not universally true that combining resources will be faster. At the end of the day I hope that people pay more attention to the actual waterfalls than the grades themselves.

Whoops … this link should work: http://bit.ly/hKXBKB
Got it from Souders’ Twitter stream.

Basically this guy has tested an artificial test page:
“A single 21 KB HTML document
26 image files totaling 1,789 KB
Five Javascript files totaling 554 KB
Eight style sheets totaling 43 KB
Total page weight of 2,407 KB
In other words, results not typical”
and then measured the impact of 14 of the most common best practices, each on its own. Each practice was then plotted on a coordinate system with engineering effort and performance gain as the axes, and concatenating JS and CSS pretty much got the biggest bang for the buck in this scenario. He didn’t write, though, what kind of network he tested on.

Best wishes,
Markus