Increased Fully Loaded Times in IE-10/11?

We’ve been running daily tests all week to capture pre/post timings for some changes we implemented on the evening of 2/26. We moved some calls from client-side to server-side and eliminated 9 HTTP requests.

24-pre: http://www.webpagetest.org/result/140224_70_KRW/4/details/
25-pre: http://www.webpagetest.org/result/140225_KD_RCV/5/details/
26-pre: http://www.webpagetest.org/result/140226_V9_GJF/3/details/
27-post: http://www.webpagetest.org/result/140227_33_H09/9/details/
28-post: http://www.webpagetest.org/result/140228_2Z_GDE/5/details/

The Fully Loaded Time improvements were smaller than anticipated, and now the ends of the waterfalls (from 2/27 forward) don’t align with the Fully Loaded Times. The links above are all from IE-10 tests, but the behavior is the same for IE-11 and Chrome. Some of these timing differences approach 2 seconds.

Is there something that’s now processing after the last request completes in the waterfall and it’s just not being shown? If so, how can we shed some light on it to improve?

Thanks,
Jeff

I think the difference you’re running into is the difference between Document Complete (which aligns with the waterfall) and the Fully Loaded time.

This is some copy/pasting from what I’ve found on it, with a bit of my own interpretation:

The Document Complete time is measured up to the point when the browser fires the onLoad event.

There could be an element that needs more time, for instance a Flash object that does things in the background which can’t be seen by WebPagetest.

Or JavaScript may do things in the background, for example waiting a set number of seconds before firing a request.
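To make the distinction concrete, here’s a minimal sketch (hypothetical function and field names, not anything from WPT’s code) of how the two metrics can diverge when a request finishes after onLoad:

```javascript
// Hypothetical sketch: Document Complete stops at the onLoad event,
// while Fully Loaded keeps extending as long as network requests are
// still finishing, e.g. a delayed analytics beacon.
function pageTimings(onLoadMs, requestEndTimesMs) {
  // Requests that complete after onLoad push Fully Loaded out past
  // Document Complete; otherwise the two numbers line up.
  const lastRequestEnd = Math.max(onLoadMs, ...requestEndTimesMs);
  return { documentComplete: onLoadMs, fullyLoaded: lastRequestEnd };
}

// A beacon that ends at 6500 ms makes Fully Loaded 3.5 s later than
// Document Complete, even though onLoad fired at 3000 ms.
const t = pageTimings(3000, [1500, 2800, 6500]);
```

In this sketch the gap between the two numbers (here 3.5 seconds) is exactly the kind of difference described above.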

I see a call to 178.85.179.232:47024/NonExistentImage16506.gif that loads several seconds after the page load. It’s called from the AA.js script, but at this moment that call appears on the live website, not in the WebPagetest result.

That’s interesting, because the fp_AA.js script has been making a call to the NonExistentImage gif for a couple of months now; that script just used to be called further up the waterfall prior to this week’s changes. I presume this week’s changes pushed the fp_AA.js call toward the end of the waterfall, and now our Fully Loaded Times are being impacted by that background call.

Could this particular background processing negatively impact our Visually Complete timings as well? I noticed they’re up over the last couple of days too (see the pre/post URLs above).

I’ll take a look; that looks like a bug. The Fully Loaded time should match the end of the waterfall. It looks like some activity is being tracked that isn’t included in the final results, and it shouldn’t be.

For IE or other browsers, will background tasks show up in the waterfalls?

The other thing is that the Chrome timings have much more variability and are quite a bit higher than IE or FF. Any thoughts?

26-pre: http://www.webpagetest.org/result/140226_03_GJS/
27-post: http://www.webpagetest.org/result/140227_55_MTV/
05-post: http://www.webpagetest.org/result/140305_WD_D9J/

There shouldn’t be any background tasks - we run Chrome with flags to disable all of its background activity (but no, they wouldn’t show up on an SSL site unless you did a tcpdump).

Some additional information and patterns regarding the increased FLTs for several browsers.

For IE-8 and IE-9, two calls to the NonExistentImage gif are executed and shown in the waterfall, yet the Fully Loaded Times still don’t align with the waterfall.

IE-8: http://www.webpagetest.org/result/140306_9Y_MS5/3/details/
IE-9: http://www.webpagetest.org/result/140306_NY_MS2/8/details/

For IE-10, IE-11 and Chrome, these same calls are made but not shown in the waterfall, yet the Fully Loaded Times appear to include them (the FLT doesn’t align with the waterfall), even though the calls are not customer-impacting. Some of the Fully Loaded Time delays are rather large.

IE-10: http://www.webpagetest.org/result/140306_G7_MSZ/1/details/
IE-11: http://www.webpagetest.org/result/140306_X4_MSS/9/details/
Chrome: http://www.webpagetest.org/result/140306_SX_MSA/4/details/
Firefox: http://www.webpagetest.org/result/140306_G1_P3J/6/details/

Any idea why these 2 non-customer-impacting gif requests appear to be treated differently between browsers by WPT (e.g. shown vs. not shown in the waterfalls), and why the Fully Loaded and Visually Complete times vary from the waterfalls by so much? We do benchmarks using FLTs, and there’s a lot of variability between the waterfall and FLT measures across the various browsers in WPT.

We’re still seeing Fully Loaded Times that don’t align anywhere close to the ends of our waterfalls. See the links below and the gaps between the FLTs and the ends of the waterfalls.

IE-8: http://www.webpagetest.org/result/140311_YC_ET5/9/details/ 5.7s (+1.0s)
IE-9: http://www.webpagetest.org/result/140311_4G_ETC/9/details/ 5.1s (+1.0s)
IE-10: http://www.webpagetest.org/result/140311_K3_ETF/6/details/ 5.8s (+1.0s)
IE-11: http://www.webpagetest.org/result/140311_DT_ETJ/3/details/ 6.2s (+1.8s)
Chrome: http://www.webpagetest.org/result/140311_X0_ETM/7/details/ 5.6s (+0.9s)
Firefox: http://www.webpagetest.org/result/140311_G6_ETN/4/details/ 7.6s (+3.5s)
Safari: http://www.webpagetest.org/result/140311_JZ_ETV/8/details/ 5.3s (+0.7s)

These inexplicably high FLTs are occurring even after implementing a CDN last evening; the site is definitely much snappier with the CDN in place. So if we can’t determine why this is happening and begin to reconcile the FLTs with the waterfalls, I think we’ll be forced to abandon the WPT metrics and find another tool that provides more accurate load timings.

Comparable measures with other dev tools:

  • With the YSlow plug-in on FF, we’re seeing load times in the 4.2s range
  • With Dev Tools in Chrome, we’re seeing load times in the 4.4s range

Any assistance in trying to figure this out is appreciated.

IE 10, IE 11, Chrome, Firefox and Safari are all fixed now. Working on the other test agent code that is used for IE < 10 right now.

OK, the IE < 10 agents should also be fixed now: http://www.webpagetest.org/result/140311_YD_RBK/

The end times right now look like they extend beyond the waterfall, but that’s because of the 2 failed requests for NonExistentImagexxx.gif at the end of the page (one of them referencing localhost).

Yep, we’re aware of those requests. They’re man-in-the-middle related, and our security developers assure us they are not customer-impacting from a page-load standpoint. What are your thoughts on them? They seem to appear in the waterfalls of only a few of the browsers above, so I’m not sure whether they’re called all the time.

Also, what was the reason for (and resolution of) the gap between what’s shown in the waterfalls and the FLTs? I’d like to understand the gap so I can share that information with my colleagues.

The FLT used to base its end point on the last activity the agent saw, which could include events coming in from the browser after it finished processing requests (the timing of which I don’t control). I changed it to base the reported time on the activity it actually reports rather than on everything it sees while testing.
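A minimal sketch of the before/after end-point calculation described here (hypothetical function and field names; WPT’s actual agent code differs):

```javascript
// Hypothetical sketch of the fix described above: derive Fully Loaded
// from the requests actually reported in the waterfall instead of the
// last raw activity timestamp the agent observed, which could include
// late browser events that never show up as waterfall requests.
function fullyLoadedMs(reportedRequests, lastObservedActivityMs) {
  // Old behavior (approximately): trust the last activity seen, e.g.
  //   return lastObservedActivityMs;
  // New behavior: end time of the last request shown in the waterfall.
  return Math.max(...reportedRequests.map((r) => r.endMs));
}
```

Under this sketch, a waterfall whose last reported request ends at 5.1 s now reports 5.1 s as Fully Loaded, even if stray activity was observed at 7.6 s.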

As for the MITM checks, your security team is correct that they aren’t necessarily end-user visible so if it’s providing value to them then great. It’s just annoying to look at when you look at waterfalls and see errors :wink:

Tell me about it! Thanks for your help Patrick.