Embedded Youtube vids - weird, large Content Download bars

hi Pat,

check out http://www.micazu.nl/vakantiehuizen/rosa-mare-bij-bodrum-6159/.
It’s a page on a travel site with info about a rental house.
Scroll down and you’ll see the two embedded Youtube vids. Nothing special; you see this all the time on websites.

I ran the page 3 times through WPT (IE7, from the Amsterdam location); results here: http://www.webpagetest.org/result/100816_fc79e105c45afb81a75bff5f0854e5e3/

See the huge blue bars in the First View waterfall charts? Content download time is high for both YT objects. The response header specifies the Content-Length and that’s about 1 KB. And, Bytes In (downloaded) is 1.5 KB.
Clearly - and this makes sense - this is just the video player, not the actual video. Further down in the waterfall there are two objects served from ytimg.com which are Youtube SWF files. These are ~140 KB and show normal Content Download times.

So, I don’t understand the long blue bars for those first youtube.com objects.
I did some investigation and tried to find out why this is.

  1. Is the Amsterdam location behaving weird?
    Answer: No.
    View Paris (IE7) location test results here: http://www.webpagetest.org/result/100816_301S/
    The Content Download times are very high here too.

  2. Is it an IE7 thing?
    Answer: No.
    View Dulles IE8 results here: http://www.webpagetest.org/result/100816_303J/
    Blue bars are not as long, but they are still longer than expected (maybe because Dulles is better served by the YT CDN than the poor little servers in Europe :wink: )

  3. Is it a Webpagetest.org thing (versus IE locally)?
    Answer: I think so.

I visited the same page on micazu.nl with my local IE7 on Vista and used Pagetest to see what’s happening.
The waterfall showed no weird long blue bars. They were all normal. The response headers were normal too and similar to those on the wpt.org tests.

Note: in the Amsterdam, Paris and Dulles test results you can see the high CPU utilization at the time of the ‘long blue bars’…

More info:

I actually did the tests from Amsterdam twice and only posted the link for the 2nd batch in the first post in this thread.
So here’s the link to the 1st batch: http://www.webpagetest.org/result/100816_84238805f0a6beeb9428a0887c63734b/

The first test in the first batch does not show the long blue bars!

If you go down and look at the CPU utilization, it is PEGGED. Since Pagetest measures things from the browser’s perspective, what is probably happening is that code is executing on the main UI thread (JavaScript or Flash) and just happens to be running at the same time that request is being made. The request probably gets served really quickly, but the browser can’t process the response until it is done with the code execution.

We see this fairly often when people attach a bunch of jQuery code to $(document).ready(), particularly code that beats the crap out of the browser with class-based selectors ( $(".myClass") ) which are horribly painful on IE7.

If you run the page in Dynatrace Ajax Edition you should be able to find the offending code pretty easily. 10:1 odds that if you block jquery.min.js from loading, you will also see the problem go away (of course the page will also be broken).


txs Pat,

I’ll run it through dynaTrace Ajax.
Will post my findings here.

Well, it seems this part of jquery (1.3.2) is causing the pain:


          setTimeout(arguments.callee, 0);

The code is for detecting DOMContentLoaded in IE browsers. The setTimeout is called many times, which I can understand, because the page is huge (200 KB, 2000+ DOM elements) and jQuery is loaded in the HEAD.
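For context, the loop behind that setTimeout line can be sketched without the browser bits. If I recall correctly, jQuery 1.3.2’s IE path retries `document.documentElement.doScroll()` inside `setTimeout(arguments.callee, 0)` until the document is ready. The plain-JS simulation below (recursion standing in for the timer, and a made-up tick count standing in for the DOM becoming ready) shows why the trace contains many calls that are each almost free:

```javascript
// Sketch of the DOMContentLoaded polling loop in jQuery 1.3.2 (IE path).
// The real code retries document.documentElement.doScroll("left") inside
// setTimeout(arguments.callee, 0); here plain recursion stands in for the
// timer and a countdown stands in for "the DOM is ready", so the shape of
// the loop (many very cheap iterations) is visible.
function pollUntilReady(isReady, onReady, maxTicks) {
  var ticks = 0;
  (function poll() {
    ticks++;
    if (isReady()) {
      onReady();   // the potentially expensive part: the ready handlers
    } else if (ticks < maxTicks) {
      poll();      // real code: setTimeout(arguments.callee, 0)
    }
  })();
  return ticks;
}

// Simulate a DOM that becomes "ready" after 50 polls (made-up number).
var readyCountdown = 50;
var fired = false;
var ticks = pollUntilReady(
  function () { return --readyCountdown < 0; },
  function () { fired = true; },
  1000
);
```

Each iteration does almost nothing; if the ready handlers are heavy, the cost shows up when `onReady` finally fires, not in the timeout chain itself.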

Actually, that is one problem I’ve had with Dynatrace. I’m betting the actual time is in executing the code that is attached to DOMContentLoaded, but finding that code is a pain because of the long chain of timeouts you have to follow. If you go to the code hotspots it should be easier to find (and the new 2.0 release made it easier, from what I understand). Make sure to also look at the JS time, which will be the actual execution time and not the wall time (which will get inflated by the setTimeout calls).

I had exactly this problem with a page 3-4 days ago and it took a little bit to track down the code because of the call chain.

One other way to verify this is to block the site code but still let jQuery load (which will still go through the settimeout loop).

With a really complex page the selectors become even more expensive, which I can all but guarantee is the problem here.

It is still pretty rough but pagetest has some code to detect inefficient jQuery selectors and should give you an idea on where to start: http://www.webpagetest.org/result/100816_fc79e105c45afb81a75bff5f0854e5e3/1/performance_optimization/#jquery_selectors

If any of those are inside of a loop then they are even worse.

OK Pat, help me some more finding the dubious code.

I opened the page in IE7 again (empty cache of course) and logged all in dynaTrace 1.6.

Again, the Timeline shows a long blue bar (~12 seconds) for JavaScript. On mouseover, a tooltip shows "readystatechange event on . Start: 11.20 s, Duration: 12520.59 ms".

Using PurePaths to dig into the JS, I see this:

  1. when jQuery itself is executing, the chain of setTimeouts in that function in jQuery is only 45 ms in JS exec time. So your guess is right: it’s not jQuery execution itself that’s the problem.

  2. I looked through all the JS code that is executed on the page, searching for $(document).ready(...) stuff. There is a lot of it!

I put all this code in a .js file which you can download here: http://www.aaronpeters.nl/sandbox/klanten/micazu/problematic-documentready-code.js

I see many class selectors :frowning:

My guess: lots of JS code executing at domready + that code has class selectors + very big DOM tree == slooooow rendering

Pat > if you please could quickly glance over my .js file and post here what strikes you, I’d appreciate it!!

lol, I was actually looking through it this morning (out at the beach with the family this week so I’m a little less responsive than normal - but have a lull in activity right now).

I tried the new Dynatrace beta and it doesn’t make finding the offending code any easier - I’ll see if I can reach out to them and give them this use case to see if they can take a look at it - it’s a common one I have to run and can be quite a pain to nail down.

Looking at the code, it looks like there are two main document.ready handlers that are probably to blame: the one in basicFunctions.js that attaches click handlers to all items with a ToggleHeaderMenu class, and the one in MapResultControlScripts.js that uses a fair number of class selectors.

The code in MapResultControlScripts.js would be trivial to make faster - even without having to restructure anything, just cache the result of $(".MapViewLegendaContainer") in a variable instead of re-running the query several times. Even better would be to change that to an ID instead of a class selector if there’s only one on the page.
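A minimal sketch of that caching fix, with a stub query function standing in for jQuery’s `$` (the class name is from the page; the walk counter is just instrumentation for the sketch, not jQuery API):

```javascript
// Stub: each call to $() simulates one full DOM walk, which is roughly
// what a class selector costs on IE7. Not real jQuery, just a counter.
var domWalks = 0;
function $(selector) {
  domWalks++; // a real class selector on IE7 walks the whole DOM here
  return {
    show: function () { return this; },
    css: function () { return this; }
  };
}

// Before: the same class selector is re-run, so the DOM is walked 3 times.
$(".MapViewLegendaContainer").show();
$(".MapViewLegendaContainer").css("border", "1px solid red");
$(".MapViewLegendaContainer").show();
var walksBefore = domWalks;

// After: query once, cache the wrapped set, reuse it. One walk total.
domWalks = 0;
var $legenda = $(".MapViewLegendaContainer");
$legenda.show();
$legenda.css("border", "1px solid red");
$legenda.show();
var walksAfter = domWalks;
```

Switching to an ID selector avoids the walk entirely, since `document.getElementById` is fast on every IE.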

That mostly just takes care of the $(document).ready() handler code though - it looks like their click event handlers and other code make liberal use of class selectors as well, so the app itself will feel slow even after it loads, until those get fixed.

And yes, your assumption is correct. IE (particularly 7) + class selectors + complex page = really slow performance. IE 7 doesn’t have a fast path for selecting elements by class, so it actually needs to walk the entire DOM (in javascript) and check every element to see if it has the class that is being searched for. A widget may perform fine on a simple test page and then performance goes into the toilet when you put it on a complex page. IE 8 is a bit faster, but that’s largely because of a faster JS engine; I don’t think IE gets a native class-based selector until IE 9 (though I could be wrong on that point, I just know for sure that IE 7 doesn’t).
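To make that concrete, here is a rough model of what a class selector has to do on IE7. Plain objects stand in for DOM nodes and the tree is made up, but the shape of the work is the point: every element is visited and its className tested, whether it matches or not.

```javascript
// Sketch: class-based selection without a native fast path. The library
// must visit every node in the tree and test its className, so cost grows
// with total DOM size, not with the number of matches.
function getByClass(node, cls, out) {
  out = out || [];
  var classes = (node.className || "").split(/\s+/);
  for (var i = 0; i < classes.length; i++) {
    if (classes[i] === cls) { out.push(node); break; }
  }
  var kids = node.children || [];
  for (var j = 0; j < kids.length; j++) {
    getByClass(kids[j], cls, out); // every element is visited, match or not
  }
  return out;
}

// Made-up mini tree (class name borrowed from the page under discussion).
var tree = {
  className: "page",
  children: [
    { className: "ToggleHeaderMenu", children: [] },
    { className: "item", children: [
      { className: "ToggleHeaderMenu active", children: [] }
    ] }
  ]
};

var matches = getByClass(tree, "ToggleHeaderMenu"); // finds both nodes
```

On a 2000+ element page, that walk runs for every class-selector query, which is why the same query should never be re-run in a loop.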

Using some basic elimination, it looks like blocking basicFunctions.js helps the most but the CPU still spikes for quite a while so it’s not all from there. Still looking but it may require a better understanding of the page to really dissect what is broken.

As a side note, they also look to be pulling jQuery from the SSL version of Google’s APIs. At least for this page, they would be a lot better off pulling the regular HTTP version.

Sweet. I’m sure Andreas would like to dive into this one.

Yes, the number of class selectors is high there. I noticed that too.

Spot on. In line with what I know about jQuery perf optimization.
I also read about giving a context (e.g. a specific ID) so there is no need to traverse the entire DOM.

Which is true. Interacting with the map is sluggish (e.g. dragging).

Yep, already on my to do list.

If it helps any, here’s an article that Dave Artz pulled together last year on jQuery tips (particularly selectors): http://www.artzstudio.com/2009/04/jquery-performance-rules/

You beat me to it! Let me know if you have any q’s. #5 can be particularly helpful in optimizing class selector queries. Also using “delegate” and “live” can help with event bindings if that’s where you are expending efforts selecting things.
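A sketch of the delegation idea mentioned there, with plain objects standing in for DOM events. Conceptually, `live`/`delegate` put one handler high in the tree and inspect `event.target` as clicks bubble up, instead of first running an expensive class-selector query and binding a handler to every matching element (the stub below is not jQuery API, just the underlying pattern):

```javascript
// One delegated handler instead of N per-element bindings. The handler
// checks whether the bubbled event came from an element carrying the
// target class; no up-front class-selector query is needed at all.
var clicks = [];
function delegatedHandler(event) {
  var cls = " " + (event.target.className || "") + " ";
  if (cls.indexOf(" ToggleHeaderMenu ") !== -1) {
    clicks.push(event.target.id);
  }
}

// Simulate three clicks bubbling up to the single parent-level handler.
delegatedHandler({ target: { id: "menu1", className: "ToggleHeaderMenu" } });
delegatedHandler({ target: { id: "other", className: "item" } });
delegatedHandler({ target: { id: "menu2", className: "ToggleHeaderMenu active" } });
```

It also keeps working for elements added to the page later, which per-element binding in $(document).ready() does not.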

hi Dave!

txs for the reply. I’m not too experienced in optimizing jQuery code for performance.
I’m probably asking for too much, but could you take a brief look at the actual JS code (http://www.aaronpeters.nl/sandbox/klanten/micazu/problematic-documentready-code.js) and feedback here on what you would probably do?

(Btw, when is your next blog post coming? Always love reading 'em)