WebPagetest vs. Keynote TxP/IE9 measurements


We are current users of the Keynote product, running an IE9 Transaction Perspective (TxP) test against our Home Page and using their “Time to Interactive Page” value for comparison with competitors. The tests run against the backbone, so they should report the best possible response times.

Comparing WebPagetest against the Keynote tests at the same (backbone) bandwidth, with the WPT IE9 browser selected, we observe that WPT reports an average Load Time/Time to Interactive Page of under 2 seconds, while the Keynote TxP test shows an average of over 3 seconds.

What I see in the Keynote waterfall graphs is a number of “delays” in the activity for which I can see no obvious reason. Those delays are not occurring in the WPT tests.
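(As an aside for anyone chasing a similar discrepancy: gaps like these can be quantified programmatically from a HAR export of the waterfall. WebPagetest can export HAR; whether your Keynote setup can produce one is an assumption here. A rough Python sketch, with the 200 ms threshold being an arbitrary illustrative choice:)

```python
import json
from datetime import datetime


def find_gaps(har_path, threshold_ms=200):
    """Report idle spans in a HAR waterfall longer than threshold_ms.

    A "gap" is a span where no request is in flight; long gaps with no
    obvious cause (DNS, connect, blocking script) are worth investigating.
    Returns a list of (gap_start_ms, gap_end_ms) relative to first request.
    """
    with open(har_path) as f:
        entries = json.load(f)["log"]["entries"]

    # Convert each entry to a (start, end) interval in ms from the first request.
    starts = [
        datetime.fromisoformat(e["startedDateTime"].replace("Z", "+00:00"))
        for e in entries
    ]
    t0 = min(starts)
    intervals = []
    for e, s in zip(entries, starts):
        begin = (s - t0).total_seconds() * 1000
        intervals.append((begin, begin + e["time"]))

    # Walk the intervals in start order, tracking how far network activity
    # extends; any jump bigger than the threshold is reported as a gap.
    intervals.sort()
    gaps = []
    busy_until = intervals[0][1]
    for begin, end in intervals[1:]:
        if begin - busy_until > threshold_ms:
            gaps.append((busy_until, begin))
        busy_until = max(busy_until, end)
    return gaps
```

Running this over exports from both tools would at least tell you whether the gaps are real idle time on the wire or an artifact of how one vendor draws the chart.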

We’ve opened a support case with Keynote to try to determine the reason for the delays, and they are questioning whether WPT is actually using the IE9 browser to perform the tests.

What can you tell me about WPT’s use of IE9, and how you would expect it to compare with Keynote’s TxP tests?

Larry Rosenberg

Yes, WebPagetest uses an actual IE9 browser to run the test. The architecture is quite similar to Keynote’s TxP agents, and they should be familiar with the WebPagetest agents as well. I know they are aware of them; when I was at AOL we actually provided Keynote some links to the WPT code for fixing issues that we were seeing in their agents.

It’s hard to say for sure without knowing more, but my guess is that it is related to the actual machines and how they are deployed. If I remember correctly, Keynote runs their TxP agents in multiple Windows Terminal Server sessions, with several agents per machine.

The WebPagetest agents are stacked a little differently: each tester runs in its own VM, with multiple VMs on a physical machine. One key difference is that the WebPagetest machines run on SSDs to make sure there is no bottleneck from sharing I/O across multiple agents on a single machine.

Another possibility is how the browsers are instrumented: I believe they use the remote COM interfaces into IE, while WPT uses code injection and runs directly inside the browser. I’m not sure if there’s some overhead introduced there.

This is mostly speculation on my part about what could be causing the gaps, based on what I have seen before, but yes, WebPagetest uses real browsers.

Btw, to answer the last part of your question: I don’t really know how the two should/would compare if you were running WebPagetest at native speeds. There are a TON of variables that come into play (CDN geo-location, routing, etc.) that could cause differences just from the networks the agents run on. The gaps you are seeing, though, aren’t because of network or routing.

Thanks, Pat; I’ll try to update this thread with what we learn from Keynote. They are currently researching the “gaps” we reported in the waterfall charts, which could have any number of causes.