Comparing results from specific test runs

When doing a visual comparison of multiple tests, is there a way to tell WebPageTest to use a specific run from each test? Right now it seems to be picking a random run, which may not necessarily be the one closest to the average.

It is currently picking the median run, but you can specify the exact run to use by modifying the URL. I need to get around to documenting it, but the tests to be compared are comma-separated, and in each section you can specify the run, label, and cached/first-view information.

For example: http://www.webpagetest.org/video/compare.php?tests=100905_47E0,100905_47E1
This compares the industry-benchmark AOL and Yahoo portals. They are displayed in the order they are listed (100905_47E0 is the test ID for the AOL test, for example).

You can specify the test run to use with -r:X, where X is the run number: http://www.webpagetest.org/video/compare.php?tests=100905_47E0,100905_47E1-r:3

To specify repeat view, use -c:1 (-c:0 is the default and selects first view). The options can be combined: http://www.webpagetest.org/video/compare.php?tests=100905_47E0,100905_47E1-r:3-c:0

To specify the label, use -l: (make sure to URL-encode any spaces or other special characters): http://www.webpagetest.org/video/compare.php?tests=100905_47E0,100905_47E1-r:3-c:0-l:Them

(The industry benchmark tests only keep one video, so changing the run number will not work in these samples, but it will work fine for any tests you run manually.)



Can you compare videos of scripted test runs?

Yep, they behave exactly like normal tests. The (rather large) caveat is that you can only capture (and compare) a single step of a script, so you can’t string together a multi-stage transaction and compare the full sequence.

I have an idea for treating the full sequence as a single step that would be easy to implement, so I’ll see if I can get it done in the next week or so. There will be 2 seconds between individual step actions, but it will be consistent for each step (this is the time it takes to detect that network activity has finished; it is usually trimmed from the end of a step, but I won’t be able to do that if I string the steps together).

Thanks, Patrick. Personally I’m not really interested in comparisons between sites, but in single-step comparisons between different locations.

Actually, side-by-side waterfall comparisons from different locations would be great too. Like an sdiff :smiley: