What do you want to see next?

For a Private Instance setup: the ability to turn on Firebug, YSlow, PageSpeed and Dynatrace on the Firefox test agent and configure them to automatically send their results to a private ShowSlow server?

We are working on automating tests through WPT and it would be great if the WPT agents also ran the other tools and reported those results to ShowSlow …

Something I’d like to see is a more straightforward config system, as I managed to screw it up at least once for every install. Perhaps something along these lines…

  • single config file for the agent for both urlblast and wptdriver.
  • JSON config for the server, especially for locations.ini

Thanks Andy. The plan is to eliminate urlblast and have wptdriver support all 3 browsers (it was working for a while, but I must have broken something and still need to finish getting it to feature parity). That way a single agent codebase could support all 3 browsers and the functionality would be identical.

JSON is a good idea for locations.ini. The tree structure has been a pain in the ass to explain and gets very confusing. I should be able to implement a JSON file with fallback to the INI so it could be a seamless change.
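For what it's worth, the fallback logic could look something like this. It's a sketch in Python even though the server itself is PHP, and the flat JSON layout shown in the comment is purely illustrative, not an agreed format:

```python
import configparser
import json
import os

def load_locations(settings_dir):
    """Load the locations config, preferring a JSON file and falling
    back to the existing locations.ini (illustrative structure only)."""
    json_path = os.path.join(settings_dir, "locations.json")
    ini_path = os.path.join(settings_dir, "locations.ini")

    if os.path.exists(json_path):
        # Hypothetical flat JSON layout, e.g.
        # {"Test_loc": {"label": "Test Location", "browsers": ["Chrome", "Firefox"]}}
        with open(json_path) as f:
            return json.load(f)

    # Fall back to the current INI tree and flatten it into the same shape.
    ini = configparser.ConfigParser()
    ini.read(ini_path)
    locations = {}
    for section in ini.sections():
        if section == "locations":
            continue
        locations[section] = dict(ini.items(section))
    return locations
```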

One thing I am considering that may help is a “test server install” page that checks the PHP version, the GD library and filesystem permissions, dumps a tree of the locations from the config, and shows the locations where agents have connected. That should catch all of the common configuration errors.
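The real page would be PHP, but as a rough sketch of the kinds of checks involved (directory names and paths here are illustrative, not the exact server layout):

```python
import os
import shutil
import subprocess

def check_server_environment(wpt_root):
    """Sketch of install checks: PHP version, GD module, writable directories."""
    problems = []

    php = shutil.which("php")
    if not php:
        problems.append("php binary not found on PATH")
    else:
        version = subprocess.run([php, "-v"], capture_output=True, text=True).stdout
        modules = subprocess.run([php, "-m"], capture_output=True, text=True).stdout
        print(version.splitlines()[0])
        if "gd" not in modules.splitlines():
            problems.append("PHP GD module not loaded")

    # Directories the server writes to must be writable (names are examples).
    for d in ("results", "work", "tmp"):
        path = os.path.join(wpt_root, d)
        if not os.access(path, os.W_OK):
            problems.append(f"{path} is not writable")

    return problems
```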

Thanks Pat,

Sounds good…

I’m currently writing up the process I go through to create ‘all in one’ instances for someone and will post it up when it’s done.

I’ve also got some custom waterfall options that I need to finish (options to control how the URLs are presented, e.g. remove the domain but keep the URL path, etc.) and will contribute them back when I’m done with them.

I am trying to make my customers use WebPageTest rather than Keynote or Gomez, and the biggest pain point they have is that WPT runs one-off, on-demand tests rather than scheduled monitoring, so I am going to write a cron job that runs API-based tests against my private instances.
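In case it helps anyone doing the same, the submit step of such a cron job might look roughly like this. The hostname, API key and location label are placeholders, and the parameters are as I understand runtest.php's API:

```python
import requests

WPT_SERVER = "http://wpt.example.com"  # placeholder private instance

def submit_test(url, location="Dulles:Chrome", runs=3, api_key=None):
    """Kick off a test via the HTTP API and return its test id."""
    params = {
        "url": url,
        "location": location,
        "runs": runs,
        "f": "json",  # ask for a JSON response instead of a redirect
    }
    if api_key:
        params["k"] = api_key
    resp = requests.get(f"{WPT_SERVER}/runtest.php", params=params)
    resp.raise_for_status()
    data = resp.json()
    # On success the response carries the test id plus result URLs.
    return data["data"]["testId"]
```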

I guess I am asking for a Keynote/Gomez alternative that actually runs WPT agents so that I can get all the goodness.

Something like this: http://www.wptmonitor.org/ ? I believe there are also some commercial services that use WPT agents under the covers.

It was built on top of the WPT API.

Hello Pat,

I went looking into wptmonitor to test it and to see what the best practices might be for implementing similar services into our own control panel, but those releases aren’t available anymore? The SVN seems to be unavailable.

Are there any other examples for scheduling WPT tests?

You should install from SVN - that path is correct. The tar files from the other SVN repository are ANCIENT.

There are also code examples for using the API here: http://webpagetest.googlecode.com/svn/trunk/batchtool/ and here: http://webpagetest.googlecode.com/svn/trunk/bulktest/ but they are more for one-time tests and they aren’t packaged apps.
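If you want to roll your own instead, a minimal polling loop looks roughly like this (the hostname is a placeholder, and the endpoint behavior is as I've seen it, so verify it against your version):

```python
import time
import requests

WPT_SERVER = "http://wpt.example.com"  # placeholder private instance

def wait_for_result(test_id, poll_seconds=10):
    """Poll testStatus.php until the test finishes, then fetch the JSON result."""
    while True:
        status = requests.get(
            f"{WPT_SERVER}/testStatus.php",
            params={"f": "json", "test": test_id},
        ).json()
        # statusCode 200 means complete; 1xx means queued or still running.
        if status["statusCode"] == 200:
            break
        time.sleep(poll_seconds)
    return requests.get(
        f"{WPT_SERVER}/jsonResult.php", params={"test": test_id}
    ).json()
```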

Great work. Some suggestions on the stats:

  1. With multiple runs, the main results page states:
    “Performance Results (Median Run)”
    The results appear (?) to be arithmetic means, not median values. Perhaps change the title?

  2. I see that there is an attempt to select one of the (10) test results that represents the “mean”, but how is this determined? On what result (or aggregation) is this selection based? Perhaps provide a doc explaining the method?

  3. If “Visually Complete” is available across all runs, promote that up to this summary…averaged.

  4. For the timings, it would be nice to get further statistics here, though I’m not sure how to determine these, or what type of distributions these all are. For example:

  • mean, median, std. dev, variance, etc…conformant to the expected distribution. For last mile, all things being equal, distributions are relatively ‘normal’ due to the wide range of independent variables. The problem, though, is that 10 trials is probably insufficient to overcome the variance…erg.
  5. For the ‘static values’ (bytes, request #, DOM elements), it might be nice to get a range.

The results are the values from the run that had the median load time (floor() in the case of an even number of tests). It isn’t the median for each value independently.

It picks the median from the load time (though you can modify the metric used through a query param if you want). It gets a little less “correct” in the case of an even number of runs in which case it picks the run that is on the faster side of the technical median.
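In other words, something along these lines (a sketch of the selection logic as described above, not the actual server code):

```python
def pick_median_run(load_times):
    """Return the index of the run used as the 'median run'.

    Runs are ranked by load time; with an even number of runs, the one on
    the faster side of the true median is used, as described above."""
    order = sorted(range(len(load_times)), key=lambda i: load_times[i])
    return order[(len(load_times) - 1) // 2]

# Example: with 4 runs, the 2nd fastest is chosen.
times = [2400, 1900, 3100, 2100]
print(pick_median_run(times))  # -> 3 (the 2100 ms run)
```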

If you access it through the XML or JSON API then you get averages and standard deviations for all of the metrics.
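For example, the per-metric aggregates can be pulled out of the JSON result roughly like this (the field names are as I've seen them in the JSON output, so treat them as an assumption to verify):

```python
import requests

WPT_SERVER = "http://wpt.example.com"  # placeholder private instance
result = requests.get(
    f"{WPT_SERVER}/jsonResult.php", params={"test": "YOUR_TEST_ID"}
).json()

fv_avg = result["data"]["average"]["firstView"]
fv_std = result["data"]["standardDeviation"]["firstView"]
print("mean load time (ms):   ", fv_avg["loadTime"])
print("std dev load time (ms):", fv_std["loadTime"])
```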

Below the table you can click “plot full results” which will plot out all of the metrics across all of the runs. Not quite the same but it gives you a quick way to check the variability.

Do you have a mechanism to test video? Can WPT start showing metrics like initial buffer time, startup time, rebuffering ratio etc?

It would be nice to see a visual chart/graphic of the same elements we see in a waterfall chart, e.g. connection time, DNS lookup time, download time, etc. Maybe as a pie chart? Or is that already somewhere and I have just been missing it?
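One stopgap is to aggregate those slices yourself from the exported HAR; a sketch, assuming the standard HAR 1.2 timing fields:

```python
import json
from collections import Counter

def timing_breakdown(har_path):
    """Sum HAR entry timings (blocked, dns, connect, ssl, send, wait, receive)
    across all requests -- the slices a pie chart would need."""
    with open(har_path) as f:
        har = json.load(f)

    totals = Counter()
    for entry in har["log"]["entries"]:
        for phase in ("blocked", "dns", "connect", "ssl", "send", "wait", "receive"):
            value = entry["timings"].get(phase, -1)
            if value > 0:  # -1 means "not applicable" in HAR
                totals[phase] += value
    return totals

# The totals could then be fed to e.g. matplotlib's pie() for the chart.
```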

I wish the Opera browser were included for testing website performance.

Agreed. I’ve been trying to set this up myself with Firefox and Firebug enabled, but it’s proving to be a very frustrating experience.

What might prove to be a better solution is to run the resulting HAR through YSlow and PageSpeed and post to ShowSlow, instead of letting the browser do it. That way multiple browsers are all supported, and WPT controls which results are posted to SS.
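A rough sketch of that flow, assuming the node-based YSlow command-line tool is installed and accepts a HAR file (the flags, and the ShowSlow beacon step, are assumptions to verify against your versions):

```python
import json
import subprocess

def yslow_grade(har_path):
    """Run the YSlow command-line tool against a saved HAR and return its report.
    Assumes the 'yslow' CLI is installed and supports these options."""
    out = subprocess.run(
        ["yslow", "--info", "grade", "--format", "json", har_path],
        capture_output=True, text=True, check=True,
    )
    # Posting to a ShowSlow instance would be a separate HTTP request to its
    # beacon endpoint; the exact URL and parameters depend on the ShowSlow version.
    return json.loads(out.stdout)
```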

Currently, when I do get something to autopost to SS, I get multiple entries for each WPT test that I run, rather than just an entry for the median result. It gets a little messy, since any outliers show up in SS and make it difficult to see trends over time.

It’s not completely clear what you want to achieve - is your goal to just send one WebPageTest result back to ShowSlow when you run multiple tests?

It would be awesome for some kind of annotations to appear so that we can make notes on specific bars before sharing the waterfall with others.

Is there any “what-if” module to interact with the waterfall? I am looking for something similar to the SPOF module. The use case I have is the following:

(a) Take a given waterfall and form a hypothesis that images are blocking the load, and I want to test it

(b) Could I have a way to “zero out” the download time of all the image bars and see how much the document complete line would be pulled in?

(c) Alternatively, I want to remove the big pink bar and see what happens if I do NOT run any handlers associated with that specific event

You get the drift, right? I want to know which factors are the biggest bang for the buck. If image download/rendering does not affect onLoad, then it’s probably a waste of time to optimize them.
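As a very crude stand-in for that what-if, one could recompute the last response end time from the exported HAR with image receive times zeroed out. This ignores blocking, parsing and dependencies entirely, so it is only a rough lower bound, and the mime-type filter and timing fields are illustrative:

```python
import json
from datetime import datetime

def _parse(ts):
    # HAR startedDateTime is ISO 8601 with a trailing Z
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def naive_whatif(har_path, mime_prefix="image/"):
    """Crude estimate: how far the last response end moves if image
    receive times were zero. Ignores blocking and dependencies."""
    with open(har_path) as f:
        entries = json.load(f)["log"]["entries"]
    t0 = min(_parse(e["startedDateTime"]) for e in entries)

    def end_ms(entry, zero_images):
        total = entry["time"]
        mime = entry["response"]["content"].get("mimeType", "")
        if zero_images and mime.startswith(mime_prefix):
            total -= max(entry["timings"].get("receive", 0), 0)
        offset = (_parse(entry["startedDateTime"]) - t0).total_seconds() * 1000
        return offset + total

    before = max(end_ms(e, False) for e in entries)
    after = max(end_ms(e, True) for e in entries)
    return before, after  # milliseconds from first request start
```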

So the current behavior of WPT and ShowSlow is correct, IMO. The results displayed in SS reflect the ‘median’ result of all runs executed.

If I have other performance tools enabled in the active browser (YSlow, PageSpeed, and DT AJAX) and upload to ShowSlow enabled, each test run shows up, even if it was not the ‘median’ value.

I’m able to work around this by setting up a ‘dev’ profile and encouraging my users to only execute a single test run on the ‘dev’ profile.

It would be awesome if WPT was aware of this additional monitoring and would manage the results from those additional tools, and the upload of those historical results to ShowSlow. I don’t consider that a high priority, though.

Congestion window, a chart of the number of packets on a timeline, TLS-related checks (chain length, etc.)

Time to Interact (TTI)

Perhaps this already exists, but I couldn’t find it. I’d like to be able to specify the response time of a domain, à la the SPOF test, but without complete domain failure. It would allow one to see the effect of a sluggish domain response time on a page’s load. This would have to be an amount of time added to the real response time if the real response time is less than or equal to the desired “slow” time. Not sure what would be the right thing to do if the real time exceeds the “slow” time. ;)