What do you want to see next?

There are several things on my “to do” list for WebPagetest, and I figured who better to help decide what is most important than the users? I can’t guarantee that I’ll work on things in the order voted, since I’ll have to weigh the votes against what we need internally, but they will strongly influence the direction.

Here’s the current list (roughly in the order I was going to work on them):

Compare multiple tests against each other: Initially by selecting multiple tests from the test history, with other ways to select the tests to compare coming later. You’ll get a set of stacked bar charts for times, requests, bytes, etc. Once the basics are done, interleaved waterfalls will also be attempted.

Zoom in on waterfall: At the top of the details page you’ll be able to “zoom” in to the render, DOM, doc complete or fully loaded times, which will show only the requests that had activity during that window and adjust the horizontal scale to match. Eventually it will also support an arbitrary min and max (which could also be useful for getting a fixed time scale when comparing two different waterfalls). This should make it easier to get useful waterfalls for presentations.

Simpler optimization results: On the main results page, instead of showing a thumbnail of the optimization checklist, display a simple table that indicates which checks did well and which didn’t (probably with a better indication of the “critical” and “less important” checks). The optimization checklist would also be grouped by “critical”, “important” and “nice to have”.

Custom Headers and Cookies: The ability to specify custom HTTP headers and cookies to be used for a test (see the sketch after this list for what that amounts to on the wire).

Commenting on tests: At a minimum, when submitting a test you’ll be able to add a note about what the test is for, and it’ll show up in the history as well as when looking at the results. Not sure about letting comments be added after a test has run.

Add more optimization checks: Incorporate more of the checks that YSlow and Page Speed do into pagetest.

Improve the documentation: The current wiki on SourceForge is pretty lame (and SourceForge locked it down when they stopped supporting wikis). This would cover documentation for pagetest, webpagetest, urlblast and the optimization checks.
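For the custom headers/cookies item above, here is a minimal sketch of what that feature amounts to on the wire, using Python’s standard library. The header and cookie values are made up for illustration; this is not how pagetest itself is implemented.

```python
import urllib.request

# Hypothetical illustration of what the feature would inject into each
# test request; the header and cookie values here are made up.
req = urllib.request.Request("http://www.example.com/")
req.add_header("X-Test-Run", "webpagetest-demo")        # a custom header
req.add_header("Cookie", "session=abc123; variant=b")   # custom cookies

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.headers.get("Content-Type"))
```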

I would appreciate a variable download speed setting. My motivation is a target market in a developing country with very slow internet access, yes, slower than 56 kbps.

Thanks for a wonderful tool though!!!

I would like to see packet loss, please, per request and/or initial connection, though I realise that this may be quite hard as WinInet hides this away.

Thanks for what you have implemented though.

FWIW, I do have TCP retransmits over the life of the page load (as well as total packet transmits, so you can calculate a rate). It’s in the raw page data (link in the top-right of the results page). That will at least tell you whether you had any packet loss during the test, but not on which connection. It doesn’t cover inbound packet loss, which is a lot harder to detect, even with raw access to the packets.
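If you want to turn that into a rate, here is a minimal sketch of reading the export. The column names below are assumptions for illustration; check the actual CSV header for the real field names.

```python
import csv

# Rough sketch: compute a packet-loss estimate from the raw page data
# export. Column names are assumed, not the documented format.
def retransmit_rate(path):
    with open(path, newline="") as f:
        row = next(csv.DictReader(f))       # one row per page load
    sent = int(row["Packets Out"])          # assumed column name
    retrans = int(row["TCP Retransmits"])   # assumed column name
    return retrans / sent if sent else 0.0

print(f"{retransmit_rate('page_data.csv'):.1%} of outbound packets retransmitted")
```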

On the (very long) list of things to do is capturing the packets from a test, which will allow better analysis (and more accurate timings on the network view of the traffic).

Great (though that .csv is coming through empty for me at the moment).

I also like the CPU graph very much. I was wondering whether we could have an upstream/downstream bandwidth utilisation graph?

We’ve got a long way to go on our site, but I have an idea that if I can get close to maxing out the customer’s bandwidth then I feel we’re getting close to ‘finishing’.

Also, if we are maxing out uplink bandwidth then I know I need to look at ripping extraneous bloat out of the requests (cookies and the like). It might also help me decide on a parallelisation strategy…

Thanks again

Neil

I have downlink bandwidth. Uplink is a lot harder to capture since you don’t necessarily know when the data actually left (and it’s not necessarily a bottleneck). Hopefully I’ll be adding it in a few days (I need to play with how it looks in the chart).

I’ll take a look at the CSVs. I must have broken something (they’re coming through blank for me as well).

Thanks,

-Pat

The CSVs should be fixed now. Sorry about that.

-Pat

Hi Pat,

Tell you what would really help me: the ability to specify cookies to be sent by urlblast…

All the best

Neil

It’s been on the list and shouldn’t be too hard to implement. I’ll see if I can get a few spare cycles over the next week or so to crank it out.

Firefox support

There can be large differences between the performance of IE and Firefox: JS execution speed can differ significantly, as can how the page is rendered, etc.

I would love to be able to compare the same page in IE7 / IE8 / FF3 / FF3.5

Regardless great work as always.

Thanks
Steve

Firefox (and Chrome and Safari) support is really high on the list of things to get done, but it requires rewriting a good chunk of pagetest (along with some browser-specific code), so I haven’t started yet. It’ll probably be the next “big” change I work on after I get some plumbing changes done.

Hi there - it would be great to be able to choose a test location (e.g. something other than Dulles) when running a comparison report.

Also - is it possible to get a test location in Australia? I find latency can play a big part in the sort of results I’m seeing for my site.

Thanks

Hey there,

I am not sure if this has been mentioned before, but it kind of ties in with your idea of comparing tests from the test history.

The test history page is pretty much a big list of tests everyone has submitted. For users who choose to register, you could add another tab, or even a search filter, on the test history page. This would allow users to see only the tests they have submitted.

Logged-in users should also be able to delete their own test results if you do this, so the information can remain neat and organized.

There’s my two cents for tonight :)

Sincerely,
Travis Walters

I’ve been planning on doing that but it just hasn’t bubbled up. The user information is stored with each test in the history so filtering for your tests would be trivial to implement (may do it today if I get a few cycles).

Deleting your own tests would be a little bit harder. The history is stored in flat files right now so removing things from them isn’t pretty. If/when I put a database store behind them it will become a lot easier (been trying to avoid a database for simplicity on the stand-alone deployments but I should be able to make it optional).
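To make the flat-file point concrete, here is a minimal sketch, assuming a tab-delimited history format (the layout is purely illustrative, not the actual file format): filtering is a single pass over the file, while deleting means rewriting the whole thing.

```python
# Illustrative only: assumes one test per line, tab-delimited, with the
# test ID in the first field and the submitting user in the second.
def tests_for_user(history_path, user):
    matches = []
    with open(history_path) as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if len(fields) > 1 and fields[1] == user:
                matches.append(fields[0])
    return matches

# Deleting, by contrast, means reading every line, dropping the matching
# ones, and rewriting the entire file, which is why it "isn't pretty".
```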

Thanks,

-Pat

I did the easy part :)

The test history page will now default to showing only your tests (if you are logged in), and there is a checkbox you can check to see everybody’s (non-private ones, anyway). You should be able to see your own private tests when you look at the history.

Thanks,

-Pat

Additionally, it would be handy if the agent passed in some basic info upon registering.

Like…

CPU:
RAM:
OS:
Browser:

Pat,

I was looking at the video feature and realized that there is one cool way to improve decision-making there, and you already have all the information: I’m talking about combining the screenshot video with the waterfall diagram!

Basically, screenshot videos show rendering progress and waterfalls show download progress, so it makes a lot of sense to show them side by side to hint at which downloads are actually important and which are not, and so on.

The way I see the result is simply screenshots on top and the waterfall on the bottom, with the waterfall slowly revealing the timeline (you can just crop the waterfall at the appropriate time point).

I wish I had more time to mock it up - it should look just great ;)

I’ve got a pretty good idea of what you’re thinking about, as it’s something that has been asked for internally before. I just haven’t had time to do it because it requires some fairly involved drawing code that would manually draw all of the video frames for the waterfall (and I’d have to figure out how to make the waterfall readable in a video).

Are you thinking of the full waterfall, with just the line items revealed and some form of “you are here” indicator, or just the “active” requests at any point in time (that could end up being quite a few, though)?
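For illustration, the per-frame cropping itself is the easy part; a toy sketch using Pillow, with made-up request timings and geometry, might look like the following. Making the result readable at video size is the real work.

```python
from PIL import Image, ImageDraw

# Toy sketch of one cropped video frame. Request timings, geometry and
# colors are made up; a real version needs labels, per-state coloring, etc.
def waterfall_frame(requests, t_ms, width=600, row_h=12, px_per_ms=0.1):
    img = Image.new("RGB", (width, row_h * len(requests)), "white")
    draw = ImageDraw.Draw(img)
    for row, (start_ms, end_ms) in enumerate(requests):
        if start_ms > t_ms:
            continue  # request hasn't started yet at this frame's time
        x0 = int(start_ms * px_per_ms)
        x1 = int(min(end_ms, t_ms) * px_per_ms)  # crop the bar at "now"
        y0 = row * row_h + 2
        draw.rectangle([x0, y0, x1, y0 + row_h - 4], fill="steelblue")
    x_now = int(t_ms * px_per_ms)  # the "you are here" indicator
    draw.line([x_now, 0, x_now, img.height], fill="red")
    return img

# Three fake requests (start_ms, end_ms); frame taken at the 1-second mark.
waterfall_frame([(0, 800), (250, 1400), (900, 2100)], t_ms=1000).save("frame_1000ms.png")
```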

OK, looks like I’m not too original here ;)

Anyway, I think the main goal is to help people make the connection between what is being downloaded and what is being displayed, or, more importantly, not displayed. The best lesson for people might be that some things should be downloaded later because they aren’t used yet, although it’s hard to say until I see the results.

The connection is not direct, of course, as things take time to render, there are multiple dependencies and all that, but some conclusions can still be drawn.

Now, thinking about all this, it may make more sense to show a connection diagram instead of a waterfall: there are far fewer connections (although still quite a few), they stay open for the duration of the page load (once opened), and their “continuity” is sort of “relevant” to the video’s progression.

I know this is hardly a specific request, but that’s all I can say without seeing something; it’s hard to tell what will work and what won’t here.

Using WebPageTest often, I’ve noticed that I miss having image previews when hovering over waterfall items, similar to what Firebug does.

Shouldn’t be hard to do, I assume.

Sergey
Another thing: I remember in the early days of ShowSlow, I embedded a form for starting WebPageSpeed analysis right into the ShowSlow details page.

Then your form changed and I gave up maintaining this hack-ish compatibility.

I wonder if there is a way to start a test automatically, with some consistent way to embed a form on external pages, or maybe, as an alternative, a link like

http://www.webpagetest.org/test?url=http://www.google.com/

That would just pre-fill the URL field.
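For what it’s worth, generating such a link would be trivial on the ShowSlow side. A minimal sketch, assuming the proposed /test?url=... endpoint existed (it is the suggestion from this post, not a documented API):

```python
from urllib.parse import urlencode

# Sketch of building the suggested pre-fill link; the /test?url=...
# endpoint is the proposal above, not a documented WebPagetest API.
def prefill_link(target_url):
    return "http://www.webpagetest.org/test?" + urlencode({"url": target_url})

print(prefill_link("http://www.google.com/"))
# http://www.webpagetest.org/test?url=http%3A%2F%2Fwww.google.com%2F
```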

I’d be happy to integrate ShowSlow with the web-based AOLPagetest (WebPageTest.org by default) this way.

Sergey
OK, since I’m on it, I wonder if AOLPageTest could send its results as a beacon.

It could be just a summary for the first and second view, as well as grades/scores from the Key Optimizations.
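Something like the following would sketch the shape of it; the payload fields and the endpoint are made up for illustration, and ShowSlow’s actual beacon format may well differ.

```python
import json
import urllib.request

# Purely hypothetical beacon payload and endpoint, sketching the idea
# in this post; not ShowSlow's or Pagetest's real format.
payload = {
    "url": "http://www.example.com/",
    "first_view": {"load_ms": 3200, "requests": 42, "bytes_in": 512000},
    "second_view": {"load_ms": 1100, "requests": 8, "bytes_in": 64000},
    "grades": {"keep_alive": "A", "gzip": "B", "cdn": "F"},
}
req = urllib.request.Request(
    "http://showslow.example.com/beacon",  # hypothetical endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```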

In addition to the desktop version, the web-based version could optionally send links to its reports to a ShowSlow instance.

I think ShowSlow users use the details page as a dashboard, and this way people would be able to get quick links from ShowSlow details pages to Pagetest report pages.