Hello to the group.
Firstly, thank you for making this resource, and the accompanying forum, available.
I am new to performance testing of websites, and I’m evaluating how useful webpagetest.org may be going forward. I’ve written a Python script that calls the WebPageTest API and fetches the results back. It took a while to get the script working, but it now runs a test and prints the Speed Index.
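For reference, the script boils down to something like the sketch below. It’s simplified, not my exact code: the API key and target URL are placeholders, and I’ve trimmed out the error handling.

    import time
    import requests

    API_KEY = "YOUR_API_KEY"            # placeholder
    TARGET_URL = "https://example.com"  # placeholder

    # Kick off a test: 3 runs, JSON output, everything else left at defaults.
    submit = requests.get(
        "https://www.webpagetest.org/runtest.php",
        params={"url": TARGET_URL, "k": API_KEY, "runs": 3, "f": "json"},
    ).json()

    # Poll until the test completes (statusCode 200), then print the
    # first-view Speed Index averaged over the runs.
    while True:
        result = requests.get(submit["data"]["jsonUrl"]).json()
        if result.get("statusCode") == 200:
            break
        time.sleep(10)

    print(result["data"]["average"]["firstView"]["SpeedIndex"])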
But the results I’m seeing make me wonder how useful WebPageTest can be at all. Running the script 7 times gave me Speed Index values of:
1909
1935
1860
1675
1707
1318
1388
These were all the average Speed Index over 3 runs, using the first-view value, with all other parameters left at their defaults. The tests were run back to back, within seconds of each other (the time between tests was just however long it took to rerun the script).
From those numbers it seemed that 3 runs left too much to chance, so I upped the test to average over 9 runs instead (changing only runs=3 to runs=9 in the submit call above). The Speed Index results were then:
2144
2031
1846
1328
1751
And then I hit the daily limit on the number of tests.
I’m at a loss as to how to interpret the fact that the fastest Speed Index is only ~60% of the slowest. That’s a helluva variation for no change to the website.
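Concretely, here’s the arithmetic behind that claim, in plain Python over the numbers above:

    import statistics

    # Every averaged Speed Index value reported above.
    samples = [1909, 1935, 1860, 1675, 1707, 1318, 1388,   # 3-run averages
               2144, 2031, 1846, 1328, 1751]               # 9-run averages

    print(min(samples) / max(samples))   # ~0.61: the fastest is ~60% of the slowest
    print(statistics.mean(samples))      # 1741.0
    print(statistics.median(samples))    # 1798.5, less sensitive to outlier runs
    print(statistics.stdev(samples) / statistics.mean(samples))  # ~0.16, a ~16% spread
    # (The result JSON also has a "median" block alongside "average", i.e.
    # result["data"]["median"]["firstView"]["SpeedIndex"], if that's steadier.)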
There’s also a general downward trend in the results, though it isn’t monotonic. Is that expected?
Is Speed Index just not a reliable stat? Should I be relying on TTFB or some other metric instead?
Is the variation being introduced by leaving “all other parameters set to defaults” (i.e. unspecified)? Do I need to specify the location / browser / etc. explicitly for each test? And if so, which parameters need to be pinned down to reduce this apparent variability?
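To make that question concrete: is this the kind of pinning that would help? The location value below uses the location:browser.connectivity format, but the specific choices (Dulles, Chrome, Cable) are just examples on my part, not a verified recommendation.

    import requests

    API_KEY = "YOUR_API_KEY"            # placeholder
    TARGET_URL = "https://example.com"  # placeholder

    # Hypothetical: pin the test location, browser, and connectivity profile
    # explicitly rather than leaving them to whatever the defaults resolve to.
    submit = requests.get(
        "https://www.webpagetest.org/runtest.php",
        params={
            "url": TARGET_URL,
            "k": API_KEY,
            "runs": 9,
            "f": "json",
            "location": "Dulles:Chrome.Cable",  # location:browser.connectivity (example values)
        },
    ).json()
    print(submit["data"]["testId"])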
Thanks again for the resource, and in advance for any advice you can offer this newb.