This brings the distribution up to date with the public version, including:
[list]
[*]Support added for recording video
[*]Screen shot captured at document complete
[*]Browser status messages are logged and displayed on the screen shot table
[*]Summary metrics split between document complete and fully loaded
[*]Location selection UI re-worked for future expansion
[*]Allow for explicit setting of browser dimensions
[/list]
Installation instructions are in a readme file in the release.
Looks like I broke IE6 with the latest release of pagetest (which is included in the 1.7 update). If you are using IE6, don’t pull down the update - there will be an updated release in a couple of days.
OK, the release has been updated with pagetest 217, which fixes the IE6 problem. If you just want to download the updated binaries, you can get them from here: http://www.webpagetest.org/software/217.zip
I have installed 1.7 as per the instructions (Linux/Apache for the web server, Win2003 for the URLblast server).
When I view test results I can see the waterfall thumbnails, but when I click on them and go to the detail page I don’t get the waterfall graphic, just the results table at the top.
Viewing the HTML page source, it looks as though the output is truncated; the last line I see is line 175.
I have found that commenting out the getRequests() call on line 179 fixes the problem, but I don’t know what the issue with it is.
One of the things that I left out of the 1.7 installer was the GeoIP database. It is supposed to fail gracefully, but it’s possible that is causing the problem. I’ll put up instructions on how to install the database in a little bit.
The getRequests() call is needed to build the data table of requests, so without it the data table (below the connection-view waterfall) will be missing.
What versions of Apache and PHP are you running on the server? What modules are enabled (maybe just shoot me your httpd.conf at pmeenan@webpagetest.org)? I have a clean 1.7 install running in a VM and all of the pieces look to be working correctly, so there might be a config problem somewhere.
If you comment out the error_reporting(0); line at the top of common.inc, PHP error messages should be re-enabled, and that may provide a clue as well.
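Something like this, roughly (the error_reporting(0) line is the one that ships; the two lines after it are just an optional debugging aid):

[code]
<?php
// common.inc -- comment this out temporarily to surface errors:
// error_reporting(0);

// optionally force full reporting while debugging:
error_reporting(E_ALL);
ini_set('display_errors', '1');
[/code]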
[quote=“pmeenan, post:7, topic:83”]
If you comment out the error_reporting(0); line at the top of common.inc, PHP error messages should be re-enabled, and that may provide a clue as well.[/quote]
Ah! That led me to the issue:
Fatal error: Allowed memory size of 16777216 bytes exhausted (tried to allocate 43453970 bytes) in /var/www/webpagetest/Net/GeoIP.php on line 396
Since this is a test/internal server, I was able to simply change the value of memory_limit in php.ini from 16M to 48M, and now it all works.
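For anyone else who hits this, the only change needed was in php.ini (plus an Apache restart):

[code]
; php.ini -- raise the per-request memory cap so the GeoIP
; database fits (48M was enough in my case)
memory_limit = 48M
[/code]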
Thanks. I’ll see if there are any options on the GeoIP database to make it less memory-hungry (I pretty much just plugged it in, but it seems like a really bad idea to load the whole 40+ MB database on every page load).
Yes, the releases of the hosted version are just snapshots of the code that is used to run the public site (usually when there have been enough changes to warrant it and when it is stable). If you feel like living on the bleeding edge, the code is live in SVN here: http://code.webpagetest.org/svn/webpagetest
I’ll probably cut a new release just after Christmas.
There’s no formal roadmap. Most of the enhancement planning is kept in Trac here: http://dev.webpagetest.org/webpagetest/report/6 but a lot of the development is driven by ad-hoc needs (and whatever I feel like working on at the time).
You need to make the runtest.php request with the f=xml parameter and then parse the response to get the xmlUrl, which will have the details.
There is no way (currently, anyway) to get the XML results directly in the response to submitting to runtest.
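Roughly like this (a sketch, not tested against your install - the server name and test URL are placeholders, but f=xml and xmlUrl match the public API):

[code]
<?php
// submit a test and pull the xmlUrl out of the XML response
$submit = 'http://your-server/runtest.php?f=xml&url=' .
          urlencode('http://www.example.com/');
$response = file_get_contents($submit);
$xml = simplexml_load_string($response);
if ($xml && (int)$xml->statusCode == 200)
    echo 'Results will be at: ' . $xml->data->xmlUrl . "\n";
[/code]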
Yeah, that’s what I’ve been doing since the beginning, but I would like a way to get the XML results as an *.xml file.
I tried to parse the *.txt files in the folders, but with a recursive search it takes too much time.
I thought about a timer (about 5 minutes), but I don’t have any way to get the results as a file on the disk.
Getting everything in the same directory would make it much easier to parse.
Is there a reason you don’t want to use the HTTP API to get the XML files? It shouldn’t be any harder than crawling the local directories. You also shouldn’t need to do a recursive search (if you want to go the route of scanning for the files); the test ID can be decoded directly to the file path where the results are stored on disk. You do still need to have logic in place to wait for the test to complete before scraping the results.
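e.g. a simple polling loop (a sketch; it assumes the xmlUrl returned at submit time and that the result XML reports statusCode 200 once complete, 1XX while pending):

[code]
<?php
// poll the result URL until the test finishes, then return the XML body
function waitForResult($xmlUrl, $intervalSec = 30, $maxTries = 60)
{
    for ($i = 0; $i < $maxTries; $i++) {
        $body = file_get_contents($xmlUrl);
        if ($body !== false) {
            $xml = simplexml_load_string($body);
            if ($xml && (int)$xml->statusCode == 200)
                return $body;   // test complete
        }
        sleep($intervalSec);    // still started/waiting, try again
    }
    return false;               // gave up
}
[/code]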
If you want to move to more of a push model, you can modify work/workdone.php to do whatever you want when the test is complete (write the XML out to a specific location, for example, or kick off a script).
I use the HTTP API, but I haven’t managed to get the XML onto the disk.
How would you change workdone.php to export to a specific directory?
Actually, I would like to get the second XML file when the test is done (the one where you can find all the data).
For the moment I use LogParser to ping the page and start a test, and I get the URL that I want.
I collect all of them in a .txt file.
Now I would like to parse that .txt file, where each line is a URL from which I can get my data.
I think the best way is to download each URL as an XML file into one directory and parse them afterwards, instead of requesting the server each time.
As my tests run at four set hours of the day (4 tests per day), I want the script to download each new test as an XML file into the directory …
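Something like this is what I have in mind (just a sketch; the file names are examples):

[code]
<?php
// read the list of result URLs (one per line) and save each as .xml
// (assumes a results/ directory already exists)
$urls = file('tests.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
foreach ($urls as $n => $url) {
    $xml = file_get_contents(trim($url));
    if ($xml !== false)
        file_put_contents("results/test_$n.xml", $xml);
}
[/code]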
You can certainly modify workdone.php to run the code that is in xmlResult and spit an XML file out to disk instead of to the browser. It’s just not designed to do that as things stand, and it’s expecting that if you’re using the HTTP API to submit tests, you will also be using the HTTP API to get the results. All of the information in the XML files is available in the raw CSV files (probably even easier to parse) that are already written out to disk with the test results (*_IEWPG.txt and *_IEWTR.txt).
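If you do go the workdone.php route, the least invasive approach is probably to have it fetch the XML result from the local server once the test is marked complete and write it to a flat directory. A sketch only - the export path and loopback URL are assumptions for your setup, and $id is assumed to hold the test ID that workdone.php is already working with:

[code]
<?php
// at the end of work/workdone.php, after the test is marked complete
$exportDir = '/var/www/webpagetest/export';   // assumed flat export dir
$xml = file_get_contents("http://127.0.0.1/xmlResult.php?test=$id");
if ($xml !== false) {
    if (!is_dir($exportDir))
        mkdir($exportDir, 0755, true);
    file_put_contents("$exportDir/$id.xml", $xml);
}
[/code]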
I tried to parse the *.txt files, but the problem is that I use LogParser from Microsoft, and there’s a problem with the recursive parsing (it takes too much time).
The best way is to add some code to workdone.php to put the *.xml results in one folder; that will be easier to parse.
I’ll try to develop a little bit of code to do that.
Edit:
I have done that; the process works locally, but when I use it in workdone.php, nothing works. Do you have any idea where it could be going wrong?
I have pasted it at the end of the code, after the KeepMedianVideo() function.