We use a scripted test for a page that requires sign-in. The script is below. We added “waitForComplete” to make sure the test doesn’t end until the page has finished rendering. However, we still see some incomplete tests. Please see the attached screenshot.
The major issues are as follows. I’m wondering if there are any logs we can check to see what’s happening during the test. Why is this data missing or incomplete?
No ranking data after all tests are complete.
The test sometimes ends before the target page has rendered, so we see the homepage instead of the target page in the screenshot.
logData 0
//navigate to sign in page
navigate https://signin.ebay.com
//login with your credentials
setValue name=userid USERNAME
setValue name=pass PASSWORD
submitForm name=SignInForm
waitForComplete
logData 1
//navigate to testing page
navigate https://xxx.ebay.com/xxx/xxx
waitForComplete
In short, try setting an authentication cookie instead of manually completing a log-in form. That way you can simply navigate to the authenticated page, which should greatly simplify the script and avoid the strange behavior you’re seeing. The script would look something like this:
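A sketch using WPT’s setCookie command. The cookie name and value here are placeholders, not eBay’s real ones — you’d need to capture the actual auth cookie from a logged-in session (e.g. via dev tools) and substitute it:

```
logData 0
// set the auth cookie captured from a real logged-in session
// (AUTH_COOKIE=VALUE is a placeholder)
setCookie https://xxx.ebay.com AUTH_COOKIE=VALUE
logData 1
// navigate straight to the testing page
navigate https://xxx.ebay.com/xxx/xxx
```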
Thanks for your reply. Actually, waitForComplete does help somewhat. If I don’t add it, most of the tests end before navigating to the target page. With it, most of the tests complete.
BTW, for issues like this, are there any logs we can check? For example, for the missing ranking data, etc.
If you need a manual delay, “sleep” is probably a better option. “waitForComplete” waits for an explicit message from JavaScript on the page, which I’m pretty sure eBay hasn’t implemented.
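For example, to give the target page a fixed amount of time to render (the 10-second duration here is an arbitrary choice, not a recommendation):

```
navigate https://xxx.ebay.com/xxx/xxx
// wait a fixed 10 seconds for rendering to settle
sleep 10
```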
When debugging, it tends to be easier to add “combineSteps” at the beginning and remove the logData 0/1 commands so you can see the full sequence that is being executed.
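Applied to the script in the original post, that debugging version would look something like this (combineSteps merges all the steps into a single reported result so the whole flow is visible):

```
combineSteps
// sign in
navigate https://signin.ebay.com
setValue name=userid USERNAME
setValue name=pass PASSWORD
submitForm name=SignInForm
// then go to the testing page
navigate https://xxx.ebay.com/xxx/xxx
```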
It also tends to be FAR easier to use the exec/execAndWait commands to manipulate login forms, because you can test and debug the JavaScript locally in the dev tools console first, make sure it does what you expect, and then use it as part of a test script.
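As a sketch, the form steps from the original script could be rewritten with exec/execAndWait like this. The field and form names (userid, pass, SignInForm) are taken from the script above, but you should verify the selectors in the dev tools console before relying on them:

```
// fill in the credentials via JavaScript
exec document.getElementsByName('userid')[0].value = 'USERNAME'
exec document.getElementsByName('pass')[0].value = 'PASSWORD'
// submit and wait for the resulting navigation to finish
execAndWait document.getElementsByName('SignInForm')[0].submit()
```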
Speaking of local testing, any chance that some love for the scripting engine is planned for the future?
It’s being used far more than it appears, and it’s become pretty invaluable for measuring user journeys, for example.
I know you’re a very busy guy, and believe me, if I had the knowledge to contribute to the project I would already be submitting PRs along the lines of local test-script debugging, or improving the documentation with good examples for several use-case scenarios.
As it is, a lot of time is spent looking at empty, invalid tests and going by trial and error.
Incidentally, WPT is an epic undertaking, and I, along with I’m sure every single person with performance on their radar, will be eternally grateful to you for building such an essential and critical tool. So take my comments in stride, and keep up the amazing work you’re doing.