Time To Interactive reliability

In our company we would like to start publicizing the TimeToInteractive metric that we gather from regular test runs on our private instance. The issue I have is that the metric is not generated consistently. Some pages have more success than others, but all have some level of missing data. My questions are: is there anything I can do on the test side to increase the odds of getting the TTI number, and is there any back-end work being done in this area to improve its reliability? I don’t want to start evangelizing this metric only to have to face questions from frustrated page owners.

You can increase the “activity time” that the agents wait for to consider a test “done” (or set a minimum test duration) though it is on a per-test basis (no global setting).
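For a scripted test, the per-test setting described above can be applied with the `setActivityTimeout` script command (values are in milliseconds). A minimal sketch, assuming a simple single-step script — the URL is a placeholder:

```
// Wait for 10 seconds of inactivity before considering the test done
setActivityTimeout	10000
navigate	https://www.example.com/
```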

TTI requires a 5-second window of time where the main thread isn’t blocked by long tasks after all of the network activity has stopped (technically the window can reach back before the end time). Usually tests will stop 2 seconds after the network activity stops, so TTI will only report if the main thread was idle for at least 3 seconds before that point (more often than not it isn’t).
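The arithmetic above can be sketched as follows. This is not WebPageTest’s actual implementation, just an illustration of the quiet-window constraint, with the 5-second window and 2-second post-network recording taken from the explanation above:

```python
QUIET_WINDOW = 5.0   # seconds of main-thread quiet TTI requires
POST_NETWORK = 2.0   # seconds the test keeps recording after network goes idle

def tti_observable(last_long_task_end: float, network_idle_at: float) -> bool:
    """Return True if a full 5 s quiet window fits before the test stops.

    last_long_task_end: when the main thread was last blocked (seconds)
    network_idle_at:    when network activity stopped (seconds)
    """
    test_end = network_idle_at + POST_NETWORK
    # The quiet window may begin before network idle, so all that matters is
    # whether 5 s of quiet fit between the last long task and the test end.
    return test_end - last_long_task_end >= QUIET_WINDOW

# Main thread went quiet 3 s before network idle: 3 + 2 = 5 s window fits.
print(tti_observable(last_long_task_end=7.0, network_idle_at=10.0))  # True

# Main thread was busy until 1 s before network idle: only a 3 s window.
print(tti_observable(last_long_task_end=9.0, network_idle_at=10.0))  # False
```

This is why extending the activity time helps: it pushes `test_end` later, giving the quiet window more room to complete.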

The risk of increasing the activity time is that periodic pings have a higher likelihood of extending the test and causing a timeout instead. It would work great for pages that go completely dormant after loading, but it’s a bit risky.

Thanks for the info. It’s what I had expected, since I noticed that the browser thread would often be blocked until the end of the test. You are talking about using “setActivityTimeout”, correct? Is this available through the API for a non-scripted test?

I believe the parameter is “time”, and you pass it milliseconds (one of those undocumented parameters), i.e. &time=10000 for a 10-second activity time.
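A quick sketch of building such a request URL, assuming the “time” parameter mentioned above and the standard runtest.php endpoint — the instance host is a placeholder, and since the parameter is described as undocumented, this should be verified against your instance:

```python
from urllib.parse import urlencode

params = {
    "url": "https://www.example.com/",  # page under test
    "time": 10000,                      # activity time in milliseconds (undocumented)
    "f": "json",                        # response format
}
# "wpt.example.com" is a placeholder for your private instance
request_url = "https://wpt.example.com/runtest.php?" + urlencode(params)
print(request_url)
```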