Reasoning for using median for load time?

Hey,
I would like to understand the thought process behind choosing Load time as the basis for picking the median, as opposed to TTI, LCP, or any of the other metrics that are available.
I ask this specifically because the run that is the median for one metric isn't necessarily the median for the other metrics as well.


Speed Index is actually what's used for picking the median run. You can change it by passing medianMetric=xxx, where xxx is any metric in the results.

The point is to give you a representative test run that is right around the middle in performance. If you want the median of all of the metrics then you’d have to pull that explicitly from the JSON or CSV.
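For example, something like this (a rough Python sketch, not official docs) covers both halves: passing medianMetric on the result request, and computing per-metric medians yourself from the raw JSON. The endpoint, parameter placement, and field names (data.runs[n].firstView.loadTime and so on) reflect my understanding of the usual WebPageTest JSON layout, and the test ID is a placeholder, so adjust as needed.

```python
# Sketch: pick the median run by a different metric, and compute
# per-metric medians yourself from the WebPageTest result JSON.
# Assumes the usual WPT JSON layout (data.runs[<n>].firstView.<metric>);
# the test ID below is a placeholder.
import statistics

import requests

WPT_HOST = "https://www.webpagetest.org"
TEST_ID = "YOUR_TEST_ID"

# 1) Ask WPT itself to pick the median run by a different metric,
#    e.g. loadTime instead of the default Speed Index.
result = requests.get(
    f"{WPT_HOST}/jsonResult.php",
    params={"test": TEST_ID, "medianMetric": "loadTime"},
).json()

median_fv = result["data"]["median"]["firstView"]
print("median run (by loadTime):", median_fv.get("run"))

# 2) Or pull every run and take the median of each metric separately,
#    since the run that is median on one metric need not be median on another.
runs = result["data"]["runs"]
for metric in ("loadTime", "SpeedIndex", "TTFB"):
    values = [r["firstView"][metric] for r in runs.values() if metric in r["firstView"]]
    if values:
        print(f"median {metric}: {statistics.median(values)}")
```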


There are 2 parts to this question:

  1. Which metric to use? Why go for Load time instead of more advanced metrics like TTI, LCP, etc.?

  2. Why use the median, i.e., the 50th percentile?

Let me explain a bit:

The internet is not the same as it was 15 years ago. Back in the day we had pretty static sites, and the simplest way to measure performance was to look at load times. That has changed drastically in recent years: every site and page is rich and likely interactive. In line with this, the barometer for measuring performance has shifted from 'load times' towards the end user, i.e., how users perceive the performance of the website. For example, we've all had the experience of watching a page load, seeing a perfectly rendered button, only to click it and find that nothing happens :huh: Then we go ahead and click it multiple times (i.e., rage click) just to be sure. If it still doesn't respond, we say 'oh you know what, chuck it, I'm gonna go to another site', a.k.a. lost revenue for the website.
Hence, it is critical to quantify the user's experience, and this gave birth to a family of metrics called perceived performance metrics. Some examples are render start, TTI, TFI, first paint, Speed Index, and so on. It is important to use the right metric for the right page (say, for a Flash game site, TTI matters more than load time).

Now that we understand what to quantify, let's look at how to quantify it, i.e., percentiles. This might sound trivial to most of us, but trust me, if I had a penny for every time I had the 'median vs. average' conversation with my clients, I would be a millionaire by now :dodgy:
Quickly running through it: averages are usually ineffective because they smear outliers into a single number, whereas percentiles keep them separate for us to analyse. Of course the tendency is to want 100% of the data points to be top-notch, but unfortunately our networks are not that mature. So the standard recommendation is to start at the median (50th percentile) and later move on to examine the higher percentiles (towards the outliers).
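To make that concrete, here is a toy example with made-up load times (in milliseconds): a single bad outlier drags the average well away from what most users actually saw, while the median stays put and the higher percentiles surface the outlier instead of hiding it.

```python
# Toy example with made-up load times (ms): nine typical runs plus one outlier.
import statistics

load_times = [900, 950, 1000, 1020, 1050, 1100, 1150, 1200, 1250, 9000]

mean = statistics.mean(load_times)                 # dragged up by the single outlier
median = statistics.median(load_times)             # the "typical" experience
p95 = statistics.quantiles(load_times, n=20)[18]   # 95th percentile: where the pain lives

print(f"average: {mean:.0f} ms")   # ~1862 ms -- a number nobody actually experienced
print(f"median:  {median:.0f} ms") # ~1075 ms -- what the typical run looked like
print(f"p95:     {p95:.0f} ms")    # exposes the outlier instead of averaging it away
```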

Apologies for the long answer, but I hope that clarifies :angel:


Enjoyed your post @sankalp_91! Just today, ordering dinner online (COVID closures for all restaurants where I am right now), I was quite sure the restaurant had designed their website to dissuade customers from ordering!! :joy:

The page would "load" (paint) in 0.5 seconds (fast), but you couldn't click anything for at LEAST 3-4 seconds (and every single item you added to your order caused a full-page postback and reload that reset the scroll position/viewport). So I'm madly rage-clicking non-functional menu buttons trying to order dinner for my family. I swear it took 10 minutes to order 4 items. :man_facepalming:
