Massive Testing Differences Between and Other Tests

Hi everyone,

I have always used GTmetrix, Pingdom, and to a lesser extent Google PageSpeed Insights to assist with site optimization. A friend had mentioned in passing, so I decided to check it out. I'm mystified by the results here and by just how massive the difference is, especially regarding load time:



Our parent company/organization is a world-wide, web-based consortium, so this site is being real-world tested for things like load time literally on every continent with numerous browsers, different operating systems, etc. We have never had this site load slower than 4.8 seconds anywhere in the world, and haven’t been able to replicate that single instance since.

Anyway, the results on have me nervous, lol. Can anyone help interpret these profound differences?

Many Thanks.

The difference I see is in the content download: in the WebPagetest run, the file takes some time to download, and some other files are only downloaded after it.

The file is about 1 MB, and the WebPagetest run uses a Chrome browser simulating a cable internet connection, so it has more latency and less bandwidth: 1073 KB over 5362 ms ≈ 200 KB/s ≈ 1.6 Mbps. The cable profile, according to the specs, is 5 Mbit down / 1 Mbit up with 28 ms latency. That was a bit on the slow side; I don't know whether WebPagetest subtracts the existing latency to the website or just adds the 28 ms on top.

GTmetrix does the same 1073 KB in 731 ms, which works out to about 1.4 MB/s ≈ 11 Mbit/s.

Pingdom's numbers would imply something like a 24 Mbit connection.
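The throughput math above can be sketched in a few lines; the transfer size and times are the figures quoted from the reports, so treat the results as rough estimates:

```python
# Back-of-the-envelope effective throughput for each test, from the
# numbers quoted above (1073 KB payload; seconds from each report).
def effective_mbps(kilobytes: float, seconds: float) -> float:
    """Effective throughput in megabits per second."""
    return kilobytes * 8 / 1000 / seconds

size_kb = 1073  # ~1 MB page payload

for test, seconds in [("WebPagetest", 5.362), ("GTmetrix", 0.731)]:
    print(f"{test}: {effective_mbps(size_kb, seconds):.1f} Mbit/s")
# WebPagetest comes out around 1.6 Mbit/s, GTmetrix around 11.7 Mbit/s,
# consistent with the throttled-cable vs. backbone explanation.
```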

I would test with another webpagetest location or at a different time of day.

Be wary: when sites that test your website speed show nice results, that does not mean your site is fast; it depends on how the tests are performed. Look in your website logs to see where your users are (mostly) coming from and what the average times are.

The ultimate truth is going to be your real-user data. If you're using Google Analytics, then page load times are automatically collected and reported. If not, there are also several other RUM solutions that can report the timings.
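Once you have real-user timings exported from whatever RUM tool you use, a quick summary beats eyeballing single synthetic runs. A minimal sketch (the load times below are made-up sample data, not figures from this thread):

```python
# Hypothetical sketch: summarizing real-user page load times (ms)
# exported from a RUM/analytics tool, instead of trusting one lab test.
import statistics

load_times_ms = [1800, 2100, 2400, 3100, 4700, 2250, 1950, 5362, 2600]

median = statistics.median(load_times_ms)
# Crude 95th percentile by index into the sorted samples.
p95 = sorted(load_times_ms)[int(0.95 * len(load_times_ms))]
print(f"median={median:.0f} ms, p95={p95} ms")
```

Looking at the median alongside a high percentile shows both the typical experience and the slow tail that averages hide.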

Bas-tester pretty much nailed it. The usual difference between WebPagetest and most normal backbone monitoring is that WebPagetest can limit the connection bandwidth and add last-mile latency to better match end-user configurations (if you select "Native" for the connection type, the limits are removed).

At close to 4MB, that page is on the huge side of things (the average from the top 300k sites is 1.4MB and even that is painful) and it looks like it’s largely because of 3-4 images that need to be compressed better:

If you look at the bandwidth line at the bottom of the waterfall you can see that it’s pretty much pegged:

You should get a linear reduction in overall load time as you reduce the size of those images.

Thanks a lot, guys. I do appreciate your time,

Made some improvements:

Still not able to replicate most of the latency in the real world, though.

One thing I’m having a hard time getting my head around is that ‘First Byte Time’ and ‘Cache Static Content’ grades vary wildly, everywhere from F to A, even when re-running the exact same test… what’s that about?

Also, what is holding up the latter lines in the waterfall?

Thanks again,

First byte time is largely going to be driven by your server response time for the base page. I recommend running 9 runs in a test and looking to see how variable it is (and try a couple of test locations). Depending on your hosting and the app itself it’s not unusual to see wide variation and may be something to keep an eye on.
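To see how variable the server response actually is across those repeated runs, a quick spread check is enough. A sketch with made-up sample TTFBs (not measurements from this thread):

```python
# Hypothetical first-byte times (ms) from 9 repeated test runs,
# summarized to gauge server-response variability.
import statistics

ttfb_ms = [180, 220, 950, 210, 1800, 240, 200, 640, 215]

print(f"median={statistics.median(ttfb_ms)} ms")
print(f"stdev={statistics.stdev(ttfb_ms):.0f} ms")
print(f"spread={max(ttfb_ms) - min(ttfb_ms)} ms")
```

A median far below the worst runs, with a large spread, points at intermittent server-side slowness (hosting or app) rather than a consistently slow backend.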

Cache static content shouldn’t change unless different content is being served (maybe ads or something dynamic like that). When you see one with a bad grade, just click on the grade and it will list the resources that had issues and you can see if they are yours.