Looking for even more performance; realistic?

While I've spent considerable time implementing every speed optimisation I can think of, the results disappoint me.

Before implementing all optimisations, the load time was 0.665s and the start render time 1.089s:

With all optimisations, performance improved only marginally, to a load time of 0.606s and a start render time of 0.892s:

Here’s the visual comparison between both tests:

I have two questions that I’d like to get your opinion on:

(A) Is it realistic to bring the start render time down further? I was hoping to hit 0.6-0.7 seconds.

(B) If it is realistic, what other optimisations can I pursue? I can honestly say I've implemented everything I know, so I'm not sure how to proceed here.

Thanks very much for your input!

I was looking into it more this morning, and am now even more confused. :slight_smile:

In 2016, my frontpage loaded in 0.415s with a start render time of 0.592s:

Since then I’ve done several things to improve/keep good performance, including:

  • Inlining critical above-the-fold CSS,
  • Deferring non-critical JS,
  • Hosting the Google Analytics script locally,
  • Loading JavaScript conditionally,
  • Preconnecting to google-analytics.com,
  • Creating a CSS image sprite with the most requested images,
  • Aggressively uglifying and minifying the CSS, JS, and HTML.
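For reference, several of these tweaks together look roughly like this in the document head (the file names and sprite class are illustrative, not my actual markup):

```html
<head>
  <!-- Preconnect so the analytics host's DNS/TCP/TLS setup happens early -->
  <link rel="preconnect" href="https://www.google-analytics.com">

  <!-- Critical above-the-fold CSS inlined; the rest is loaded later -->
  <style>
    /* minified critical rules, e.g. layout and header styles */
    .hero{background:url(/img/sprite.png) 0 0 no-repeat}
  </style>

  <!-- Non-critical JS deferred so it doesn't block HTML parsing -->
  <script src="/js/site.min.js" defer></script>
</head>
```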

But now the load time is 0.390s (so roughly the same), while the start render time has increased to 0.886s:

I really don't get why ~400ms is now spent between the page being downloaded and the start of rendering, especially with such a tiny amount of inline CSS and a page size of just 3.23kb. If anything, I'd expect the gap between load and start render to shrink, not grow.

I'm not sure why you would want to optimize the site that much, as those improvements won't be noticeable to end users. Unless it's just an exercise for its own sake.

It’s best to run a few tests (at least 3) to get an overall picture, as there will always be some variability.

As you can see from the two tests in your last post, the main difference was the connection to Google Analytics; the browser render time can also vary in WebPageTest.

For further optimization you could inline all the CSS, since you don't have much. Also test the site without the deferred scripts/CSS, to see whether all scripts/CSS download in parallel.

In reality there's probably no advantage in loading GA locally, as it will likely be in the user's browser cache anyway.

[quote=“clubberz, post:3, topic:10334”]
I’m not sure why you would want to optimize the site that much as those improvements won’t be noticeable to end users. Unless it’s just an exercise for it’s own sake.[/quote]
That's true, it's more like a hobby and a challenge for me than anything else. Plus I want performance at least as good as when I started implementing speed tweaks, because otherwise the additional work (like creating image sprites) isn't worth doing in the future.

[quote=“clubberz, post:3, topic:10334”]
It’s best to run a few tests (at least 3) to get an overall picture, as there will always be some variability.[/quote]
Thanks, I’ll take that more into consideration.

Yesterday I tested different ways to load the JS (async, defer, or DOM script injection), but performance didn't seem to differ much.
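For anyone following along, the three loading variants I compared look like this (the script path is just an example):

```html
<!-- defer: fetched early, executed after HTML parsing, in document order -->
<script src="/js/app.js" defer></script>

<!-- async: fetched early, executed as soon as it arrives (order not guaranteed) -->
<script src="/js/app.js" async></script>

<!-- DOM script injection: behaves like async, but is fetched later because
     the browser's preload scanner can't see the URL in the markup -->
<script>
  var s = document.createElement('script');
  s.src = '/js/app.js';
  document.head.appendChild(s);
</script>
```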

I've just tested inlining all CSS versus putting it all in a separate file. I don't see much difference; the first five bars are with the CSS inlined, the last five with the CSS in a separate file.

The worst result with all CSS inlined is as bad as the worst result with the CSS in a separate file, and the typical difference seems to be in the tens of milliseconds. If it weren't for that single orange bar showing good performance with the CSS inlined, I'd say the choice doesn't matter for the page I tested.



What are your thoughts on choosing which approach to use when the results are virtually identical?

That's probably true, but I host Google Analytics locally so that I can set a cache lifetime longer than the 2 hours Google uses, which Google PageSpeed would otherwise complain about. I agree that normal website visitors won't notice a performance benefit from this (and likely already have it in their browser cache), but I'm not sure how search engines look upon it (i.e., the PageSpeed score).

I didn't even know about that graph, thanks for sharing it. There's still something blocking rendering. Can you:

  1. Move the async JS to the footer, just before the closing HTML tag (there's so little of it that you could actually inline it along with the CSS file).
  2. Remove the data-URIs for the images and change the data-src attributes to src. Chrome downloads the resources it thinks it needs as soon as it receives the HTML file; here the image src links are added via JS, so they're downloaded later in the waterfall. That's why the images load after the JS file rather than in parallel.
  3. Change the connection speed to FIOS to see what the time would be like on a fast connection.
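Point 2 above means going from the JS-driven markup to plain src attributes, roughly like this (the attribute names are those commonly used by lazy-load scripts; the file names are illustrative):

```html
<!-- Before: the preload scanner can't see the real URL, so the image
     only starts downloading after the JS has run -->
<img data-src="/img/photo.jpg" src="data:image/gif;base64,R0lGOD..." alt="photo">

<!-- After: the URL is in the markup, so it downloads in parallel with the JS -->
<img src="/img/photo.jpg" alt="photo">
```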

I wouldn't worry about the PageSpeed Insights score; it's not directly related to load time (a higher score doesn't mean the site is faster), nor does it relate to SEO. It also doesn't currently take the performance benefits of HTTP/2 into account. It's just a list of suggestions about what could be slowing down your site; it's better to analyse the waterfall instead to see the load impact of each resource.

Thanks for your reply. I've removed the data-URIs for the images, but I'm a bit on the fence about how much this helped, partly because a cache MISS now has a bigger effect for the images. The effect might be cumulative, though, once I change the JS code delivery.

However, if I had to summarise this topic in one question, it would be: how do I get the start render time before the load time?

Because while my website's performance isn't bad, the page only starts rendering at about 1 second, even when I disable all JavaScript and the defer-loaded CSS file:



Now if I compare that with another website in my niche, I see that although the other website has a much higher load time, it renders much sooner and therefore feels quicker too:



If I compare these two tests, the visual progress of that other webpage is much better:

(It’s of course debatable how bad it is that a user has to wait 1 second before the blank page starts showing something, but I’d like to get the start render time around or before the load time.)

I’m getting a bit confused here. :slight_smile:

I learned here that the domContentLoaded event fires when both the DOM and CSSOM are ready. My domContentLoaded happens at 0.302s, and domInteractive occurs at the same time.

The DOM and CSSOM are then combined into the render tree, whose layout is computed and painted on the screen.

Does that mean my tiny inline CSS and small HTML page take almost 0.7 seconds to render (start render time minus domContentLoaded)? That seems very long, especially since this WebPageTest run was performed in a desktop browser (mobile would be much slower to process, I figure).
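One way to sanity-check this gap outside WebPageTest is the browser's Performance API. A small helper you can paste into the console (my own sketch, not something from the test tool):

```javascript
// Gap between domContentLoaded and first contentful paint, in milliseconds.
// Returns null where the entries aren't available (e.g. outside a browser).
function renderGap() {
  const [nav] = performance.getEntriesByType('navigation');
  const paint = performance
    .getEntriesByType('paint')
    .find((e) => e.name === 'first-contentful-paint');
  if (!nav || !paint) return null;
  return paint.startTime - nav.domContentLoadedEventEnd;
}

// Logs the gap in ms (null where the timeline entries don't exist)
console.log(renderGap());
```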

Looks better. Try testing with the Dulles, Thinkpad location; it's closer to the rendering power of a modern desktop. I think the Dulles, VA location is a little under-powered.


I didn't block the defer script above, so you could potentially shave more time off this.

The rendering time may be less noticeable on the competitor's site because it happens while the external scripts are still being downloaded.

Thank you very much for this comment! It really helped me put things into perspective; I was too tied up in that one testing location and in treating the WebPageTest results as "the truth".

After seeing that test result I've become a bit more sceptical of WebPageTest results (in other words, I'm not relying on them too much) and made a couple of website changes:

  • No longer hosting Google Analytics locally (as mentioned earlier in this thread).
  • No longer deferring the JS file; it's now loaded async (for earlier fetching). (I found that the start render time is really held up by the CSS and the HTML itself, not the JS, so I might as well load it async without script injection.)
  • Removed one JS file and the JSON request.
  • Put all CSS in one file that's referenced externally.

I know the latter isn't the very best approach, since inlining the CSS would/should give better performance for the first page view. However, looking at the load time of the CSS file and the small amount of CSS I'd otherwise defer-load, I estimate this costs around 75-100ms on the initial page load.

I could optimize that further, but for such a small benefit I'd rather keep the CSS external for better browser caching: my pages have a browser cache lifetime of 30 minutes, while my CSS is cached in the visitor's browser for 6 months.
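Those split cache lifetimes can be expressed with Cache-Control headers; an nginx-style sketch of my setup (the location patterns are illustrative, and other servers use different syntax):

```nginx
# HTML pages: short-lived, so content updates show up within 30 minutes
location / {
    add_header Cache-Control "public, max-age=1800";
}

# CSS: cache in the visitor's browser for 6 months (180 days)
location ~* \.css$ {
    add_header Cache-Control "public, max-age=15552000";
}
```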

Here’s the Dulles VA Thinkpad result (Chrome, FIOS connection):
And the Dulles VA (Chrome, Cable) result:

If I looked only at those WebPageTest results, the performance would still seem quite bad (a start render time in excess of 1 second for such a lightweight static HTML page). But I can't really square those numbers with how the website feels when I browse it myself (with caching disabled in Firefox or Chrome), and I also see that Pingdom puts the same page at a load time of 0.3 seconds:

My remaining question: Is there an optimisation I have overlooked so far? I'd of course still like to improve the start render time if possible, but I feel I've checked most of the boxes (that I can influence), even though WebPageTest doesn't agree yet. :slight_smile:

(Thanks again for your help Clubberz!)