Start render time slower in repeat view - Chrome

So I’ve been banging my head for days trying to figure out this problem.

Every single time I run a test with Google Chrome, the repeat view render time is slower than the first view.
(links to three repeat view and three first view test results)

So the difference in render time is noticeable. Usually the first view render time is ~900ms and the repeat view is ~1.6s.
I don’t know why there is such a big difference; the repeat view should be faster than the first view.
I also use flush() to flush the header so that the browser can render the webpage faster. I tried setting every cache header to explicitly tell Chrome not to cache the page; the result is the same.

Does anyone have any idea why there is such a difference?

Try capturing video to see what is being displayed in each case at first render. My guess is that in a repeat view run, Chrome is able to fulfill requests from cache fast enough that it doesn’t yield to paint like it does when fetching from the network, so the first render ends up displaying a much more complete page.

Actually, if you just go look at the screenshot page for each, you can see what the start render screenshots looked like, and yes, in the repeat view case the images are included.

I see, is there any way to work around this without setting no-cache on all the images? If I set the page to no-cache and add a usleep right after the header is flushed, will that work?
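
For reference, something like this is what I mean (a sketch only; the 100 ms pause is an arbitrary illustrative value):

<?php
// tell Chrome explicitly not to cache this page
header('Cache-Control: no-store, no-cache, must-revalidate');
header('Pragma: no-cache');
header('Expires: 0');

// ... echo the <head> ...

flush();         // push the headers and buffered HTML to the browser now
usleep(100000);  // illustrative 100 ms pause before sending the rest of the page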

I wouldn’t try to optimize it on your side. Go to crbug.com and open a bug for Chrome to have it not delay render in that case.

Alright, thank you for your help

There is nothing wrong. The start render is ambiguous. The visual progress and visually complete are much more important.

Overall your site is pretty quick. You’ve done a good job so far.

On the repeat view the Browser already knows what is coming and what is needed to begin rendering. On a first view the Browser will start with some preliminary rendering as soon as the HTML is received. Quite often the Browser will have to re-start rendering based on the responses to subsequent requests.

Notice on the first view the DOM is loaded AFTER start render. On the repeat view the DOM is loaded BEFORE start render.

Visual Progress may be a better indicator although I do not see that in your JSON results ( http://www.webpagetest.org/jsonResult.php?test=131114_7S_8b881b5b107b0d2c8e60eca22a26db87 ) probably because you chose Chrome. I use IE assuming it will be a slower worst case result.

Your results do show an attribute “VisuallyComplete” for each frame_000XX.jpg, which from a user’s perspective would be most important.

Also you have 37 CSS errors. This can slow down the first view as the Browser attempts to deal with the errors.

Only use border-radius if absolutely necessary; it slows the page render time. The same goes for gradients.

You use a transitional doctype; always use strict. Strict has fewer rules for the Browser and will load the DOM faster. You should use XHTML Basic 1.1 if possible.

Also you should be compressing the HTML with gzip encoding. In your PHP add

ob_start("ob_gzhandler");

as the first command. This will give you buffering and gzip compression and should reduce your transmission time.
Then, after the HTML is generated, add an

ob_flush();

This may improve your Time to First Byte.

You are not caching the HTML. If it does not cause operational issues, add

header('Cache-Control: max-age=XXXX');

right after the ob_start. XXXX= the number of seconds to cache.
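
Putting those pieces together, a minimal sketch (3600 here is just an example value for XXXX):

<?php
ob_start("ob_gzhandler");               // buffering + gzip compression
header('Cache-Control: max-age=3600');  // cache for one hour, for example

// ... generate and echo the HTML ...

ob_flush();  // hand the buffered, compressed HTML to the server
flush();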

Your index.php creates the HTML with 26% white space, increasing your transmission time by 33%.
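
One way to strip that white space is a second output buffer with a callback. This is only a sketch; it assumes the page has no <pre>, <textarea>, or inline scripts where the white space is significant:

<?php
function collapse_ws($html) {
    // collapse every run of white space into a single space
    return preg_replace('/\s+/', ' ', $html);
}
ob_start("ob_gzhandler");  // outer buffer: gzip the (now smaller) output
ob_start('collapse_ws');   // inner buffer: minify first, then gzip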

You should be able to get your TTFB down to 200 ms.

The Plugrush 2oqt widget may be causing issues. On the first view it may be causing a re-start of the rendering. If you can position it before the images, that may help.

You may want to add 7 more sub-domains for your images, e.g. img2. - img8.extraxxx
The browser can only load 6 images simultaneously from img1., so you have some serious blocking going on. With 8, the Browser can load 48 images simultaneously. Bandwidth is then the bottleneck rather than blocking.

There are faster Hosting Services. Yours is somewhat slow at 200 Kbyte/sec. I like to see at least 1MByte/sec. For reference Google transmits at about 6MByte/sec.

Don’t do this unless you have a very specific and good reason - it’s a very good way to cause self-induced packet loss by oversubscribing the client connection. In some cases having an extra shard can be a good idea, but having too many will just download all of the resources in parallel but really slowly. You’re better off getting things set up so that the images are downloaded with the visible ones first and then letting the others load after (either by order in the HTML or by explicitly lazy-loading them).

Will Chan (Chrome Networking) has a good write-up on it here: https://insouciant.org/tech/network-congestion-and-web-browsing/

I see you now have only 2 CSS errors; this is good. The bad news is you now have 173 HTML errors. Like I said, XHTML Basic 1.1 has fewer rules. But even with HTML5, which has the most lenient set of rules, you still have 83 HTML errors.

I recommended XHTML Basic 1.1 because it is the W3C Mobile Initiative’s choice. I did add “if possible”. I write my HTML to the XHTML standard, but sometimes it is not possible or is more trouble than it is worth.

The biggest HTML issue you have is the selector id attribute value. I assume the reason you use an image URL as the id is to make it easy for JavaScript to get the name of the image. HTML5 is the only doctype that allows the slash character. But additionally, you have a space between [%] and videos. Whitespace is not allowed in an id attribute with any doctype.

A better way to associate values with an id attribute is to use a numerical value. Although HTML5 allows ids to start with a number, I would still start with a letter to be compliant with all Browsers. I usually start my ids with an x followed by the numerical value, or an x- if I want to embed multiple values and split them on the dash.

I associate an object with the id using an array.

To get the id from the event:

var id = event.target.id;

then get the numeric value from the id

var globalID = parseInt(id.substr(1));

if I wanted two objects associated with one id then I’d use

id="x-0-1"

then use index = id.split('-'); to get the index values for an array:

var pic = new Array();
pic[0] = new Array(); // initialize the sub-array before assigning into it
pic[0][1] = 'image0-1.jpg';

function setSrc(e){
var id = e.target.id;       // e.g. "x-0-1"
var index = id.split('-');  // ["x", "0", "1"]
document.getElementById(id).src = pic[index[1]][index[2]];
}

But it is easier to pass the id’s numerical value to the function when, on page load, each object is globally stored in a numerically indexed array.

Ex: associate two ids with their numerical values, which relate to another indexed object (assume two images with the ids “x0” and “x1”):

var img = new Array();
img[0] = document.getElementById('x0'); // declared global on page load
img[1] = document.getElementById('x1');
var pic = new Array();
pic[0] = 'image0.jpg';
pic[1] = 'image1.jpg';
function setSrc(id, n){ img[id].src = pic[n]; }

Sometimes it is more efficient to use a two-dimensional array, e.g. pic[0][1] = 'image0-1.jpg'.

Ex: sequentially change the image in the mouseover event:

var img = new Array();
img[0] = document.getElementById('x0'); // declared global on page load
img[1] = document.getElementById('x1');
var pic = new Array();
pic[0] = ['image0-0.jpg', 'image0-1.jpg', 'image0-2.jpg', 'image0-3.jpg'];
pic[1] = ['image1-0.jpg', 'image1-1.jpg', 'image1-2.jpg', 'image1-3.jpg'];
var nIntervId;
var globalID;
var index = 0;
var max = 4;
function mouseOverPic(id){ globalID = id; nIntervId = window.setInterval(picTimer, 750); } // pass the function, not a string
function mouseOutPic(){ clearInterval(nIntervId); }
function picTimer(){
img[globalID].src = pic[globalID][index++];
if (index == max){ index = 0; }
}

I am very anal when it comes to efficiency and try not to use branch commands (e.g. if/else). I would replace the “if” with an array value:

var nextPic = new Array(1, 2, 3, 0);
function picTimer(){
img[globalID].src = pic[globalID][index];
index = nextPic[index];
}

[quote="pmeenan, post:8, topic:8450"]
Don’t do this unless you have a very specific and good reason - it’s a very good way to cause self-induced packet loss by oversubscribing the client connection.
[/quote]

Good point. That was something recommended in a W3C Best Practices course. I passed it on without thinking it through. I did preface it with “may” rather than “should”.

I can only guess how the Browser programmers implement features. In theory it is conceivably a good way to speed things up. Not knowing about the slow-start congestion issue, I went under the assumption that the Browser programmers would figure out how to make it work. The problem is that there are too many entities that do not play well with others (e.g. Microsoft, Apple). That said, there should be a much better solution for fixing all the current inherent page load delays.

I was impressed with William Chan’s article. He thought it through for me. Nice to know there is someone out there trying to solve these issues.

In the waterfall chart it is so typical to see a lot of TTFB green and very little transmission blue. The page under test in this forum topic accentuates the problem. Even though his server is rather slow (under 200KB/sec), the green-to-blue ratio is still very large.

I now wonder if CSS Sprites would help?
I would like to see the IETF come up with a protocol making it easy to combine multiple HTTP page elements into a single connection and eliminate the subsequent connection and TTFB delays. If they started today they might have something workable in ten years.

They’re currently developing HTTP 2.0, based on SPDY, which will allow for the type of multiplexing you describe.

Thanks for the tips.

The reason why I do not wish to split the static images across multiple subdomains is that the images will load in random order. If I have images 1 to 20 and I add multiple subdomains for the images, on a slow connection they will load like this (1, 9, 2, 20, 18), whereas with one single subdomain the loading is progressive: it will load the first 4, then the next 4, and so on. I might implement lazyload later.

I did fix the CSS errors and used HTML strict (with a few errors still displaying). I did not see any improvement in the start render. I have not tested whether the domContentLoaded is faster, which I’m guessing it is; I’m too lazy to revert back and test it.

My server speed is around ~2MB/s. I’m also using Cloudflare, which adds more to TTFB, but it’s worth it for the CDN. Also, has anyone tested the Cloudflare Railgun? I’m curious how much it would reduce TTFB. I can’t seem to find any real tests of it.

Also, I did not know that the slash and % characters were not permitted in an element id. Again, thank you all for the tips.

Yes, I tested and found the sub-domains would not help. That was a “tip” I picked up in a W3C course and did not think through. I have since done a lot of testing on your page. I created a similar page with 48 thumbnails (you have 44) with sizes from 5K to 15KB (larger than yours) and got the page to load in 0.500-0.700 seconds.

http://www.webpagetest.org/result/131126_RA_17E8/

I did not like the quick change from one image to another in the mouse-over slide show, so I added fade-in and fade-out.

The quick image change gave me an uncomfortable feeling. It’s a very subtle feeling, but I have become attuned to such things.

The human mind does not work that fast. Eye Fixations keep the old image in the mind for about 200 milliseconds before it can even begin to see the new image.

This is a page from the book Mind Hacks. It explains the eye fixation issue.

My plan is to write a blog entry regarding this page. I found a few ways to get the page to load in less than 1 second.