I liked the response about the simplicity of a WordPress plug-in, and it was easy to implement, but why is my load time slightly slower, and shouldn’t that have improved? Most sites I have seen with similar scores have load times of only 1 to 3 seconds. Any other great bits of advice :)? This forum and site rock, and thanks for all the great advice and for helping me get much better test results!
Watch out there. Your before is using IE8 and your after is using IE7. It tells you this at the top, but you can also tell because your before is handling requests with 4 simultaneous connections, whereas your after is handling only 2. This can heavily skew your results.
Oops, you are right…I redid the AFTER using IE8 for both and edited the comparison of the two above, but it left me additional mountains to climb…I am also going to try to make MaxCDN work for me again…I talked to technical support, and the problem was coming from HostGator when they created a CNAME for me.
Wow, that is an odd result there; I had never seen render start after the document has completed. My best guess would be some of your inline javascript being the cause.
Yeah, those results make a bit more sense. As for MaxCDN, keep in mind you don’t necessarily need to mess with CNAMEs unless you want to hide that you’re using MaxCDN. I just used the default URL MaxCDN provided for each pull zone I created. This also has the benefit, as you realized, of simplifying the total system so that there is less likelihood of problems.
My guess would be a combination of factors. Since IE lazy-draws to the screen, the testing on FIOS changes the content fast enough that it doesn’t get a chance to draw the screen until around doc complete. But then it looks like the CPU gets completely pegged, and you have some javascript (or something else really intense) firing onLoad that blocks the main UI thread from painting as well, which pushes the start render out even further.
I’d take a look with Dynatrace Ajax Edition, which is a great javascript profiler for IE; it will tell you what the problematic code is. Another option to hide the problem would be to further delay your onLoad code with a timeout timer, but I wouldn’t recommend doing that until you have fixed the code - the long execution will block the user from being able to do anything with the page as well.
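If you do go the timer route, a minimal sketch would look something like this (the function name is just a placeholder for whatever heavy code currently fires onLoad):

```php
<?php // hypothetical theme snippet, e.g. printed from footer.php ?>
<script type="text/javascript">
window.onload = function () {
    // give the browser a moment to paint before running the expensive code;
    // initHeavyStuff() is a placeholder for whatever is currently hooked to onLoad
    setTimeout(function () {
        initHeavyStuff();
    }, 250);
};
</script>
```

Again, that only hides the symptom - the profiler is where I’d start.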
I’ve seen it happen on a few sites, so it’s not a completely unique situation.
I think you have taken care of most of the easier things (besides getting MaxCDN to work). I don’t know if I’d bother with minifying either the css or js, but if it’s not breaking anything then it’s fine to leave it on.
Improvements from this point on may be outside of what you can do from within WordPress:
You have quite a bit of javascript loading. If you could load the javascript asynchronously or at the bottom of the page it would help the user experience, but I don’t think WordPress gives you that level of control (there’s a sketch of the async pattern just after this list).
You could sprite some of the images/logos together, but again, that may not be something that is easy to do within WordPress.
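For the async javascript point, if you can edit the theme templates directly, the usual pattern looks something like this (the path is made up):

```php
<?php /* hypothetical footer.php snippet: inject a script without blocking parsing */ ?>
<script type="text/javascript">
(function () {
    var s = document.createElement('script');
    s.src = '/wp-content/themes/mytheme/js/widgets.js'; // hypothetical path
    s.async = true; // dynamically injected scripts don't block the parser anyway
    document.getElementsByTagName('head')[0].appendChild(s);
}());
</script>
```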
Try not to compare performance based purely on the grades. They will help you make a given site faster, but two different sites may get the same grades and have completely different performance. They’re more of a starting point on the basics - the waterfall is really where you should spend your time once you get used to reading them.
That was the exact response I was looking for :D…it’s funny that you said that, because I posed this exact question to them but they had not responded yet…they figured it was an error that HostGator made with the CNAME, considering the default URL http://jupiterflhomes.jupiterfloridaho.netdna-cdn.com loads fine.
Do you know if using MaxCDN will conflict with PHP Speedy, or should I be fine with both together?
I appreciate all your help, and I got MaxCDN up and running…they have great customer support, by the way, and they helped a lot. Here is how most of my tests are looking now: WebPageTest Test - Running web page performance and optimization tests... but some of the USA ones rank an “F” for CDN; overall I am happy with all the improvements. Now I have to figure out how to analyze the waterfall :D!
Sorry, San Jose and China don’t have the latest code installed (which recognizes MaxCDN). Should be updated in the next week. It’s a detection problem, not a problem with your CDN implementation.
I have been reading over this post to try to learn about the start render time and how to lower the value to make things appear faster.
So it sounds like having as little in the head as possible is a great way to reduce the start render time. So this would mean JavaScript would be better suited at the end of the body than in the head section (even for asynchronous calls). Why do I always see it encouraged to put JavaScript in the head?
Looks like I can play around with flushing and perhaps get everything above my header displayed quicker.
You do have to be careful about dependencies for inline code, though. It’s “easier” to have it in the head because you can guarantee that the code is loaded before any inline javascript on the page is executed.
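A quick illustration of the dependency problem (jQuery is just an example here):

```php
<?php /* example: inline code that assumes a library has already loaded */ ?>
<script type="text/javascript" src="/js/jquery.js"></script> <!-- blocking, in the head -->
<!-- ... rest of the page ... -->
<script type="text/javascript">
    // this is safe only because the blocking script above has definitely run;
    // load jquery.js async or at the bottom and this can throw "$ is not defined"
    $(document).ready(function () { /* page setup */ });
</script>
```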
In tests I’ve done with flushing (that was November last year), I’ve found that you need to be careful about the size of the initial chunk of html being sent down.
Each browser has a minimum amount of html that needs to be received before it will start parsing the code (http://www.stevesouders.com/blog/2009/05/18/flushing-the-document-early/ - see comment 8). So you may need to pad the initial chunk so that it caters for the vagaries of the browser, in order to initiate parsing (and get a head start on downloading extra resources).
A second quirk I found was that this minimum size seems to be an ‘on the wire’ size. E.g. to take IE7 on WebPageTest (needs 128B), the chunk has to be 128B as it actually goes over the wire - compressed size if you’re compressing, uncompressed size if you’re not. I’m pretty sure I found the same behaviour on Firefox & Safari as well. So the effort to pad a chunk to a minimum size may get borked by compressing the content sent to the browser, forcing you to perhaps add more padding than should be necessary. If you are using the php filter to do compression, it would be interesting to see if you can send an initial chunk uncompressed, perhaps padded to 2KB to capture Chrome, and the rest of the chunks compressed…not sure how feasible that is.
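To make the mechanics concrete, the sort of thing I’ve been testing looks roughly like this (pad size and file name hypothetical, and assuming no output buffering or zlib compression is swallowing the flush):

```php
<?php /* sketch: flush a padded first chunk before the heavy page build */ ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<link rel="stylesheet" type="text/css" href="/style.css" />
<?php
echo str_repeat(' ', 2048); // hypothetical pad to clear the browser's minimum
flush();                    // hand the first chunk to the browser now
// ...the expensive page generation carries on below...
?>
```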
I am curious how players such as Google/Yahoo/etc. have implemented their early-flush strategies. If anyone has any links/info, I’m all ears.
An interesting point from Velocity this year was regarding rendering of a page and divs. It was mentioned that rendering would be blocked if the page was wrapped in a large root div, and that removing the root div and having the page in smaller div sections allowed the sections to be flushed individually, with rendering following on. I haven’t played with this yet. I’m not sure if it delays the start render time or whether it just blocks progressive rendering of the page.
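As I understood the point, the difference is roughly this (untested on my part; the structure is purely illustrative):

```php
<?php /* reported to block rendering until the wrapper closes: */ ?>
<div id="page">
    <div id="header">...</div>
    <div id="content">...</div>
    <div id="footer">...</div>
</div>

<?php /* reported to let each flushed section render as it arrives: */ ?>
<div id="header">...</div>
<div id="content">...</div>
<div id="footer">...</div>
```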
I did end up padding the output at first, but in order to speed up my start render times I started adding more data in the head section before the first flush, and eventually had enough data there that it wasn’t necessary to pad it. I found that throwing in the .css file first (just after the doctype, of course) greatly reduced my start render time, as the .css file got downloaded long before the bulk of the page processing got started. I also added some dummy link tags that force some form of dns prefetching for all of the domains I’m using. All in all it added up to around 530 bytes, which compressed to around 300 bytes (a lot of it is the doctype, which is XHTML so it’s kind of long). Though I wonder if comment #8 in that link is correct - 2KB for Chrome is quite a requirement, and one must wonder whether Chrome in particular looks for 2KB of total payload, compressed or uncompressed, or not, as you said. After a bit of searching, though, I found this: http://www.kylescholz.com/blog/2010/01/performance_implications_of_charset.html
It gives lower figures than previously reported. The figures are still high, but only if the charset HTTP header is missing in the first place; if one is added through the HTTP headers or through a meta http-equiv tag, buffering decreased considerably.
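For reference, my first chunk is along these lines (the domains are made up for illustration, and rel="dns-prefetch" is just one way to do the dummy link tags):

```php
<?php /* sketch of the first flushed chunk described above */ ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<link rel="stylesheet" type="text/css" href="http://cdn.example.com/style.css" />
<!-- dummy link tags to kick off DNS lookups for domains used later in the page -->
<link rel="dns-prefetch" href="//stats.example.com" />
<link rel="dns-prefetch" href="//widgets.example.com" />
<?php flush(); // send this before the bulk of the page processing starts ?>
```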
Interesting - are there any presentation materials or videos from the conference online for that? I’d be interested to look further into it.
It is about 2/3rds of the way down; no video, I’m afraid. I was disappointed by the lack of video for a lot of the talks at Velocity - my biggest criticism for the 2nd year running. There is so much good stuff being said at the same time that it’s impossible to see everything.
I didn’t know about setting the charset in the response headers. I shall have to investigate.
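From what I’ve read so far, it looks like either of these would do it (a PHP sketch, since that’s what this site runs on):

```php
<?php
// declare the charset in the HTTP response header, before any output is sent...
header('Content-Type: text/html; charset=UTF-8');
?>
<!-- ...or equivalently via a meta http-equiv tag early in the head: -->
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
```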