Green-Watch.org Site Optimization

Greetings,

My name is Travis Walters. I am the owner of Green-Watch.org. It is in the process of becoming an environmental website. People will be able to connect with green businesses, purchase green products, and learn how to enrich their lives with new ideas.

First off, I would like to say that I ABSOLUTELY LOVE this page speed testing website. I have been a programmer for about five years now, and until recently I had not really taken page speed into consideration when building my websites.

The first time I ran the page speed tool on this website, I found a large time to first byte on every page of my site (on the first run). I did not understand what this first-byte concept meant. The site always loaded fast for me on subsequent runs, so I did not know the problem existed. I ended up running a blank ColdFusion file through the tester, and that first-byte time was still there. That is when I realized it must be the time it takes for the Application.cfm file to process its queries. On a visitor's first request, my website captures the IP address to find their geographic location and give better search results. Instead of returning the one row it needed, it was reading the entire database table. That was very inefficient; I fixed the query, and the time to first byte is now much lower.
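Roughly, the fix was the difference between these two lookups (a sketch in JavaScript rather than the actual ColdFusion query; the data and names here are made up):

```javascript
// Scanning every row to find one visitor's location is O(n) per request:
function slowLookup(rows, ip) {
  for (var i = 0; i < rows.length; i++) {
    if (rows[i].ip === ip) return rows[i].region;
  }
  return null;
}

// Fetching only the matching row (or using a keyed index) is O(1):
function fastLookup(index, ip) {
  return index.hasOwnProperty(ip) ? index[ip] : null;
}

var rows = [{ ip: "1.2.3.4", region: "PA" }, { ip: "5.6.7.8", region: "OH" }];
var index = {};
rows.forEach(function (r) { index[r.ip] = r.region; });
```

In SQL terms, the same change is adding a WHERE clause (and an index on the IP column) instead of selecting the whole table.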

  • One suggestion that would be nice for webpagetest.org is an easy-to-find description of what the first byte is. If the first byte takes longer than 2 or 3 seconds, a hint might pop up suggesting that you check your database queries to reduce the first-byte time.

I also combined the roughly five JavaScript files that were in the header section of my webpages, and over an entire second came off the load time. I guess this is because browsers wait until JavaScript files are loaded before processing the rest of the page? The merged JavaScript file is not minified or compressed yet; I will see what impact that has on page speed shortly.

I also plan to put together a large list of questions I have about page speed that may help other programmers who are just learning about it.

Sincerely,
Travis Walters

Thanks. I’ve been debating adding a “back end performance” checklist item to catch egregiously slow first-byte times. The main hangup is figuring out what a reasonable threshold would be. I’m probably going to base it on a multiple of the socket connect time, which should approximate the bare round-trip time, and then flag any request whose first byte takes longer than 2 or 3x that.
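Sketched as code, the heuristic would be something like this (illustrative only; the multiplier is the open question):

```javascript
// Flag a slow back end when time-to-first-byte greatly exceeds the
// socket connect time, which stands in for the bare round-trip time.
function slowFirstByte(connectMs, ttfbMs, multiplier) {
  multiplier = multiplier || 3; // default to 3x round-trip
  return ttfbMs > connectMs * multiplier;
}
```

So with a 50 ms connect, a 400 ms first byte would get flagged while 120 ms would not.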

Even though it is more common that the front-end components are causing performance issues, I’ve seen enough cases (particularly with WordPress installs) where the first byte time is ridiculously long that it should be brought to people’s attention.

The part that REALLY needs work is the documentation to tell people how to fix various things. I have a wiki set up that I am starting to populate with content. Once it has enough to be useful I’ll be exposing that as well.

Thanks,

-Pat

Hey there,

So I have most of my JavaScript merged into one file:

http://www.green-watch.org/javascript/mySiteScript.cfm

Neat little trick for coldfusion developers:

<cfset dExpiryDate = CreateODBCDateTime(DateAdd("d",7,Now()))>
<cfheader name="Expires" value="#GetHttpTimeString(dExpiryDate)#">

<!--- Serve a pre-compressed copy to gzip-capable browsers, plain JS otherwise.
      Sending Vary keeps proxies from caching one copy for everybody. --->
<cfheader name="Vary" value="Accept-Encoding">
<cfif cgi.HTTP_ACCEPT_ENCODING contains "gzip">
  <cfheader name="Content-Encoding" value="gzip">
  <cfcontent type="application/x-javascript" deletefile="no" file="#expandpath('compressed/mySiteScript.js.gz')#">
<cfelse>
  <cfinclude template="uncompressed/mySiteScript.js">
</cfif>

The gzip file is 88 KB, and I would assume most people use gzip-enabled browsers. The uncompressed file is 366 KB. If you look at the source code for the file, is there any way to condense it even more? I tried the Dean Edwards Packer on http://jscompress.com/ but it seemed to give me errors on the site.

I would gladly PayPal $10 to the first person who replies with a much smaller (yet functional) version of this JavaScript file. If you take me up on that offer, please include the JavaScript file, your PayPal email address, and directions for turning the uncompressed version into the compressed one. I need to be able to repeat the process for future code changes.

Here are a few questions for the community:

#1 - YSlow gives me a grade E on “Add Expires headers”. When using external images from another domain, is there any way to set an Expires header, or do the images have to be downloaded to my server first?

#2 - Do you recommend using CSS sprites to reduce the number of HTTP requests? If anyone is interested, I have a project that involves creating one. I have about 7 images that are used as background images. These could probably be combined into a sprite. I would need the sprite created and the CSS modifications necessary for the background to appear identical to the way it is now. Let me know if you are interested.

#3 - What are ETags and do you recommend using them? I see YSlow docks you for not having them.

#4 - YSlow also docks me for having too many DOM elements. Is this a common problem? Should I be concerned with reducing the number of elements on my webpages?

#5 - Google Page Speed says I can remove a lot of unused CSS on my home page. I thought having all CSS in one location was a good thing? The cacheable file only has to be downloaded once, and it is a single HTTP request.

#6 - How do you fix the following error?

[quote]Due to a bug in some proxy caching servers, the following publicly cacheable, compressible resources should use “Cache-Control: private” or “Vary: Accept-Encoding”

Consider adding a “Cache-Control: public” header to the following resources:

#7 - Using the webpagetest.org speed test, why is there a gap after the page completes? There looks to be a 2-second interval where nothing is downloaded at all here:

http://www.webpagetest.org/result/100425_7YZF/1/details/

#8 - In Google Webmaster Tools, are the page speed estimates based on document complete or fully loaded?

Thanks in advance for any information.

Sincerely,
Travis Walters
twalters84@hotmail.com
admin@green-watch.org

It looks like your JS file is already minified, so you probably aren’t going to be able to compress it further. Your best bet on that front may be moving to some sort of JavaScript library where people have already optimized code to a high degree. YUI is a great one, and it allows you to load JS on demand which can dramatically boost load time.

As for your questions:

#1 - You would need to download the images to your server in order to configure the expires header.

#2 - Yes, you should most likely sprite your background images. I’m happy to help with that.

#3 - There is a great description of ETags here. In most cases you do not want them.
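If the static files happen to be served by Apache (an assumption; a ColdFusion stack is often fronted by IIS, which has its own settings for this), turning ETags off looks something like:

```apache
# Hypothetical Apache configuration: stop emitting ETags entirely
# (the second directive requires mod_headers to be loaded).
FileETag None
Header unset ETag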

#4 - Reducing the number of DOM elements will make your JS execute faster and make the browser paint the page faster. You are barely over 1000 so you aren’t in terrible shape but reducing the number would definitely help.

#5 - It’s a balancing act. You have a very small amount of CSS, so I think you will be better served by keeping it in one file. You would definitely see an improvement by minifying and gzipping that file, however.
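For what it’s worth, pre-compressing at deploy time (the same approach as the ColdFusion gzip trick earlier in the thread) can be as simple as this; the file name is hypothetical:

```shell
# Minify first with your tool of choice, then keep a .gz copy alongside
# the original so the server can hand the compressed file to capable browsers.
gzip -9 -c style.min.css > style.min.css.gz
```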

#6 - You would need to configure your webserver to send out different HTTP headers with that content. You cannot configure headers for the external images (see #1).
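On Apache, for example (assuming mod_headers is loaded; the values here are illustrative, not a recommendation for your exact setup):

```apache
<FilesMatch "\.(js|css)$">
  Header set Cache-Control "public, max-age=604800"
  Header set Vary "Accept-Encoding"
</FilesMatch>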

#7 - Gaps like that are usually due to JavaScript executing. You have quite a lot of JavaScript, so that could easily be the problem.

#8 - They are looking for the Document Complete event - all of their data comes from people using the Google Toolbar who have opted in to the advanced features.

Hey Jklein,

The JavaScript file on my website currently includes:

  • JavaScript that I wrote myself to handle common things on the website
  • SwfObject Code
  • Zapatec JavaScript Menu
  • Scriptaculous JavaScript Library + Prototype.js
  • Tooltip Library Based on Prototype.js

The Zapatec JavaScript Library is about 180 KB uncompressed and I think the Prototype+Scriptaculous Library is around the same size.

Depending on the price, I would not mind replacing the Zapatec menu with a CSS based menu if the functionality can be the same.

Scriptaculous+Prototype JavaScript Library could be replaced if a tooltip system, autocompleter, and a draggable element system were created.

I am not sure how good you are at JavaScript, but if you are interested, please feel free to give me a quote on all or some parts.

That is something I would most definitely like to do. How much would you charge for this? I believe there are about 8 background images.

However, if we can get the menu CSS merged with the main style sheet, the new sprite could be merged with the sprite for the menu.

Gzipping the style sheet is something I am definitely going to do. I have been making a few minor style sheet adjustments recently.

In the webmaster tools, it says my page speed spiked to about 14 seconds, so I am really concerned with page speed right now. I adjusted the robots.txt file so only the main directory is exposed to crawlers, figuring I could find the script causing the drastic increase that way. However, I suppose that will not help if all their data comes from the toolbar.

Thanks again for the reply and I look forward to hearing from you again.

Sincerely,
Travis Walters

Hello Again,

Few more questions for the community…

If Google is looking for a document complete event, what would stop a website from doing something like:

TestPage.cfm:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">

<head>

  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>Page Title</title>
  
  <script type="text/javascript">
  var myDirectory = "http://" + document.domain;

  function GetXmlHttpObject(handler)  
  {
     var objXMLHttp=null
   
     if (window.XMLHttpRequest)
     {
         objXMLHttp=new XMLHttpRequest()
     }
     else if (window.ActiveXObject)
     {
         objXMLHttp=new ActiveXObject("Microsoft.XMLHTTP")
     }
   
     return objXMLHttp
  }  
  
  function importPage()
  {
     xmlCheckHTML=GetXmlHttpObject();
   
     if (xmlCheckHTML==null){return}

     var myElement = document.getElementById("myImportedPage");

     url=myDirectory+"/testPage2.cfm";
     xmlCheckHTML.open("GET",url,true);

     xmlCheckHTML.onreadystatechange=function() 
     {
	    if (xmlCheckHTML.readyState==4 && xmlCheckHTML.responseText != '')
	    {
		   myElement.innerHTML = xmlCheckHTML.responseText;
		   myElement.style.display = "block";
	    }	
     }    
   
     xmlCheckHTML.send(null);
  }  
  </script>
  
</head>

<body onload="importPage();">

  <div id="myImportedPage"></div>

</body>

</html>

TestPage2.cfm

This is the page contents that should be loaded after the document complete signal is sent. Anything at all could go here...

The document complete message would be sent almost immediately, after only the initial connection and DNS lookup, and then the entire contents of the page would be sent after the complete event.

Unless I am misinterpreting what you mean, from Google’s perspective I would consider this a black-hat trick for page speed.

On my home page, I do have images loading after the document complete message gets sent. This improves the user experience in my opinion. Users do not have to wait for these images to download and can still view the rest of the page.

Will Google set up some sort of guidelines for what can and cannot be loaded after document complete? Where do they draw the line? How do they enforce something like that? People always use black-hat tricks to get ahead in the SERPs - I am not one of them.

I thought this would be an interesting discussion, especially since Google is starting to factor page speed into its algorithm, even if it is just 1 percent of the calculation.

Sincerely,
Travis Walters

Hey there,

So I am looking around to see what is available to replace the huge JavaScript libraries on my website.

Autocompleter:

http://www.dhtmlgoodies.com/index.html?whichScript=ajax-dynamic-list

I could probably get that working by myself.

If anybody wants to work on a CSS-based tooltip and/or menu, please let me know. A job opportunity is available here.

Sincerely,
Travis Walters

Two quick suggestions:

1 - Not sure what your skill set is, but you may be able to do the sprites yourself. It doesn’t sound like you have a complicated implementation, and SpriteMe may be able to do all of the heavy lifting for you. It is a Firefox bookmarklet that can automatically generate CSS sprites and will even give you the CSS changes to get them working.

2 - You may not need to completely replace your JavaScript libraries. You could probably get a lot of the benefit just by moving them out of the head, loading the libraries asynchronously, and using progressive enhancement to add the functionality to the page.

Here are a few blog articles that may help:

http://www.artzstudio.com/2008/07/beating-blocking-javascript-asynchronous-js/

http://www.stevesouders.com/blog/2009/04/27/loading-scripts-without-blocking/

I agree 100% with both of Pat’s points - I had no intention of charging you for making sprites. There are good online tools like SpriteMe, and it’s fairly easy to just make them yourself with a free tool like Paint.NET. You have few enough background images that this wouldn’t take long at all.

The main reason why I was advocating using a library like YUI was because you can probably get all of the same functionality you have now with much less overall JS. Loading scripts asynchronously would help, but reducing the overall amount of JS will get the page to a fully usable state faster (and YUI has asynchronous loading built in).

As for the Google Page Speed bit, as you say, the robots.txt file has no impact on the load time shown in Webmaster Tools. You are also correct that in theory you could load your entire page asynchronously and thus force the document complete event to fire early. There are two reasons why you likely wouldn’t want to do this:

#1 - This would likely increase the overall load time of the page (in real life), since you are creating an artificial HTTP request to fetch the page content, and it would be faster to just load it all on the first page load.

#2 - While Google has said that they are using page speed for search ranking they also say that they “use a variety of sources to determine the speed of a site relative to other sites”, so improving your numbers in the webmaster tools report doesn’t necessarily help your search ranking, it just gives you junk data.

Hey there,

Thanks for the responses. I have a few more questions.

I did create a menu icon sprite for all the 16x16 icons in the JavaScript menu. My skill set is quite broad, but CSS is probably my least favorite thing to do. I did look at the SpriteMe application. It did not recommend making a sprite out of my background images because some are JPEGs and others repeat. JPEGs can easily be converted, but can a sprite handle repeat-x and repeat-y images?

When people say JavaScript files should be merged into one file, do they mean all synchronous JavaScript?

Let me shoot an idea and tell me what you think.

  • Currently all JavaScript is contained in one huge file. It is minified, gzip-compressed, and takes about one second to load on your FIOS test. It is contained in the head section, so it does block the rest of the page.

  • This JavaScript file contains form functions, prototype.js, scriptaculous.js library, prototip library, swfobject, and zapatec.js menu.

  • I could setup five functions: importFormJS(), importPrototypeJS(), importScriptaculousJS(), importPrototipJS(), and importZapatecMenuJS()

  • In each function I could have a XMLHttpRequest that would request gzip files of the libraries. The XMLHttpRequest would be asynchronous so it should not block other resources from being downloaded correct?

Would this method work to stop (or reduce) blocking? Would creating these requests be a bad thing to do? Maybe I am thinking of the wrong way to do this.

Any information will be greatly appreciated. Thanks again!

Sincerely,
Travis Walters

It really depends on what’s in the images. If they are graphics used as background images, then they should probably all be sprited into one PNG8 (and converted from JPEGs if that’s what they are now). If they are photographs, you need the larger color palette of a JPEG, and you probably don’t want to sprite them (unless you have lots of small JPEGs that all appear on the same page). You can repeat sprites easily, but the entire sprite repeats, not a specific section of it. So if you have repeatable gradients or something, your best bet is to put all of the gradients in one repeat sprite (which could be 1 pixel wide if they are vertical gradients).
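To make that concrete, sprite CSS is just one image shifted with background-position; the file names and offsets below are made up:

```css
/* One 32x16 image holding two 16x16 icons, selected by offset */
.icon-leaf   { background: url(icons.png) 0 0 no-repeat; width: 16px; height: 16px; }
.icon-logout { background: url(icons.png) -16px 0 no-repeat; width: 16px; height: 16px; }

/* A 1px-wide repeat sprite with vertical gradients stacked top to bottom;
   each element must be no taller than its gradient slice */
.button { background: url(gradients.png) repeat-x 0 0; }
.header { background: url(gradients.png) repeat-x 0 -30px; }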

People usually mean that you want to reduce the overall number of HTTP requests, but yes it typically applies to synchronous JS. JS files that are loaded asynchronously on demand can be split into feature specific files without any problems.

The issue with your example is that you might have some JS inline on the page that requires external JS to work. If this is the case you would run into a race condition where the external file is racing to download and the page is racing to execute the code on it. There are ways to tie external JS loading to a script block on the page, so you just have to be careful with how you manage your dependencies.

So yes, you could load all of your JS asynchronously and it would reduce blocking and make your page load faster, but you have to make sure that everything still works on the page when you do this.

Hey there,

http://images1.green-watch.org/img/jpg/bodyBG.jpg
http://images2.green-watch.org/img/jpg/register1.jpg
http://images3.green-watch.org/img/jpg/register.jpg
http://images2.green-watch.org/img/jpg/bg_body3.jpg
http://www.green-watch.org/img/png/buttonBG.png
http://www.activegreen.net/img/png/bg_body11.png
http://www.green-watch.org/img/png/bg_input_1.png
http://www.green-watch.org/img/gif/bt_logout.gif
http://images1.green-watch.org/img/gif/bt_logout-over.gif
http://images2.green-watch.org/img/leaf.gif
http://images1.green-watch.org/img/menu/menuIcons.png

Main Style Sheet:
http://www.green-watch.org/inc/style.css

Style sheet called dynamically from the zapatec library for menu:
http://www.green-watch.org/zapatec/zpmenu/themes/barblue.css

Let me know what you think. If it is possible to create one or two sprites off these to reduce the number of requests, I do not mind paying for this job as long as the price is reasonable.

It would be nice to merge style.css with the barblue.css file, but I do not know how difficult it would be to adjust the Zapatec menu JavaScript so it would not call barblue.css. Maybe it’s best not to worry about that file since it is so small?

I did a quick asynchronous test on the combined JavaScript file and I can see a drastic difference already. Some of the JavaScript does not work because of the race condition you mentioned, but I have no problem putting a little effort into fixing that. The good news is my coldfusion pages are built on templates so changing the templates accordingly fixes most of the pages at once.

I will share my results once everything is up and running. Webpagetest.org has really reduced the time it takes for my pages to load with just a few simple changes. I think with a little more documentation and more people in the community (which should be occurring soon due to Google implementing speed as a factor for SERPS), this will be an extremely nice website. I love it so far!

Sincerely,
Travis Walters

Yes, but you will need to have different sprite images for the repeat-x and repeat-y sprites.

This is where “advanced optimization” collides with the easy recommendations for typical sites :)

The blog entries I linked to had samples for pulling down and executing the JavaScript dynamically (the key is in the execution). The loading of the JavaScript itself is easy; where it requires specific application knowledge is in what the JavaScript code does.

You cannot execute any JavaScript that expects the Prototype library to have loaded before you actually load the library, for example, so inline script blocks can be troublesome when you try to delay loading of the libraries. You have to look at the code that uses the libraries, not just the libraries themselves, and figure out how best to execute that code only after the library has downloaded. The blog articles provide sample code that shows how that could be implemented, and it is supported somewhat out of the box by some of the UI libraries (YUI, for example).

When you start getting into async loading of JavaScript you need to have a developer look at it (unless you have the necessary development skills). It is well beyond the typical copy/paste of widget samples.

I would make two sprites:

Sprite 1:
register.jpg
register1.jpg
bg_input_1.png
bt_logout.gif
bt_logout-over.gif
leaf.gif
menuIcons.png

Sprite 2:
buttonBG.png
bodyBG.jpg
…any other vertical gradients you have

Then if you really care about performance, I would probably split bg_body3.jpg into two gradients and put them both in sprite 2. Sprite 2 could then be 1 pixel wide and extremely small file-size-wise. Both sprites should be PNG8s, and when they are done you should run them through Smush.it.

Combining the CSS files is probably not worth it like you say, since it would be a lot of work for little gain.

If you purchase a new domain (www.gwatchimg.com for example), host all of your static content there (images, JS, CSS), and never set cookies on it you will probably also see a performance improvement. Making all of these changes should have a significant impact on your page load time.

I would encourage you to do this yourself. It will help you do this kind of thing in the future without having to rely on others or pay anyone, and you will gain a deeper understanding of how this kind of change affects your site performance (and you will be able to evaluate the ROI).

I’m heading out of town for a few days and won’t be monitoring this thread, but I wish you the best of luck. I’m sure that Pat can answer any questions you have (I’m fairly new to this forum after all :)).

Hey there,

The necessary JavaScript changes are starting to come along. It looks like inline JavaScript is causing a lot of race conditions. However, at least they are fixable, and in the long run this will all pay off.

I think I will outsource this small project on Rentacoder. I love hardcore programming, and I think this would be better left to a designer (:

Let me make some statements, and others can write back and say whether I am right or wrong in my assumptions.

#1 - Several JavaScript files loading asynchronously will, in most cases (when the files are large), load faster than one huge JavaScript file loading asynchronously.

#2 - The exception to statement #1 would be when the initial connection or DNS lookup takes a large amount of time for one or more of the JavaScript files.

#3 - All asynchronous JavaScript files will be loaded by the time the “document complete” message gets sent. I am wondering this because I could use a “body onload” script that assumes libraries are loaded.

#4 - When loading JavaScript asynchronously, can XMLHttpRequest load scripts from different domains, or does it have to be the local domain? What about subdomains? I am wondering because I could get rid of cookie data in the request if either a different domain or a subdomain can be used.

#5 - Do CSS style sheets block other resources from being downloaded like synchronous JavaScript does?

#6 - Can CSS style sheets be loaded asynchronously or would that cause the rendering to look very strange? I do realize I can GZip my style sheet but I am wondering about other possible tweaks with the CSS sheet.

Thanks once again for any information.

Sincerely,
Travis Walters

Probably not. What it will give you is the ability to enable pieces of functionality separately, but the overall time will be longer. If they are all being served from the same domain as the rest of the resources, the extra requests will also consume the limited connections to the server.

Possibly, but it depends on how you inject the JavaScript. If you use XMLHttpRequest, delivery before document complete is not guaranteed (and you’ll want to use a callback handler to execute code when it finishes).

You should probably be looking at the methods that manipulate the DOM and just insert a reference to the code, though. Then you can include whatever you want executed at the end of the code that gets loaded asynchronously, and it is guaranteed to execute when the file is loaded and evaluated.

I wouldn’t recommend using XMLHttpRequest to do your javascript loading. The Google analytics snippet code has a good (well tested) example here: http://code.google.com/apis/analytics/docs/tracking/asyncTracking.html

Just modify it to load your code instead of the analytics code (and remove the analytics-specific variables).

Not directly, but usually everything in the head will block loading of anything in the body (and you’re still constrained by the number of simultaneous connections per domain).

You probably don’t want to do that because the user experience would be pretty bad. The CSS is already loaded asynchronously (with other content in the head, anyway), so all you would really be doing is delaying the display of the styled version of the page.

If you want to get REALLY fancy what you could do is inline the CSS directly into the HTML for the initial visit and reference external cached files for repeat visits. In practice you would implement it something like this:

  • When the page is visited, if the “css cached” cookie is set just reference the css files normally. If not, put it inline and add the delayed loader code to the page

  • The delayed loader would dynamically create a hidden (or 1x1) iFrame a few seconds after the page has loaded that references a special CSS caching page.

  • The CSS caching page would be a blank HTML page that externally references your CSS files and sets the “css cached” cookie.

It’s a fair bit of work so you’d have to REALLY want those few extra milliseconds though.

Hey there,

Thanks for the response.

From the test I ran, I had two JavaScript files loading asynchronously. The initial connection and the time to first byte were about the same for each. One script took 0.4 seconds to load and the other took 0.7 seconds. Since both scripts were loading at the same time, the total was 0.7 seconds. If the two files had been merged, it would have taken about 0.4 seconds longer. So it appears beneficial to load multiple asynchronous files rather than one merged file when the files are quite large.
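The arithmetic behind that, as a quick sketch (times in milliseconds, from my test; this ignores connection overhead, which is why merging is still usually recommended for small files):

```javascript
// In parallel, total download time is roughly the slowest file;
// merged into one file, the same bytes take roughly the sum.
function parallelMs(times) { return Math.max.apply(null, times); }
function mergedMs(times) { return times.reduce(function (a, b) { return a + b; }, 0); }

parallelMs([400, 700]); // 700 - two async files finish together
mergedMs([400, 700]);   // 1100 - one combined file of the same size
```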

With my website, not all features need to be loaded on every single webpage. I save 155 KB of uncompressed (41 KB compressed) material just by leaving out the protoaculous JavaScript library where it is not needed.

I had been injecting the JavaScript this way:

function importProtoaculousJS()
{
   xmlProtoaculousJS = GetXmlHttpObject();

   if (xmlProtoaculousJS == null) { return; }

   url = "http://www.green-watch.org/javascript/protoaculous.cfm";
   xmlProtoaculousJS.open("GET", url, true);

   xmlProtoaculousJS.onreadystatechange = function()
   {
      if (xmlProtoaculousJS.readyState == 4 && xmlProtoaculousJS.responseText != '')
      {
         // Inject the downloaded source into a new script element in the head
         var headID = document.getElementsByTagName("head")[0];
         var scriptNode = document.createElement('script');
         scriptNode.type = 'text/javascript';
         scriptNode.innerHTML = xmlProtoaculousJS.responseText;
         headID.appendChild(scriptNode);
      }
   }

   xmlProtoaculousJS.send(null);
}

It looks extremely close to the Analytics code you sent, except the Analytics snippet sets src on the node instead of using innerHTML.

Does loading the JavaScript via the Analytics method guarantee the JavaScript libraries will be loaded by the time the body onload event triggers?

That is something I did not know, that head elements block the body code from loading. So it may be beneficial to start the asynchronous JavaScript loading in the body rather than the head, so images can download at the same time. I will have to play around with this a bit.

I agree about the user experience being a top priority. Inlining the CSS would make style changes a bit more difficult. I am not sure a few milliseconds would be worth all that hassle - maybe for some people though (:

Thanks again for all the suggestions and advice. I love learning about this stuff and implementing ways to make my site better.

Sincerely,
Travis Walters

Hey There,

I changed my JavaScript importation method to the one you described.

function importZapatecMenuJS()
{
   var headID = document.getElementsByTagName("head")[0];
   var zpmenuNode = document.createElement('script');
   zpmenuNode.type = 'text/javascript';

   var menuInitialized = false;

   // Shared by both handlers so the menu is only built once
   function initMenu()
   {
      if (menuInitialized) { return; }
      menuInitialized = true;

      var myMenu = new Zapatec.Menu
      ({
         theme: "/zapatec/zpmenu/themes/barblue.css",
         source: "menu-items"
      });

      var myMenuBar = document.getElementById("menu");
      myMenuBar.style.display = "inline";
   }

   // Firefox, Safari, and Opera fire onload; older IE fires onreadystatechange
   zpmenuNode.onload = initMenu;

   zpmenuNode.onreadystatechange = function()
   {
      if (zpmenuNode.readyState == 'complete' || zpmenuNode.readyState == 'loaded')
      {
         initMenu();
      }
   };

   zpmenuNode.src = 'http://www.activegreen.net/javascript/zpmenu.cfm';
   headID.appendChild(zpmenuNode);
}

From the tests I have run, neither this method nor the XMLHttpRequest method guarantees that the JavaScript library will be loaded when the body onload event triggers. This is not an issue since I can use onload events, but it is something to be aware of for future reference. Also, when using the XMLHttpRequest method, I tried inserting the JavaScript via innerHTML. That only appeared to work in Firefox.

I am happy to say that under Google Webmaster Tools, there has been a 3.5-second decrease in average loading time already. My crawl stats still show a high time to download a page. I think that is because my robots.txt file blocks most directories; I am going to adjust that now.

There is still a lot of inline JavaScript causing issues. I am working to resolve those issues now as well.

More updates to come.

Sincerely,
Travis Walters

Great.

btw, another way to avoid the race condition would be to put a function call at the bottom of each of the imported JavaScript files (potentially variable-based). For example, at the bottom of the prototype file, put in a call to prototypeloaded() if you need to execute some code as soon as it loads. That is functionally equivalent to the onload method you’re using.
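In other words, something like this (the callback name is just a convention you would pick; the page defines it, and the library file calls it):

```javascript
// Page-side: define the callback before the library is requested.
var prototypeReady = false;
function prototypeloaded() {
    prototypeReady = true; // run code that depends on Prototype here
}

// ---- appended at the bottom of the asynchronously loaded library file ----
// By this point everything above it has been fully evaluated.
if (typeof prototypeloaded === "function") {
    prototypeloaded();
}
```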

Hey there,

For those using asynchronous JavaScript: I modified a function I found on the internet that lets you add handlers to the body onload event. It takes whatever onload handler you had initially and chains another onto it, so both the new handler and the existing ones run. It also handles the case where you add a handler after the onload event has already fired, by executing the handler immediately.

var bodyHasLoaded = 0;

function addLoadEvent(func)
{
   if (bodyHasLoaded == 1)
   {
      // onload has already fired - run the handler right away
      func();
   }
   else
   {
      var oldonload = window.onload;

      if (typeof window.onload != 'function')
      {
         window.onload = func;
      }
      else
      {
         // Chain the new handler after any existing one
         window.onload = function()
         {
            if (oldonload)
            {
               oldonload();
            }

            func();
         }
      }
   }
}

If anyone uses this, make sure you set bodyHasLoaded=1 when the body onload event triggers.
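Here is a quick sanity check of the chaining logic, runnable outside a browser with a stub window object (this uses a simplified version of the function above):

```javascript
// Stub window so the chaining can be exercised without a browser.
var window = {};
var bodyHasLoaded = 0;
var calls = [];

// Simplified addLoadEvent with the same chaining behavior.
function addLoadEvent(func) {
    if (bodyHasLoaded == 1) { func(); return; }
    var oldonload = window.onload;
    window.onload = function () {
        if (oldonload) { oldonload(); }
        func();
    };
}

addLoadEvent(function () { calls.push("first"); });
addLoadEvent(function () { calls.push("second"); });

window.onload();      // simulate the body onload event firing
bodyHasLoaded = 1;

addLoadEvent(function () { calls.push("late"); }); // runs immediately now
// calls is now ["first", "second", "late"]
```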

On another note, I do have a question: how do you enable keep-alives for ColdFusion .cfm pages? The test results on this website always show keep-alives enabled for everything but .cfm pages.

Sincerely,
Travis Walters