How to solve the "0 ms (Request Canceled)" problem?

Hi,

Can anyone tell me what this means? Test results are here.

It happens only when testing on IE8, and it drives me crazy. I want my results to be under 0.5s. From the graph I can tell that I could gain another 0.02s from this 0 ms (Request Canceled), if it can be fixed somehow.

Also, optimization is still not over. I will reduce HTTP requests by about 8 hits (at least I hope so), and will add some additional subdomains for static images. It's a hard road to the under-0.5s goal :slight_smile:

On a side note – webpagetest is a very nice and handy site for web optimization. FTW!


Thanks in advance[hr]
Here is another test – 0.498s \o/ – but that annoying 0 ms (Request Canceled) problem still exists.

I tried using Wireshark with IE8 on my development machine, and I do not see any such requests, or whatever it is.

Help :huh:

I’m taking a look to see if I can figure out what may be causing it. I can’t reproduce it on my dev machine either but there may be something timing related that is triggering it.

A “request canceled” happens when the browser starts a request but closes it before it completes (in this case it looks like it is closing it really quickly and then re-issuing the same request). It may not even make it far enough to hit the wire.

Shame I haven’t implemented the code to do packet captures as an option from test runs (started the code but got distracted by other efforts - will happen at some point though). I’ll look some more and see if I can track it down for you. The systems are getting a little crushed today with the Google announcement (performance being part of pagerank now) so I may not be able to make much progress until things calm down or I can get another system to do some remote debugging on.

Thanks,

-Pat

Thanks for looking into this.

While the 0ms issue remains, here we go: 0.461s \o/

Sorry, as suspected, the traffic shaping wasn’t working so that was for the native FIOS speed. Here is the result on the DSL speed: http://www.webpagetest.org/result/100416_7E3F/

Still VERY good, particularly given that it includes graphics and visual elements.

Did some optimization – namely, CSS sprites. Results (via Dulles, VA USA):
[list]
[*]IE7 ADSL: 1.506s
[*]IE8 ADSL: 1.643s
[*]Dream about IE7 FIOS: 0.340s :slight_smile:
[/list]
What I have noticed is that, using the same location and ADSL, IE8 is a little bit slower than IE7. Is it the same physical machine, or two different ones? [size=xx-small]Based on +/- 10 different test runs[/size]

Now, on topic (the 0ms issue) – I'm not alone. Google has the same issue. Most likely, some kind of bug exists in your test environment.

Additional issue / question about Repeat View: are you sure you correctly assign a FAILED status for Cache Static on HTML files with a 304 response? Results here: Google and me. If you are correct, what is your suggestion for fixing this? Note that both results on first view return Cache-Control: max-age=0, private in the response headers.

Huh, wall of text :slight_smile: Thanks in advance for your answers.

IE7 and IE8 are on different physical boxes, though with identical specs (IE7 actually spans multiple boxes).

I haven’t had a chance to look yet but the Request Canceled issue might be an IE8 behavior and an artifact of me logging requests that were set up but not actually initiated. It’s odd that I only see it under some circumstances though so I would like to understand it better.

Looks like it’s probably worth revisiting the caching to allow 304’s for max-age=0/private responses but only for the base page. The css in Google’s case should have been cached. I’m working on a release right now, I’ll see if I can get the tweak added to the logic and rolled out in the next few days.

Thanks,

-Pat

Huh. I somehow doubt that IE7 is faster than IE8. But maybe in specific cases it is. Who knows… Or, most likely, the boxes differ somehow: wires, switches, load balancing?, drivers, etc.

Soon™ :slight_smile:

Agree. Only base page.

Did some more optimization – namely, javascript defer / postload. Results (via Dulles, VA USA):
[list]
[*]IE7 ADSL: 1.170s
[*]IE8 ADSL: 1.227s
[/list]
Again, IE8 is slower than IE7. Hmmm…

The 0ms issue still exists, but there is only 1 such bad request instead of 2. Most likely because of the deferred JavaScript.

Nice to see that Repeat View issue has been fixed. Good work! [color=#32CD32]Green[/color] grid is good grid :slight_smile:

Also, what is the theoretical best possible page load speed (on ADSL) if the document size is 118KB and there are 8 requests in total (DNS + CSS + etc.)? Is under 1s almost impossible? For First View, is it possible to inject CSS into the base file and then postload all CSS?

The fastest that 118KB can get delivered (theoretically) is:

50ms DNS lookup (assuming it had a long TTL and was cached at the edge)
+50ms socket connect (Assuming the server was co-located at the FIOS head-end)
+50ms for the request (assuming minimal/no cookies)
+700ms to download the 118k at 1.5Mbps (including TCP overhead)

= 850ms

There are some really big caveats that will prevent that from ever happening, but that’s pretty much the floor. The most notable caveat was that I assumed your server was at the other end of the DSL connection and that you were using some form of “TCP acceleration” to bypass slow start.

Realistically, you’re delivering that much content as fast as I’d expect it to be possible.
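Pat's arithmetic above can be sketched in a few lines of Python. The 9% TCP/IP framing overhead is an assumption chosen to land near the ~700ms download figure quoted above; real overhead varies with packet sizes and ACK behavior:

```python
# Back-of-the-envelope floor for delivering 118 KB over 1.5 Mbps ADSL.
# OVERHEAD is an assumed TCP/IP header + ACK factor, not a measured value.

PAGE_BYTES = 118 * 1024
LINK_BPS = 1.5e6          # 1.5 Mbps downstream
OVERHEAD = 1.09           # assumed protocol overhead factor

dns_ms = 50               # cached at the edge (long TTL)
connect_ms = 50           # server co-located at the head-end
request_ms = 50           # minimal/no cookies
download_ms = PAGE_BYTES * 8 * OVERHEAD / LINK_BPS * 1000

total_ms = dns_ms + connect_ms + request_ms + download_ms
print(round(download_ms), round(total_ms))  # 702 852
```

With these assumptions the download alone eats ~700ms of the ~850ms floor, which is why shaving bytes matters far more than shaving requests at this point.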

On the CSS note, yes - what you can do on the server side is inject the CSS if a specific cookie is not set, and reference it externally if the cookie is set. Then, using JavaScript on the page, some time after the document has loaded, create a 1x1 (or hidden) iframe that references a dummy page whose purpose is to cache any external files you would like to reference. Have the dummy page set a cookie so that your server code can tell whether the external files have been cached.

You can also use this technique to pre-cache other css or javascript files that will be needed on your other pages so they will get pre-loaded when someone hits your landing page.
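A minimal sketch of the server-side half of this technique, assuming a hypothetical cookie name (`assets_cached`) and stylesheet path (`/style.css`); the client-side half would create the hidden iframe after onload and have the dummy page set that cookie:

```python
# Decide between inlined and external CSS based on the cache cookie.
# Cookie name and path are illustrative placeholders, not from the thread.

def render_head_css(cookies: dict, css_text: str) -> str:
    """Return the CSS markup for the <head> of the base page."""
    if cookies.get("assets_cached") == "1":
        # The dummy iframe page set the cookie, so the external file
        # should already be in the browser cache - reference it.
        return '<link rel="stylesheet" href="/style.css">'
    # First visit: inline the CSS to avoid an extra blocking request.
    return "<style>%s</style>" % css_text

print(render_head_css({}, "body{margin:0}"))
print(render_head_css({"assets_cached": "1"}, "body{margin:0}"))
```

The first call inlines the stylesheet; the second emits the external reference, trading a few repeated bytes on the first view for a cacheable file on every later page.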

I was able to reduce image sizes by a further 14KB. Total size: 104KB. Results (via Dulles, VA USA):
[list]
[*]IE7 ADSL: 1.095s \o/
[*]IE8 ADSL: 1.330s
[/list]
Again and again – IE8 is slower. Are you sure that both "boxes" are identical? Is it possible that the shaping is more aggressive on the IE8 box?

Additionally, I have created inline CSS with post-load (after the onload event). IE7 disappoints, while IE8 shows some speed improvement, but it is still not fast enough. Results (via Dulles, VA USA):
[list]
[*]IE7 ADSL: 1.228s
[*]IE8 ADSL: 1.256s
[/list]
As you can see, layout.png (50KB) starts downloading as early as possible (except that it requires establishing a new connection [size=xx-small]2nd request[/size]), but it still cannot finish faster – it remains the last item to trigger the document complete event. TCP slow start, or what? Anyway, I think the results are acceptable, so the "embed CSS" project will be frozen for some time.

P.S. 0ms issue still in place.
[size=xx-small]P.P.S. All times are based on 10 runs; the best time is taken.[/size]

Yep, I’m positive that both boxes are identical and the traffic shaping is actually done at the router that is shared between the boxes so I’m positive about that as well.

I am experimenting with running the tests in a VM on much faster hardware but in that case IE7 and IE8 would be on the same box. Here are the results from that setup:

IE7 - 1.001s
IE8 - 1.221s

I’d actually throw away the first IE7 run because it took the hit for caching the DNS results. Looks like you broke the 1-second mark on the faster hardware, with times around 0.983s.

I think IE8’s increased number of parallel connections is actually working against it for a site like yours where you have so few requests.

IE7:

IE8:

Looks like the 0ms request has followed your site to the VM’s as well :dodgy:

Thanks for all the info…

Meanwhile, I worked on response headers. The most significant change I made is adding Content-Type: text/html; charset=UTF-8 right in the response header for all base files. That allowed me to remove the charset declaration from the head section of each and every base file (+/- 1,000 pages). Technically, I have saved just a few bytes, but… same total size, 104KB. Results (via Dulles, VA USA):
[list]
[*]IE7 ADSL: 1.088s \o/
[*]IE8 ADSL: 1.159s wiiii \o/
[/list]
Not sure why (I can speculate that providing the browser with the charset early allows it to render the page faster :idea: true/false?), but the results are the best ever.

I agree, IE8 somehow messes up with that many connections. Hopefully in IE9 they will figure out how many new connections to establish (connections-to-total-requests wise).

[quote=“pmeenan, post:12, topic:157”]
Looks like the 0ms request has followed your site to the VM’s as well :dodgy:[/quote]
gigi :wink:

Currently working on deep pages. Thinking about how to reduce the root page even further – down to 99KB :cool:

[size=xx-small]P.S. English is my 3rd language… after re-reading some of my posts… I messed up… plurals, tense… blah :blush:[/size]

[quote=“laboot, post:13, topic:157”]
I added Content-Type: text/html; charset=UTF-8 right in response header…[/quote]

Side effects may occur… w3.org does not like this approach.

Page still validates, but… Result: [color=#006400]Passed[/color],[color=#FFA500] 1 warning(s)[/color]

And the explanation is silly… No character encoding declared at document level. More / full blah blah…

If they say "It is often recommended", then why is this a warning? Sloppy semantics – they should add a third level of feedback, e.g. Errors / Warnings / Info (aka recommendations).

Thanks w3 for ruining my day!

[size=xx-small]P.S. Until today, all my pages (html/css/xml/pad/etc) validated without a single error or warning. Until today…[/size]

Here we go - managed to reduce the page size to 91KB. The same content, the same images, just a lot smaller. Results (via Dulles, VA USA):
[list]
[*]IE7 ADSL: 1.023s
[*]IE8 ADSL: 1.119s
[/list]
I paid special attention to network packets (MTU-wise) and used Wireshark to watch closely how my packets flow. As a result, all 4 images (via IMG tag) use the packet payload at full efficiency – the last packet is almost fully loaded (just +/- 100 bytes remain free). For example, both "virtual boxes" .png files use just 2 network packets each.

No one on the entire internet talks about this. I googled everything, but the info is just bits and pieces here and there. So, to all who hear my words, I say this: "The most common MTU is 1500 (or 1514), which leaves room for 1460 bytes of actual payload. And the first packet loses an additional +/- 380 bytes to response headers." From this point on, it is easy to create an Excel sheet with the necessary data… Also, response headers will vary by file type, generally by +/- 50 bytes.

:idea: Perhaps you could implement some stats on how many network packets each request takes? Especially for smaller files (say, up to 10-15KB), where every unnecessary packet causes a noticeable penalty. The community would appreciate that. Or is this overkill?

Ok, back to work. Work harder :slight_smile:

More about network packets. Let’s take this test run, namely request #8.

Response header (326 bytes + 4 bytes for the terminating CRLF pair. Total 330 bytes):

[quote]HTTP/1.1 200 OK
Date: Wed, 05 May 2010 06:45:14 GMT
Server: Apache/2.2.15
Last-Modified: Tue, 04 May 2010 09:49:42 GMT
Accept-Ranges: bytes
Content-Length: 2498
Cache-Control: public, max-age=2592000
Expires: Fri, 04 Jun 2010 06:45:14 GMT
Keep-Alive: timeout=5, max=99
Connection: Keep-Alive
Content-Type: image/jpeg[/quote]

As we can see, the image is 2498 bytes long. So how does it fit into packets? If we want to squeeze it into 2 packets, the total image size must be smaller than:
[size=large]payload * packetcount - header[/size]
Translating this to bytes, we get:
[size=large]1460 * 2 - 330 = 2590[/size]
Mathematicians say that 2498 is smaller than 2590. Tadaaa. We win!
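The packet arithmetic above can be written as a small sketch, assuming a 1460-byte payload per packet (1500-byte MTU minus 40 bytes of TCP/IP headers), as stated earlier in the thread:

```python
import math

MSS = 1460  # payload bytes per full packet: 1500-byte MTU minus 40 bytes TCP/IP

def packets_needed(body_bytes: int, header_bytes: int) -> int:
    """How many packets a response (headers + body) occupies."""
    return math.ceil((body_bytes + header_bytes) / MSS)

def max_body_for(packets: int, header_bytes: int) -> int:
    """Largest body that still fits in the given number of packets."""
    return MSS * packets - header_bytes

# Request #8 above: 2498-byte JPEG with a 330-byte response header.
print(packets_needed(2498, 330))  # 2
print(max_body_for(2, 330))       # 2590, so 2498 fits with 92 bytes to spare
```

The same helper shows the cliff: a 2591-byte body with the same headers would spill into a third packet.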

Also, to reduce the response header size, someone might consider removing the Last-Modified header line (e.g. Last-Modified: Tue, 04 May 2010 09:49:42 GMT). But after the image expires (1 month: Expires: Fri, 04 Jun 2010 06:45:14 GMT), there will be a penalty in the form of a full re-download (200) instead of a not-modified (304) response. Think about it.

[size=xx-small]Back to work :)[/size]

You have definitely graduated to the next level of performance optimization :slight_smile: (Just be careful to not over-optimize beyond where you see value in return).

FWIW, if you look in the data table there is a "Bytes In" value that will tell you the number of bytes for the response, including the headers (but not including the TCP overhead). You may want to target a 1492 MTU (granted, just 8 bytes different) because DSL (PPPoE specifically) has 8 bytes of overhead on the wire.

Where you’ll really see the benefit, if you’re going to start looking at packet-level data, is in the base page or the initial request on any new connection. With TCP slow start you can usually get 2 packets out before having to wait for another round trip, so for the base page make sure as much of your head/external references as possible makes it into the first 2 packets.

You’ll also want to see if you can flush the document early. Not sure what technology is serving your pages, but if you can force the head of the document out before making any database or back-end calls, you will speed up getting the page to the users. You’re basically looking to reduce the first byte time on the base page as much as possible (the fastest it can get would be to match the socket connect time - so the green and orange bars would be the same size).
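Pat's early-flush advice can be sketched as a WSGI-style generator: the static head goes out immediately, and the slow back-end work happens afterwards. The function names here are hypothetical placeholders; any server stack that can flush output before its back-end calls works the same way:

```python
# Stream the <head> before doing the slow back-end work, so the browser
# can start fetching CSS/JS while the server is still building the body.

HEAD = "<html><head><link rel='stylesheet' href='/style.css'></head>"

def fetch_body_from_db() -> str:
    """Stands in for slow database / back-end calls."""
    return "<body>...</body></html>"

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html; charset=UTF-8")])
    def stream():
        yield HEAD.encode()              # flushed before any back-end work
        yield fetch_body_from_db().encode()
    return stream()
```

Because WSGI servers send each yielded chunk as it is produced, the first chunk reaches the user while `fetch_body_from_db` is still running, shrinking the effective first byte time for the page's external references.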

[quote=‘pmeenan’]
… to the next level of performance optimization…[/quote]
\o/

The same size, the same images. All the same, except that I have paid special attention to using connections wisely. And the results are (via Dulles, VA USA):
[list]
[*]IE7 ADSL: 0.928s
[*]IE8 ADSL: 0.985s
[/list]
Goal reached! Done with this I am :slight_smile:

:idea: You should charge (at least $10 a month?) for premium services, such as connection view, video recording / filmstrip view, number of runs >1, etc. Of course, basic features remain free. No doubt, you have the best tool on the market! All the other stuff I have googled for is total crap compared to this tool. :heart:

[size=xx-small]P.S. You know in advance that you have at least one customer (subscriber). Guaranteed![/size]

Also, just curious - could you please run this test on your fast VM machine? Please :slight_smile:

As long as you (or anyone else using it) understands that the “hidden” locations are still experimental or not yet deployed, feel free to use them: http://www.webpagetest.org/test?hidden=1

Right now the VMs are the only hidden location (last location - Dulles) and they allow full customization of the bandwidth (as well as the standard pre-defined profiles). I will sometimes hide an existing location if it is not responding until we get it back online, and new locations will be hidden while they get installed, so there are generally no guarantees there :slight_smile:

On the subscription side, my goal has always been to have the site be as close to revenue-neutral as possible. The ads offset the hosting costs and some of the costs for operating the Dulles location and the other locations are being supported by great partners. AOL has essentially been funding the development costs by open sourcing what was an internal tool and allowing me to continue to enhance it (looking to get more corporate branding on the site but historically it wasn’t worth the corporate hurdles). At this point I’d like to see it work to help build up the general community around performance and as Google likes to say “Help make the web a faster place”.

Really nice work on the optimizations btw - you may have a second calling as a performance geek :slight_smile:

More work has been done.

[list]
[*]0.225s - FIOS :heart:
[*]0.902s - ADSL :-/
[/list]
The problem is that the Compress Images score is C. The thing is that 4 images are loaded at the very end (requests 9-12), after Document Complete, after JavaScript, after everything else. JavaScript uses some voodoo to replace the low-quality images with HD images.

:idea: Perhaps, if images are served from an hd subdomain, such as hd.example.com, the penalty should not be applied? Or issue a warning, but do not affect the overall image score in this case? Or is there some other way to serve the highest possible quality images to the user without scoring below A? :huh:

Help! Please.

[size=xx-small]Remember that Google charges for premium services.[/size]