Another random request/thought
When looking at a client site (using a locally hosted version of WebPageTest), we noticed some strange artefacts in the results.
For certain items we were getting large content download times. These download times were inconsistent with the size of the object being downloaded (e.g. a 2 KB file taking approximately 2 seconds to download over a 1.5 Mbps ADSL connection).
One of the things we were playing with on the network equipment was disabling the Nagle algorithm. This set me wondering whether I was seeing an effect caused by this network parameter change.
My hypothesis is that the object is being sent in a large number of mostly empty packets rather than being buffered (by the Nagle algorithm) and sent in efficiently packed packets. Since we are using the traffic-shaping feature, I was curious whether the packets were received serially and whether the artificial delay added was exaggerating the time to download.
It would be useful to know the packet count received for each object so that the impact of enabling/disabling the Nagle algorithm (a valid WPO technique) could be examined without resorting to a packet sniffer.
I did check the raw data first this time and couldn’t find a relevant column. 
Cal
You’re well into tcpdump territory.
I keep track of packets transmitted at the overall page level but don’t have visibility into the per-connection packet flows.
FWIW, I’m not aware of a single web server that doesn’t disable Nagle for performance reasons. Usually the buffering will be done at the application layer to avoid really small writes. If you enable it, there’s a good chance that the end of your response will be delayed by 200 ms while it waits for a full packet.
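For illustration only (not something WebPageTest does for you), disabling Nagle on a socket usually comes down to setting TCP_NODELAY and then coalescing small writes in the application. A minimal Python sketch, with a placeholder host and request:

import socket

# Minimal sketch: open a TCP connection and disable Nagle via TCP_NODELAY.
# The host, port and request below are placeholders, not taken from this thread.
sock = socket.create_connection(("example.com", 80))
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# With Nagle off, every send() can leave as its own (possibly tiny) segment,
# so the application should coalesce small writes itself.
request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
sock.sendall(request)  # one buffered write instead of many small ones

response = b""
while True:
    chunk = sock.recv(4096)
    if not chunk:
        break
    response += chunk
sock.close()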
I would be happy with the count of packets at the page level. Where do I find that info?
The device we were enabling/disabling the Nagle algorithm on was an F5 BIG-IP load balancer. Nagle is enabled by default on these devices (at least on the software version we have). And you don’t want to know how ancient the software stack behind the F5 is. …ancient versions of SunONE web server (though admittedly a version from after the name change from iPlanet…just) and WebLogic application server.
One of the software components in the stack behind the load balancer has a history of sending highly chunked HTTP traffic under certain conditions (I have observed this at another time and place), and it may well be that the load balancer has been smoothing over this behaviour by buffering. Bizarrely, by disabling the Nagle algorithm we may well have worsened performance for some components rather than improving it…but I don’t know (yet) whether this inflated content download time is an artefact of the artificial delay added by the WebPageTest traffic shaper. More testing is slated for later this week.
On the note of per-connection packet flows, it would be interesting to look at that level of detail. Another benefit would be checking whether the first response fits within the initial TCP window, which can be important for things such as ensuring efficient SSL handshakes or, if you are flushing early, making sure the initial chunk is the right size, since it reduces the RTTs involved.
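As a rough illustration of that initial-window check (the segment count and MSS below are assumptions, not measurements from this site):

# Rough sketch: does a flushed first chunk fit in the initial congestion window?
# The values below are assumptions (RFC 6928 suggests 10 segments; older
# stacks commonly used 3-4), not anything measured on this site.
MSS_BYTES = 1460          # typical Ethernet MSS (assumption)
INITCWND_SEGMENTS = 10    # assumed initial congestion window (IW10)

def fits_in_initial_window(chunk_bytes, mss=MSS_BYTES, initcwnd=INITCWND_SEGMENTS):
    # True if the chunk can go out in the first flight, i.e. without an
    # extra round trip spent waiting for ACKs.
    return chunk_bytes <= mss * initcwnd

print(fits_in_initial_window(9 * 1024))              # True with IW10
print(fits_in_initial_window(9 * 1024, initcwnd=3))  # False with an older IW3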
Unfortunately it’s only the outbound packet counts (from the browser) that I keep track of in the raw page data.
You’ll be a lot better served by enabling the tcpdump capture option (in advanced settings) and then analyzing the raw packet captures. You can run all sorts of automated analysis on them and isolate the flows of interest.
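As a rough sketch of that kind of automated analysis, assuming the capture is saved as a .pcap file and the scapy library is available (the filename below is a placeholder):

from collections import Counter
from scapy.all import rdpcap, IP, TCP   # assumes scapy is installed

# Minimal sketch: count packets per TCP flow in a tcpdump capture.
# "capture.pcap" is a placeholder filename, not a path WebPageTest guarantees.
packets = rdpcap("capture.pcap")

flows = Counter()
for pkt in packets:
    if IP in pkt and TCP in pkt:
        # Sort the endpoints so both directions of a connection count as one flow.
        ends = tuple(sorted([(pkt[IP].src, pkt[TCP].sport),
                             (pkt[IP].dst, pkt[TCP].dport)]))
        flows[ends] += 1

# Flows with many packets relative to the bytes transferred are the ones worth
# inspecting for Nagle / small-write behaviour.
for flow, count in flows.most_common(10):
    print(flow, count)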
I had forgotten that tcpdump was part of the feature set! doh! Awesome!
Is anything special needed to get tcpdump working on a private Windows installation of the WebPageTest agent? Or if everything else seems to be working, then tcpdump should too?
tcpdump should work fine on private instances as well - just make sure the WinPcap installer is in the agent directory (or installed).
Alternatively, you can try using Wireshark.
