Time to send request

Given the huge number of changes and improvements to WebPagetest, I was wondering whether it is possible to capture the time to send a request now?

Currently the time to first byte also includes the time to send the request, and I for one would find it interesting and useful to be able to see the send time separately.


Outside of POST requests, the time to send a request is not easily measured from the client side. Most requests are well under 32KB, and the request gets handed straight down to the buffers in the networking stack; in most cases the full request can even be put on the wire immediately. You could possibly try to infer when the server got it by looking at the ACKs, but with delayed ACKs it could be as much as 200ms off.

For most cases I usually use the TCP socket connect time as the RTT to the server, since it’s a pretty clean round trip, and it could be a good guesstimate for the send time (well, actually, cut in half) but it is still really just an estimate.
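As a rough sketch of that estimate (a hypothetical `estimate_rtt` helper, not anything in WebPagetest itself): time the TCP three-way handshake with `socket.create_connection`, which returns once the SYN-ACK has arrived, and treat that as approximately one RTT.

```python
import socket
import time

def estimate_rtt(host: str, port: int = 80, timeout: float = 5.0) -> float:
    """Estimate the round trip to a server by timing the TCP handshake.

    connect() completes when the SYN-ACK comes back, so the elapsed
    time is roughly one clean round trip (no server processing time).
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.perf_counter() - start
    return elapsed

# Per the discussion above: a very rough send-time guess would be
# about half of this handshake RTT -- an estimate, nothing more.
# rtt = estimate_rtt("example.com")
# send_time_guess = rtt / 2
```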

I’m open to any ideas on the subject - I just can’t think of a good way to really measure it from the client side alone.



Do you have any idea how HTTPWatch is measuring time to send?

Do most modern implementations of TCP enable delayed ACKs by default? Can you not disable delayed ACKs for the connections made to the web servers?


Yes, delayed ACK has been the default in TCP stacks for years and you don’t actually have control over it (much less control over the server’s behavior from the client).

I’m not sure what HTTPWatch is measuring but I can pretty much guarantee that it’s not accurate. It could be measuring the time until the send buffer is empty (which would be after the last ACK) but that has the delayed-ACK problem as well as an additional 1/2 RTT.

Would we be able to estimate it and have an indication as to the degree of error likely in the result?

Are we able to measure when the last packet for the request was sent and then add on the 1/2 RTT amount to this measurement to give an approximation as to request send time?

I’m guessing that the send buffer would be emptied when the last ACK is received, so if we kept track of the time each packet was sent and when the send buffer was emptied (or the delayed ACK was received) we would have the time the last packet was sent, and could extremely unscientifically add on 1/2 the RTT to arrive at a figure for time to send (albeit with plenty of caveats).

Is this too much of a kludge? :slight_smile:

Yeah, too much of a kludge for me to be comfortable with. The server can technically start working as soon as the first byte is sent (and there is certainly enough data usually in the first round trip).

Is there something about the send time that you need? I usually just use the socket connect time as a proxy for the RTT and then use that as the floor for the expected TTFB for a request. Anything longer than that is usually back-end delays.
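A minimal sketch of that floor heuristic (a hypothetical `time_request` helper, assuming a plain HTTP server; this is not WebPagetest code): measure the connect time as the RTT floor, then measure TTFB for a simple GET, and treat anything above the floor as likely back-end time.

```python
import socket
import time

def time_request(host: str, port: int = 80, path: str = "/") -> dict:
    """Time the TCP connect (a proxy for one RTT) and the time to
    first byte; TTFB beyond one RTT is likely back-end processing."""
    t0 = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=10)
    connect = time.perf_counter() - t0  # ~one round trip

    request = (f"GET {path} HTTP/1.1\r\n"
               f"Host: {host}\r\nConnection: close\r\n\r\n")
    t1 = time.perf_counter()
    sock.sendall(request.encode())
    sock.recv(1)  # block until the first response byte arrives
    ttfb = time.perf_counter() - t1
    sock.close()

    # connect time is the expected floor for TTFB; the excess is a
    # rough estimate of server-side (back-end) delay
    return {"connect": connect,
            "ttfb": ttfb,
            "backend_estimate": max(0.0, ttfb - connect)}
```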

Mostly it’s curiosity, and the faint suspicion that sometimes delays are unfairly attributed to back-end performance. It’s a section of timing that is difficult to quantify, and therefore there may well be dragons in the deep that we are missing.

For example, a number of clients use front-end appliances to examine the request before it makes it to the backend server doing the work. This is done for security reasons, to block known attack patterns. This processing must have an overhead, but at present it is soaked up in the TTFB metric.

I have no burning need other than to scratch the itch. :slight_smile:

You can always grab the tcpdumps and look, but the front-end appliances are still going to be considered part of the “back end time” from the browser perspective :slight_smile:

They are considered ‘back end time’ from the browser perspective and ‘front end time’ from the server perspective in my experience. Poor, homeless, unwanted quanta of information…

It can be challenging to get packet sniffers allowed on a production corporate network but it looks like tcpdump here I come :slight_smile:

Can you get direct web access to one of the servers behind the load balancer? If so you can grab tcpdumps from WebPagetest for both load balanced and direct to the server (maybe ACL the tester IP if that would help). You can also completely disable traffic shaping by using a custom profile with 0’s for bandwidth and latency (best to use a test agent a bit further away so it’s not crazy fast though).

Now that you’ve reminded me about tcpdump in WebPagetest I’ll be using that. :slight_smile:

I should be able to find a direct route around the F5s with a bit of bartering.

I should have known you already had the bases covered.
