Chaps, I’m pretty sure I want to start some experiments with an increased initcwnd (plus setting tcp_slow_start_after_idle to 0, so the congestion window isn’t reset between requests). (The majority of JSON responses are 2-roundtrip responses, and I rather expect them to be requested less frequently than the 3-second slow-start-after-idle default.)
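Concretely, on a Linux web server the knobs I mean would look something like this (a sketch only; the gateway address, interface name, and initcwnd value are assumptions for illustration, and note that initcwnd lives on the route, not in a sysctl):

```shell
# Disable the congestion-window reset after idle (this is the sysctl behind
# the 3-second behaviour; the default of 1 means "reset after idle").
sysctl -w net.ipv4.tcp_slow_start_after_idle=0

# initcwnd is configured per-route. Hypothetical default route shown;
# substitute your own gateway/interface from `ip route show`.
ip route change default via 192.168.1.1 dev eth0 initcwnd 10

# Persist the sysctl across reboots:
echo 'net.ipv4.tcp_slow_start_after_idle = 0' >> /etc/sysctl.conf
```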
What I’m not sure I understand is how setting this at the web server is going to manifest through a load balancer (F5) and then via a CDN.
For instance, I rather assume that I will want to set the F5 to the same settings, but I’m not sure I really understand what is going on. Maybe I can skip setting it at the web server and only set it at the F5?
So, any primer which might help me out with some expected behaviours before I start grinding through packet captures would be very, very much appreciated…
http://www.cdnplanet.com/blog/tune-tcp-initcwnd-for-optimum-performance/ (scroll down to “Content Delivery Networks and big sites” section)
Basically, if you tune your web server, you only tune the connection between the CDN/F5 and your web server.
To properly improve the user experience, you would need to tune these settings on the first server the user hits directly, which is the load balancer or the CDN.
EDIT: I don’t know exactly how the F5 works, but if it’s an HTTP-level proxy (or even a TCP-level one) my answer is correct. If it operates at an even lower level and acts just like a switch, then tuning only your web server may help… but that’s unlikely.
kinda what I thought.
And, as I know there is infinitesimal latency F5 → web server, and as I believe that CDN → F5 latency is going to be less than browser → CDN latency, I fear I’m not going to see any benefit.
Most CDNs and load balancers also already do some level of init-cwnd blasting (they usually call it TCP acceleration).
Is your JSON also served through the CDN? The TCP 3-second slow-start default mostly kicks in only during socket connect. After that the TCP stack has an estimate of the RTT and will use that as the basis for its algorithms (though the actual RTT doesn’t really matter for slow start; it’s more for retransmit timers).
How big are your JSON responses? Just checking to see whether they are something that should fit within a 10-packet cwnd or not. Odds are it will be difficult to convince a CDN to change their settings (the F5 you might be able to, since it is running on your premises).
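As a rough sanity check on the "fits in the cwnd" question, the first-round-trip budget is just cwnd × MSS (assuming a typical 1460-byte Ethernet MSS; yours may differ):

```shell
mss=1460  # assumed typical Ethernet MSS; confirm yours from a capture or `ss -ti`
for cwnd in 3 10; do
  echo "initcwnd $cwnd => $((cwnd * mss)) bytes in the first round trip"
done
# An ~11 KB (11264-byte) response overflows the older initcwnd of 3 (4380 bytes),
# hence two round trips, but fits comfortably within initcwnd 10 (14600 bytes).
```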
thanks as always.
Static and some dynamic is served through CDN.
Some dynamic is served direct.
The most requested dynamic response served direct is ~11k.
The most requested dynamic response served via CDN is ~10k.
I suppose my question was simply: is each connection in the chain discrete, with its own behaviours, or is there some magic passthrough of connection info that I need to read up on?
And the answer seems that each connection is indeed discrete.
Would you mind elucidating the 3 seconds? I had thought that this was simply the timeout after which an idle connection’s congestion window is reset…
Yes, each connection is completely independent of any other. Sorry, sooooo many timers involved. Yes, 3 seconds of idle time tends to reset the window (though that depends a lot on the OS and implementation).
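If you do end up grinding through traces, one shortcut on Linux is `ss -ti`, which prints the live cwnd (plus rtt and ssthresh) per connection, so you can watch the idle reset happen without a full capture. The destination address here is a placeholder, substitute your actual peer:

```shell
# Show TCP internals (cwnd, rtt, ssthresh) for current connections to a peer;
# 203.0.113.10 is a hypothetical example address.
ss -ti dst 203.0.113.10
```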
Sam Saffron had a pretty good demonstration here: http://samsaffron.com/archive/2012/03/01/why-upgrading-your-linux-kernel-will-make-your-customers-much-happier