keepalive question

Hello,

Our test results showed that keep-alive was not enabled:
http://www.webpagetest.org/result/140220_8R_1b5d6f0d4b97bb2c9a5bcdbd32206a93/

However, on our nginx server, keepalive IS enabled. If you inspect the headers on WPT, it actually does say:

GET /skin/frontend/responsive/default/css/styles.css HTTP/1.1
Host: skin.myspicesage.com
Connection: keep-alive
Accept: text/css,*/*;q=0.1

I don’t think it’s an nginx setting as I have the same settings on a different machine and I’m getting all As:
http://www.webpagetest.org/result/140208_J3_8bde43a07f0135fdeb92ab9b0d72e756/

Can anyone help? Thanks in advance

I also see keep-alive headers, but I did not dig too deep into them. Can you post what your keep-alive values are? Are you using Apache?

Timeout?
MaxKeepAliveRequests?

You’re looking at the request header while you should be looking at the response header. You’re right in that the request header says “Connection: keep-alive”, i.e. the browser would like to keep the connection alive, but the response header shows that the server refuses this with “Connection: close”. You should probably take another look at your nginx configuration.
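To make the distinction concrete: keep-alive is negotiated, so the request header only expresses what the client wants, and the response decides. A minimal stdlib sketch (the local test server here just stands in for nginx; it is not your actual setup) shows that when the server honours keep-alive, two requests can reuse one connection:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class KeepAliveHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 responses keep the connection open by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), KeepAliveHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Two requests over ONE client connection: this only works if the *server's*
# side of the negotiation allows keep-alive, regardless of what the request asked.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
statuses = []
for _ in range(2):
    conn.request("GET", "/")  # http.client sends an HTTP/1.1 request (keep-alive implied)
    resp = conn.getresponse()
    resp.read()
    statuses.append(resp.status)
    # A "Connection: close" header here would mean the server refused keep-alive.
    print(resp.status, resp.getheader("Connection", "keep-alive (implied by HTTP/1.1)"))

conn.close()
server.shutdown()
```

If the server were closing connections, the second request on the same `conn` would fail instead of returning 200.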

I’m using nginx. I’ll post the settings in another reply. Thanks.
[hr]
Robzilla, bastester, thanks.

Robzilla, I see your point about the response header, and I see the difference between the two tests. What's strange is that the same nginx configuration below yielded an A on one machine and an F on the other. Both systems use Varnish + nginx (same settings). Could it be the load balancer settings? (The A machine has no load balancer; the F machine does.)

Thank you.

nginx settings:
user nginx;
worker_processes 4;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    autoindex off;

    map $scheme $fastcgi_https { ## Detect when HTTPS is used
        default off;
        https on;
    }

    keepalive_timeout 10;

    gzip on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    # Load config files from the /etc/nginx/conf.d directory
    # The default server is in conf.d/default.conf
    # (specific configuration files snipped)
}

Yes, your users talk to your load balancer listening on port 80/443, which in turn talks to nginx via a connection separate from the first. Because the load balancer manages all connections with the public, nginx’s keepalive setting never enters the picture.

In fact, in an nginx + Varnish setup I would expect Varnish to handle keepalives; whichever application faces the public decides.
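This is easy to demonstrate with another stdlib sketch (a toy stand-in, not your actual stack): here a tiny HTTP/1.0 server plays the role of a load balancer that closes every connection. Whatever a backend behind it is configured to do, the client only ever sees the front machine's decision:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ClosingFrontEnd(BaseHTTPRequestHandler):
    # Stands in for a load balancer that drops each connection after one
    # request: HTTP/1.0 responses close by default.
    protocol_version = "HTTP/1.0"

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

front = HTTPServer(("127.0.0.1", 0), ClosingFrontEnd)
threading.Thread(target=front.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", front.server_port)
conn.request("GET", "/")  # the client still *asks* for keep-alive
resp = conn.getresponse()
resp.read()
# will_close reflects what the public-facing machine decided; an nginx
# keepalive_timeout on a server behind it never reaches the client.
print(resp.will_close)
front.shutdown()
```

So a test like WPT grading the F machine is really grading the load balancer, which is consistent with the A machine (no load balancer) passing with the identical nginx config.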

Just rounding out this thread: it was indeed a load balancer setting, and it has since been resolved. Thanks to those who posted and helped.