@hsiboy,
I’ve run another benchmark, this time at native speed and in Chrome.
http://www.webpagetest.org/result/130702_RJ_2Q4/
I see an initial TTFB of 115ms, down from 178ms. TBH, seeing as this is a multilingual site with tens of thousands of products, I’m pretty happy with this. Whether the cache performance could be improved is another question, but I’m not in the slightest worried by these results.
The static server has 2GB spare for cache usage; network interrupts peak at c. 750/sec, connections at c. 1,750, and CPU (primarily usermode) at c. 25% - there is a staging server on here too…
I’ve updated the nginx config on the static site, upping worker_rlimit_nofile to 100k and increasing the open_file_cache to 20k, but it doesn’t seem to have made an appreciable difference.
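For reference, the relevant directives look something like this - the 100k and 20k figures are mine, but the inactive/valid timings below are illustrative rather than exactly what I’m running:

worker_rlimit_nofile 100000;

http {
    # cache descriptors/metadata for up to 20k open files
    open_file_cache max=20000 inactive=60s;
    open_file_cache_valid 60s;      # re-validate cached entries every 60s
    open_file_cache_min_uses 2;     # only cache files requested at least twice
    open_file_cache_errors on;
}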
The nginx config for the static site is:
location / {
    if ($request_filename ~ "\.(js|jpe?g|css|docx?|gif|png|txt|pdf|swf|ico|mp3|woff)$") {
        expires 30d;
        break;
    }
    return 404;
}
I can’t really see how to pare it down much further.
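The only restructure I can think of is dropping the if in favour of a regex location - a rough, untested sketch, which matches on the URI rather than $request_filename (that should amount to the same thing here):

location ~* "\.(js|jpe?g|css|docx?|gif|png|txt|pdf|swf|ico|mp3|woff)$" {
    expires 30d;
}

location / {
    return 404;
}

Whether that would actually buy anything measurable is another matter.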
@Patrick,
Server platform is multiple VPSes, with nginx talking mainly to remote (LAN) PHP servers. I’ve got to the stage where - once up and running - there is practically zero disk IO: the InnoDB buffer pool and query cache handle 99%+ of DB IO (which is almost all reads, as you’d expect), the PHP servers use c. 50% of available memory even with 1GB APC segments, and sessions and cache are managed through redis data stores. Artur’s presentation was interesting, but I still cringe when people throw away all those decades of development for a new toy… combine the two, and surely you’d gain even more? (But yes, in my case, everything fits into memory.)
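For what it’s worth, the nginx-to-PHP side is nothing exotic - roughly this shape, assuming PHP-FPM listening on 9000 (the LAN addresses here are hypothetical, not my actual ones):

# hypothetical LAN addresses for the remote PHP boxes
upstream php_lan {
    server 192.168.0.11:9000;
    server 192.168.0.12:9000;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php_lan;
}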
So I’ve done pretty much the exact opposite of what you’re suggesting: high-performance IO is precisely what can’t be expected here, so keep it all in memory. Same sort of result, just a bit more fragile…
I was a bit worried about bandwidth - my 5-minute averages are showing 50+Mbit/s - but apparently they’re now rated at 250Mbit/s, so I probably shouldn’t be (: