Abnormal TTFB (Time to First Byte) for static files

I have a dedicated server (i3-540 + 8GB RAM + 1TB HDD in RAID) with CentOS 6.6 + Nginx 1.8.0 (:80) and Apache (:8080).

I run a couple of websites on this server, and TTFB is OK for HTML/JS/CSS. Especially when it comes from cache.

However, when I try to open a 500KB .gif image directly, it has a very high TTFB - consistently around 1.5-1.75s.
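For reference, curl's timing variables give a reproducible TTFB measurement. A minimal sketch - the throwaway local server and empty stand-in file below exist only so the commands run end-to-end; point curl at the real image URL instead:

```shell
# Spin up a throwaway local web server as a stand-in for the real site.
cd "$(mktemp -d)"
touch foo.gif                      # empty placeholder for the 500KB gif
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1
# time_starttransfer is the TTFB; time_total includes the body transfer.
curl -s -o /dev/null \
     -w 'ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
     http://127.0.0.1:8123/foo.gif | tee /tmp/ttfb_demo.txt
kill $SRV
```

Running this against the real URL from several networks makes it easy to see whether the 1.5s shows up everywhere or only from some locations.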

First I tried to optimize my Nginx config, but then I opened the same image on Apache's :8080 port and got the same delay. So the problem is not in Nginx.

My server has almost entirely free bandwidth (1 of 100 Mbit/s in use), low I/O wait times (around 0.2-0.5 ms) and a low load average (0.05-0.2).

So why do I get this huge delay for a static file?

P.S.: I actually have a much more powerful server in another data center of the same hosting provider, and it doesn't show such a big TTFB. It has the same OS and the same Nginx/Apache configs.

There could be a number of potential reasons:

A) The file could sit on a corrupted part of the drive, and the server is having to correct the corrupted data on the fly, ergo the longer-than-normal TTFB.
Solution: Overwrite the file using FTP or your favorite deployment method (git etc.)

B) Is the GIF file dynamically generated (or do you have other dynamically generated GIF files)? With certain .htaccess/Apache config rules you can get the PHP parser to parse other file formats (such as images).
Solution: Check for anything that may look like:
<FilesMatch "\.(gif)$">
    SetHandler application/x-httpd-php
</FilesMatch>
or a bare handler mapping such as:
AddHandler application/x-httpd-php .gif
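A quick way to check is to grep the Apache config and docroot for handler rules like that. The paths below are examples only - a demo .htaccess is created so the command runs as-is; on the server you would point grep at /etc/httpd and the real docroot:

```shell
# Create a demo docroot containing an offending rule, then scan for it.
mkdir -p /tmp/demo_docroot
printf 'AddHandler application/x-httpd-php .gif\n' > /tmp/demo_docroot/.htaccess
# Recursive search with file:line locations; in real life, scan your
# Apache config directory and docroot instead of /tmp/demo_docroot.
grep -Rn 'x-httpd-php' /tmp/demo_docroot
```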

C) There could be a delay introduced by a router or switch inside the datacenter that may be misbehaving.
Solution: Run a traceroute/tracert (Linux/Windows respectively) from the command line and look for dropped packets, abnormally long ping times, etc.
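mtr combines traceroute and ping into one per-hop loss/latency report. A sketch - the hostname is a placeholder, and a plain ping against loopback is included as a runnable fallback:

```shell
# Per-hop packet loss and latency in one report (hostname is a placeholder):
#   mtr --report --report-cycles 100 example.com
# A quick loss/latency summary with plain ping (loopback so it runs offline):
ping -c 3 127.0.0.1 | tail -2 | tee /tmp/ping_demo.txt
```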

Those are the only things that I can presently think of that may be causing these issues.

Hi,

Thanks for your reply. Here are my answers:

A) I tried it with many files, with no luck.
B) GIFs are just static files.
C) I ran MTR tests and got 1-2% packet loss between the EU and the USA (somewhere on a hop in Germany). However, my hosting provider said that can't be the cause of the problem.

Are all the static files being served through the cache?

The reason I ask is that the cache may be set up not to cache files over a certain size, so you're hitting the cache, it's saying "it's not here - check origin", and then your browser is fetching the file from the origin server.
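One way to verify is to look at the response headers: most caches mark hits and misses (e.g. an Age header from an HTTP cache, or an X-Cache: HIT/MISS style header from many proxies and CDNs - the exact header names depend on your setup). A sketch, with a throwaway local server standing in for the real URL:

```shell
# Fetch headers only; on a cached response look for Age / X-Cache style markers.
cd "$(mktemp -d)"
touch foo.gif                      # stand-in for the real image
python3 -m http.server 8124 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1
curl -sI http://127.0.0.1:8124/foo.gif | tee /tmp/cachehdr_demo.txt
kill $SRV
```

Compare the headers for a small cached asset against the slow gif - if the markers differ, the size cutoff theory holds.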

Best to determine the actual reason why.

Here's how.

First, you must test real speed - meaning speed on your server itself. To do this…

  1. log in to your server using ssh

  2. cd to the root directory of your site

  3. create an empty file - touch foo.txt

  4. test the serving speed of this file with something like this…

net3# ab -k -t 60 -n 10000000 -c 5 http://RealMetalHosting.com/foo.txt | egrep '^(Failed|Requests)'
Completed 1000000 requests
Completed 2000000 requests
Finished 2957115 requests
Failed requests: 0
Requests per second: 49285.25 [#/sec] (mean)

This report is from a marginally tuned server.

Notice up to 10,000,000 requests were allowed (-n), capped at 60 seconds (-t), with just under 3,000,000 completing in that window.

All requests succeeded (Failed requests == 0).

Throughput is a measly 49285.25 reqs/sec, which means this server requires some additional tuning.

If this number is low, then start by tuning your OS - TCP + file systems + Apache.

If this number is high, then ensure your ethernet adapter is correctly configured + working, using ethtool + some bandwidth tester like nttcp running on two adjacent machines or two ethernet adapters on the same machine.
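For the adapter check, something like the following (eth0 is a placeholder for your NIC; the ethtool lines are shown as comments since they need a real interface and root):

```shell
# Negotiated speed/duplex - a card stuck at 10/half or 100/half explains a lot:
#   ethtool eth0 | egrep 'Speed|Duplex'
# Error/drop counters on the NIC:
#   ethtool -S eth0 | egrep -i 'err|drop'
# List interfaces first if unsure of the name (works anywhere with iproute2):
ip link show | tee /tmp/iface_demo.txt
```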

If that all checks out, then move on to mtr + track down the point of slowdown.
[hr]
The point of this approach is to make sure your server actually has the potential to serve files fast before you start debugging other factors.