Please help me with my time to first byte

My website is www.lane247.com. Please check the speed result log.
http://www.webpagetest.org/result/140701_13_MD8/1/details/

It takes around 25 sec. to load. Time to first byte itself is around 15 sec.

I have searched around a lot but to no avail. Please help me reduce the load time. Will be really thankful.

What kind of hosting are you on? It looks like the classic case of really slow shared hosting but it could also be an issue with the site code itself (either the template or a plugin).
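If you want to sanity-check the TTFB yourself outside of WebPageTest, here’s a rough sketch in Python ( the host name is just the one from this thread; point it anywhere ):

```python
# Rough client-side TTFB check: time from sending the request until the
# status line comes back. DNS + TCP setup happen before the clock starts,
# so this isolates server "think time" plus one network round trip.
import http.client
import time

def time_to_first_byte(host, path="/", port=80):
    conn = http.client.HTTPConnection(host, port, timeout=30)
    conn.connect()                 # resolve + TCP handshake, untimed
    start = time.monotonic()
    conn.request("GET", path)
    response = conn.getresponse()  # blocks until the first bytes arrive
    ttfb = time.monotonic() - start
    response.read()
    conn.close()
    return ttfb

# e.g. time_to_first_byte("www.lane247.com") -- a healthy site should
# come back well under a second from a nearby location.
```

It won’t match WPT exactly ( different location, no browser overhead ), but it’s enough to see whether you’re in the 1-second or the 15-second ballpark.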

We are using shared server hosting by justhost.com. We had also tried VPS earlier. Although the overall loading time was reduced to 15 sec, time to first byte continued to be high at 10 sec.

That’s not even close to the result I got running WPT on the same URL. First Byte was actually pretty decent at the time of the test (1.576 sec.).

http://www.webpagetest.org/result/140704_6T_QJ6/

This shows you loading nearly 5 MB (that’s MEGAbytes) in the browser.

Show me a fat site and I’ll show you a slow site, each and every time. It took 55 seconds to load after first byte.

The Flash stuff is 65% of your page load there.

Anton, use your eyes for once. The site you’re looking at is obviously nothing like the original link.

If nothing else, the title

Hacked by people_hurt

and the headers

<meta content="Hacked by people_hurt name=" description'="">

really are a bit of a giveaway.

Lane247, I can configure you a VPS that will work better. Anton’s repeated comments about fat sites completely ignore the fact that the TTFB is unaffected by the rest of the content: the HTML itself is a tiny 24 kB ( remember compression will shrink that by about a further order of magnitude ).

You do have a few too many files… there’s an overhead associated with every one. Halving the number of files ( there’s a concept known as sprites ) will probably knock 5 or so seconds off the load time, especially for clients further away ( server is reported to be in Utah ).
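To put a rough number on that file-count overhead, here’s a back-of-envelope model ( my own crude assumption, not a WebPageTest formula ): with around six parallel connections per host, every “wave” of requests costs roughly one round trip on top of the raw transfer time.

```python
# Crude load-time model (an illustrative assumption, not a WPT formula):
# N files over P parallel connections means ceil(N / P) request "waves",
# each paying roughly one RTT, plus the raw transfer time of the bytes.
def estimated_load_time(n_files, total_bytes, rtt_s, bandwidth_bps, parallel=6):
    request_waves = -(-n_files // parallel)          # ceiling division
    latency_cost = request_waves * rtt_s
    transfer_cost = total_bytes * 8 / bandwidth_bps  # bytes -> bits
    return latency_cost + transfer_cost

# Halving 120 files to 60 on a 150 ms RTT link (5 MB page, 8 Mbps):
before = estimated_load_time(120, 5_000_000, 0.15, 8_000_000)
after = estimated_load_time(60, 5_000_000, 0.15, 8_000_000)
# The transfer cost is unchanged; only the latency waves shrink.
```

The numbers are made up, but the shape is right: the farther away the client ( bigger RTT ), the more each extra file costs, which is why spriting helps distant visitors most.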

However, any work I do on a VPS will primarily address the 14 second TTFB. Although pretty irrelevant ( I can deduce nothing of the infrastructure from the graph: mine’s a Drupal CMS on one of my own VPSes ), here are my site stats.

http://www.webpagetest.org/result/140705_29_3A4/

+1 - the overall size and TTFB have nothing to do with each other (there may be correlation in aggregate but not causation). The Daily Mail is one of my favorite examples where it’s a HUGE site (9+ MB) but it’s actually VERY fast considering (180ms TTFB and a ~4000 Speed Index): http://www.webpagetest.org/result/140705_HG_4ac5a84ab3eba2c2c37fcad3d761606b/

Lane247, unfortunately unless you have development or server tuning experience in house it’s probably going to cost you to get it fixed. You can either throw money at much better hosting (SSD-based VPS) and that may fix it (if it isn’t an egregious code problem), or someone who knows what they are doing is going to have to take a look at the server and the code.

From the outside we don’t really get any insight into what is going into the TTFB. On Shared hosting it’s a LOT harder to diagnose. With a VPS you can install something like New Relic which will pretty quickly tell you where the time is going (and someone will still have to go in and actually fix it).
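To illustrate the idea ( a toy sketch, not how New Relic actually hooks in - it instruments automatically ), the core of server-side profiling is just wrapping the suspect pieces of code and recording where the time goes. The handler functions below are hypothetical stand-ins:

```python
# Toy illustration of what an APM tool automates: time each server-side
# phase of a request and accumulate the results per function.
import time
from functools import wraps

timings = {}

def timed(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            timings[fn.__name__] = timings.get(fn.__name__, 0.0) + elapsed
    return wrapper

# Hypothetical request-handler pieces, just to show the shape:
@timed
def query_database():
    time.sleep(0.05)   # stand-in for a slow, unindexed query

@timed
def render_template():
    time.sleep(0.01)   # stand-in for template rendering

query_database()
render_template()
# Sorting timings.items() by value now shows query_database dominating,
# which is exactly the kind of answer you need before fixing anything.
```

A real APM does this across the whole stack ( DB queries, external calls, framework internals ) without you touching the code, which is why it’s worth installing once you’re on a VPS.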

[quote=“GreenGecko, post:5, topic:8816”]
[size=xx-small]Anton, use your eyes for once. The site you’re looking at is obviously nothing like the original link.

If nothing else, the title

Hacked by people_hurt

and the headers

<meta content="Hacked by people_hurt name=" description'="">

really are a bit of a giveaway.[/size]
[/quote]I never visited the site itself, I only ran the test using IE 10 out of Dulles and this is what that particular browser apparently redirected to. P Meenan might need to have a look at that browser and that computer.[quote][size=xx-small]Anton’s repeated comments about fat sites completely ignore the fact that the TTFB is unaffected by the rest of the content[/size][/quote][quote][size=xx-small]+1 - the overall size and TTFB have nothing to do with each other (there may be correlation in aggregate but not causation).[/size][/quote]Guys - time and time again I have optimized sites that were fat, and after greatly reducing the footprint the TTFB ALWAYS improves. I don’t pretend to explain it, or try to - I simply know it happens, every time.

Reducing the footprint should be step 1 of optimizing a fat site.

[quote][size=xx-small]The Daily Mail is one of my favorite examples where it’s a HUGE site (9+ MB) but it’s actually VERY fast considering[/size][/quote]

You are holding up a really fat site that isn’t even fully optimized and is SLOW as hell to load - 23 seconds on your IE test, which also shows 11+ MB in IE - as a real-world example of what hobbyists and small sites might do? You KNOW the Daily Mail has massive hosting normal people aren’t going to have. Most of the people who come here for help are on cheap shared hosting, pal. You’re holding up a monster machine and saying “look here guys, the TTFB is good on this big one” when it has nothing at all to do with most of what we see here, for people needing help.

If you think my saying “try reducing your site’s footprint” to people needing help with their fat site optimization is bad advice, please let me know.

@Anton, as so many people are repeatedly telling you… TTFB is not affected by the content.

It is the time that the server takes to generate and start to return the HTML skeleton. It is affected by things like server and infrastructure performance and tuning, and server-side code quality: things that a competent admin / DBA / developer with the relevant tools can address on a ( virtual or real ) server.

As this HTML skeleton is usually pretty small, and compression of an ASCII file ( usually of the order of 10x ) makes it even smaller, content download is pretty irrelevant to the TTFB.
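That roughly 10x figure is easy to sanity-check: repetitive HTML markup compresses extremely well, so a page skeleton of a few tens of kB is only a few kB on the wire. A quick demonstration on made-up markup:

```python
# Sanity check of the compression claim: gzip a repetitive HTML-like
# fragment and compare sizes. Real pages vary, but markup is highly
# redundant, so ratios of 10x or more are common.
import gzip

fragment = ("<div class='item'><a href='/product'>A fairly typical, "
            "repetitive markup fragment</a></div>\n")
raw = (fragment * 300).encode("ascii")   # ~27 kB of made-up HTML
packed = gzip.compress(raw)
ratio = len(raw) / len(packed)
```

The exact ratio depends on the page, but the point stands: the compressed HTML skeleton is a rounding error next to a multi-second TTFB.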

Keeping to a decent sized page is a good idea, but with even me out here in the wops having the ability to download content at comfortably over 1MB/sec, it’s much less of a problem than a 10 second TTFB.

The browser downloads static content. It may not even come from the webserver, so how can it affect TTFB? The server has just delivered a load of URLs, no other content. I suppose shortening them as much as possible might shave a couple of milliseconds off the download time, which isn’t even technically in the TTFB - except for the time spent compressing HTML content on the fly, if necessary.

The whole point of these waterfall diagrams is to show you where the delay lies. Please explain how the download of a large file from a remote site can, in any way, affect the TTFB that’s already happened.

You repeatedly make this claim that this is not the case, but nowhere have you shown a shred of evidence. Why not?

I’m sure Patrick will agree with me that numbers aren’t everything, and his tests are in a permanent state of development: whilst getting straight A’s across the scoreboard is nice to see, it doesn’t necessarily reflect the true view of the site as seen by the visitor. It is an (extremely useful!) tool to use as a part of improving website performance.