Of late I’ve seen a lot of talk about “Time To First Byte” (TTFB). In a nutshell, TTFB is supposed to represent the time it takes for a web server to send the first byte of data to the client - usually a web browser. There are a variety of viewpoints regarding its usefulness.
First, I’d say it can be useful. However, it also strikes me as a bit of premature optimization for most. I’ve looked at sites whose owners were complaining about their TTFB and run an analysis on each. In every case I’ve seen so far, none of them had run an actual performance analysis on the site. By that I mean they hadn’t loaded up Chrome, Firefox, or Safari with appropriate analysis tools such as Page Speed to find out what they are doing to make their site slow overall.
It doesn’t matter if your server takes 2 seconds to send the first byte of data if your page layout uses poor HTML, CSS, and JS techniques and the rest of that page takes 12 seconds to load and display. In my opinion, unless your TTFB regularly exceeds, say, 35% of the time it takes to display the page, you are looking in the wrong place first. Run the test linked to above. If you have “High Priority” items, fix those first. On a page that takes 12 seconds to load, whether TTFB is 0.2 seconds or 3 seconds is largely irrelevant.
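To make that 35% rule of thumb concrete, here is a rough sketch - not any script from this post, just an illustration using the standard library - of measuring TTFB against total fetch time for a single page. Note that `urlopen()` returns once the response headers arrive, so reading the first body byte only approximates the true time to first byte:

```python
# Rough sketch: compare TTFB to total fetch time for one URL.
# urlopen() blocks until the response headers arrive, so reading
# the first body byte gives an approximate time-to-first-byte.
import time
import urllib.request

def ttfb_fraction(url):
    """Return (ttfb, total, ttfb/total) for a single GET of url."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1)                 # first byte of the body
        ttfb = time.monotonic() - start
        resp.read()                  # drain the rest of the page
        total = time.monotonic() - start
    return ttfb, total, ttfb / total
```

If that last number regularly exceeds 0.35, the server side is worth a look; otherwise, a front-end analysis of the page itself is the better first step.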
As I was composing this post I ran a few tests through Page Speed on this very site. With several items in the red (i.e. High Priority items to fix), they run an optimized test to see the difference. Making those suggested changes, their test results say, would result in a decrease in my TTFB.
There are sites that will test TTFB for you in a variety of browsers and over a variety of connection types. That is one method, and a simple one, but it is only a snapshot. I was looking for something better, so I wrote a Python script that concurrently pulls down a set of pages on this site every thirty seconds and stores the results in a Redis instance.
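The shape of such a script might look like the following sketch. To be clear, this is an assumption of how it could be structured, not the actual script behind the heatmap below; the URLs, Redis key names, and thirty-second interval are illustrative:

```python
# Sketch of a TTFB poller: every poll_interval seconds, fetch a set
# of URLs concurrently, time each response's first byte, and record
# the samples in Redis sorted sets keyed by URL, scored by timestamp.
import time
from concurrent.futures import ThreadPoolExecutor
import urllib.request

try:
    import redis  # pip install redis; only needed to actually store samples
except ImportError:
    redis = None

# Illustrative URLs - the plain/PHP page names are hypothetical.
URLS = [
    "http://www.iamtherealbill.com/",             # WordPress home page
    "http://www.iamtherealbill.com/plain.html",   # short static HTML page
    "http://www.iamtherealbill.com/plain.php",    # same page via the PHP path
]

def measure_ttfb(url):
    """Seconds from request start until the first body byte arrives."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1)
    return time.monotonic() - start

def sample(r):
    """Fetch all URLs concurrently and store one sample per URL."""
    now = int(time.time())
    with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
        for url, ttfb in zip(URLS, pool.map(measure_ttfb, URLS)):
            # one sorted set per URL; member encodes the reading
            r.zadd("ttfb:" + url, {f"{now}:{ttfb:.4f}": now})

def main(poll_interval=30):
    r = redis.Redis()  # assumes a Redis instance on localhost
    while True:
        sample(r)
        time.sleep(poll_interval)
```

Sorted sets scored by timestamp make it cheap to pull a time window back out (`ZRANGEBYSCORE`) when rendering the heatmap.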
To display that data, I then wrote a Flask application to generate a heatmap of the results. I pull three pages. The first is the home page of this very site. The second is a very short plain HTML page, and the third is a copy of that HTML page renamed to .php to trigger the PHP parser path in Apache.
I chose these three tests because I wanted to see the difference WordPress makes. I see a lot of complaints about WP, and many people blaming WP for high TTFB. It is certainly possible, given WP is DB-dominated.
[caption id="attachment_119" align="alignleft" width="702"] A Heatmap of TTFB for www.iamtherealbill.com[/caption]
We see some interesting bits in this heatmap. First, we can see the top row, the WP part, is ridiculously bad and inconsistent. In this map, red happens at 2 seconds. It is interesting, and unfortunate, that a request for a simple HTML page should take more than 2s, and we see that a few times. Note the red stripes: since each represents the same moment for a pull from each URL, we can conclude that at those instants the server was, shall we say, less than ideal in its response.
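For what it’s worth, the core of rendering a map like this is just turning each TTFB sample into a cell color. Here is a minimal sketch of such a scale; beyond “red at 2 seconds,” which the text above states, the green-yellow-red ramp and the function itself are my assumptions, not the actual Flask code:

```python
# Sketch of a TTFB-to-color scale for a heatmap cell: values are
# clamped to a ceiling (2 seconds, matching the map above) and
# ramped from green through yellow to red.
def ttfb_color(ttfb, ceiling=2.0):
    """Map a TTFB in seconds to an RGB hex string; red at the ceiling."""
    frac = min(max(ttfb / ceiling, 0.0), 1.0)
    if frac < 0.5:
        # green -> yellow: ramp the red channel up
        red, green = int(255 * frac * 2), 255
    else:
        # yellow -> red: ramp the green channel down
        red, green = 255, int(255 * (1.0 - frac) * 2)
    return f"#{red:02x}{green:02x}00"
```

With a scale like this, a fast static page renders as a steady green band while the WP row shows exactly the kind of mottled yellow-and-red pattern described here.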
That is certainly something that needs to be addressed at some point. However, the performance pattern of WordPress in this scenario is atrocious, and IMO it is what we should really be looking at. Sure, a variety of modules can alter that time, but this site uses very few modules that are not built in, and uses them sparingly as well. Clearly the WP install - which is up to date and on default settings - is lacking. This is on the WP software.
BTW, I’ll be turning the Python TTFB script into a small library and script and uploading it to GitHub, probably in the next week or so, in case anyone else would find it useful.