Better Websites Through Deconstruction

Look around the web for advice on making your website better and you will find no end of articles, many of them contradictory. Look more specifically at Wordpress and things don't change much. What does change are the strategies - mostly caching, and working around terrible plug-ins and themes which seem to flee from performance-minded markup as from a plague.

What you don't see is a challenge to the fundamental way WP, and to a similar extent most publishing platforms, handles the basic web page. But what if we threw away the modern web page concept? Are there benefits to be had? Is there a way of making web pages which conserves bandwidth, CPU cycles, and DB queries while cutting latency? I think so. And the answer may be a bit surprising.

A Tasty Analogy

In food, deconstruction is a technique used to replicate the essence of a dish without it being the same dish. For example, you might deconstruct a deviled egg and have various pieces on the plate that, when combined, reproduce the flavor of a deviled egg. In order to deconstruct a dish you must learn what the essence of the dish is. What if we apply this same concept to web pages?

Let us treat a web page as a dish and deconstruct it, reduce it to its essences. Consider this very blog page. What are the essential ingredients of this page, even this site?

First, we have the “header”, the area at the top which essentially tells you where you are. Often, and in this case, the header will contain a navigation bar or menu of links. Off to the right we have various components such as a search box and several “widgets” which contain mostly static and common elements for the site: a set of links, a tag cloud, an archive drop-down, etc.

Then we have the main body, this text, followed by a footer which again is the same across the site. That we have such a layout is not surprising; it is a pattern shared across nearly every site. But how do we generate this page? The data driving each section is pulled from a database and merged with the templates to produce what you see. The end result is a dish, er, page, served up to you.

One place we diverge from cooking is that this page is rendered and served up by a single process for the entire plate. Not even fast food places do this with their food. At a restaurant each component on the plate is prepared and placed by someone else, then brought to you by a waiter. Think of the waiter as your browser, with the web server doing all of the rest. But what happens when we create the page the way a chef creates a dish?

A Bit of Internet History

Way back in ye olde days of the Internet, we did things a bit differently for a short time. Some refer to those days and ways as a veritable dark age. Let us instead call them The Age of The Frameset.

Ah yes, my fellow old-hands are now rolling their eyes and/or wondering why they forgot about this age. For the newcomers among you, I'll explain what framesets were and why we used them.

A frameset was a very small page which divided the screen into frames - like panes of a window. Each frame had a different web page as its source. It was common in these setups to use a header frame, a navigation frame off to one side, a main content frame, and often a footer frame. Sound familiar? Yes, this is the ancestry of today's template page systems.

Through clever naming of frames in your frameset and link targeting, you could load the rarely-if-ever changing header and footer once and only once, while main content links changed the main content frame and the occasional change of site context changed the navigation frame. The idea was simple and somewhat elegant, if limited. It made for a much better experience for the user, who was often on dial-up access (for the kids: slower than your phone without WiFi or 3-4G). It was an era where even a kilobyte mattered, and mattered a lot.

Now we have the era of faster systems and better bandwidth as well as browser caching. Browser caching, when used properly, is extra sweet goodness because it can avoid useless re-pulls of the same content. But now we are hitting a different bottleneck: server load time.

Today’s Bottleneck: The Server

Today we use site themes, caching, and javascript to essentially simulate ye olde frameset. We even have iframes. But we still generate this data serially on the server. We still generate the links from the DB, select the components and then build them into a complete page before sending the data to the browser. What I am asking is: do we have to?

By doing this serially we can't really take advantage of modern computing hardware. Modern servers are multi-core. Web page generation is still essentially single-core. Sure, we have a DB server and a web server, but the web server waits for the DB server for each dynamic component in the template.
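To make that concrete, here is a minimal sketch in TypeScript of how a typical serial templating pass behaves. The component names and latencies are made up for illustration; each stands in for a template piece backed by its own database query.

    // Minimal sketch of the serial bottleneck: nothing can be sent to the
    // browser until every component's query has completed in turn.
    const delay = (ms: number, html: string) =>
      new Promise<string>((resolve) => setTimeout(() => resolve(html), ms));

    // Hypothetical components, each backed by its own DB query.
    const renderHeader   = () => delay(10, '<header>…</header>');
    const renderTagCloud = () => delay(120, '<div class="tag-cloud">…</div>');
    const renderArticle  = () => delay(30, '<article>…</article>');
    const renderFooter   = () => delay(5, '<footer>…</footer>');

    async function renderPageSerially(): Promise<string> {
      const header = await renderHeader();   // 10 ms
      const cloud  = await renderTagCloud(); // + 120 ms
      const body   = await renderArticle();  // + 30 ms
      const footer = await renderFooter();   // + 5 ms
      // Roughly 165 ms pass before a single byte reaches the browser.
      return header + cloud + body + footer;
    }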

Hopefully, some of you have put these two ideas together and anticipate the direction I am heading with this. And you’re probably close. It is now time to deconstruct the page.

Deconstructing The Web Page

Consider for a moment the idea of this page as a frameset. We would have the header and footer frames, the main content frame, and a frame to the right containing, probably, nested frames with one for each widget. With that we would be able to grab the frameset very quickly, then load each frame as it became available. Now, instead of one request for the body which waits for each widget to query the DB, turn the results into HTML and javascript, and then combine them all into a big page, each piece can happen independently - or asynchronously, if you will. In the end we have deconstructed the page and produced a situation known as “embarrassingly parallel”.
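As a rough sketch of what that deconstruction could look like (the component paths and element ids here are purely illustrative, not an existing API), the shell page carries empty placeholders and a few lines of script fetch each component URL independently, filling in whichever piece arrives first:

    // The shell page contains empty placeholder elements; each component is
    // its own URL, fetched independently and rendered as soon as it arrives.
    const components: Record<string, string> = {
      'site-header': '/components/header',
      'tag-cloud':   '/components/tag-cloud',
      'article':     '/components/article?id=123',
      'site-footer': '/components/footer',
    };

    // Fire every request at once; the browser assembles the plate.
    for (const [elementId, url] of Object.entries(components)) {
      fetch(url)
        .then((res) => res.text())
        .then((html) => {
          const slot = document.getElementById(elementId);
          if (slot) slot.innerHTML = html;
        });
    }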

Despite the moniker, embarrassingly parallel is a good place to be. It means we can take advantage of multiple requests and multiple cores, and let whatever is fast reach the viewer's browser as quickly as possible. This leads to more benefits I wish to elucidate prior to getting into the details of how to do it.

Benefit: Isolation For Testing

By isolating each part, the faster parts load faster. While this sounds obvious, and should be, it has deep implications. The first we encounter is the ability to visually identify the poorly performing pieces. This will happen naturally, as you'll see what takes the longest to load.

However, once we grok this effect, we realize we can now performance test each component specifically and directly. This is of tremendous benefit. Maybe it is your tag cloud generation which holds up the page. By requesting the HTML for the cloud (or the data, if it is built via AJAX calls) you can profile that specific piece and find out. Then you can make changes, hopefully optimizations, to that piece and improve the overall response.

Even better, you can now build a performance profiling test suite which pulls each component of the page individually, giving you the ability to performance QA your site (or its codebase) in a more fine-grained and repeatable fashion. It can even be automated. Rather than benchmarking the entire page or site, benchmark the critical, individual components which produce the site.
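Such a suite can be tiny. Here is a sketch, assuming the component URLs shown are stand-ins for your own, that simply times each URL on its own:

    // Time each component URL individually so the slow pieces are measured
    // directly rather than inferred from whole-page load times.
    async function timeComponent(url: string): Promise<number> {
      const start = performance.now();
      await fetch(url);
      return performance.now() - start;
    }

    async function profileComponents(urls: string[]): Promise<void> {
      for (const url of urls) {
        const ms = await timeComponent(url);
        console.log(`${url}: ${ms.toFixed(1)} ms`);
      }
    }

    profileComponents([
      '/components/header',
      '/components/tag-cloud',
      '/components/latest-posts',
      '/components/footer',
    ]);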

Benefit: Isolating Upgrades

When components of the page are loaded individually, they can be isolated for individual upgrades or changes. Say you want to change the way your tag cloud is generated. You change the code behind that component's URL and move on to testing, then deployment. This is actually different from Wordpress' plugins. In the plug-in system each plugin has to be evaluated and changed at the page level, meaning a change can easily affect other parts of the page as well as the entire page's performance.

Benefit: Per-page Component Changes

Another benefit of this model is the ability for components to be customized per page. Take a tag cloud component as an example. By turning it into a callable URL you can have it take parameters. These parameters can then change the display or scope of the tag cloud. Perhaps you want it to display only tags used in the category being displayed when viewing a category index page. By making the component a callable URL you get this relatively easily. The same trick works for navigational components, customizing the navigation choices a widget presents.
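For illustration only (the parameter name and path are assumptions, not an existing API), the calling page simply builds the component URL with whatever scope it wants:

    // Build the tag-cloud component URL, optionally scoped to one category.
    function tagCloudUrl(category?: string): string {
      const url = new URL('/components/tag-cloud', 'https://example.com');
      if (category) url.searchParams.set('category', category);
      return url.toString();
    }

    console.log(tagCloudUrl());          // …/components/tag-cloud
    console.log(tagCloudUrl('recipes')); // …/components/tag-cloud?category=recipes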

Benefit: Component Scalability

Now a most excellent benefit: scaling at a component level. Nothing in this model mandates that the callable URL point at the same server as the original page. This means you could put compute-heavy widgets on a server built for processing speed and offload memory-hungry ones to memory-rich servers. This leads us into what may be, from the user's point of view (and your resource usage), the largest benefit: data locality.
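A hypothetical routing table is enough to sketch the idea; the host names are placeholders and nothing about it is specific to any platform:

    // Map each component to the pool of servers best suited to it.
    const componentOrigins: Record<string, string> = {
      'tag-cloud': 'https://compute.example.com', // CPU-heavy generation
      'search':    'https://memory.example.com',  // large in-memory indexes
      'header':    'https://static.example.com',  // rarely changes
    };

    function componentUrl(name: string, path: string): string {
      const origin = componentOrigins[name] ?? 'https://www.example.com';
      return new URL(path, origin).toString();
    }

    console.log(componentUrl('tag-cloud', '/components/tag-cloud'));
    // -> https://compute.example.com/components/tag-cloud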

Benefit: Cache Tuning 

Deconstructing many websites reveals that the vast majority (90% or more) of the page is entirely cacheable on its own. The problem lies in the remaining 5-10%. As a result, we wind up caching the entire page for very short periods of time and/or relying on DB caches or object caches, then a complicated mechanism to figure out how often the entire page needs to be regenerated or, even worse, periodically purging the entire cache.

By breaking the components of a page back into callable URLs you can isolate and customize the caching controls for nearly all of the page. Consider this page. Of its components, only the tag cloud is truly dynamic. In order to achieve better performance I have left many widgets and plugins out of the site, because they would essentially invalidate the caching of the page.

If each widget and component were a callable URL I could set caching appropriately. A widget showing the latest posts could be cached with a short TTL, the tag cloud with a longer TTL, the header and footer with a very long TTL, the article itself with a very long TTL, and so on. Thus, when switching between articles, the only thing your browser would be pulling down would be the main content - and it would likely be cached somewhere along the path.
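As a sketch of what that tuning might look like (the TTL values and paths are illustrative only, not recommendations), each component handler would emit its own Cache-Control header:

    // Per-component TTLs in seconds; each handler sets its own Cache-Control.
    const componentTtl: Record<string, number> = {
      '/components/header':       60 * 60 * 24 * 30, // very long: rarely changes
      '/components/footer':       60 * 60 * 24 * 30,
      '/components/article':      60 * 60 * 24 * 7,  // long: edits are rare
      '/components/tag-cloud':    60 * 60,            // medium: changes as tags do
      '/components/latest-posts': 60 * 5,             // short: new posts show quickly
    };

    function cacheControlFor(path: string): string {
      const ttl = componentTtl[path] ?? 0;
      return ttl > 0 ? `public, max-age=${ttl}` : 'no-cache';
    }

    console.log(cacheControlFor('/components/tag-cloud')); // public, max-age=3600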

That is a benefit for your reader as well as your hosting provider, the series of tubes, and your pocketbook. Think of it like a CDN extended beyond media files. The difference is that much of the textual portion of your web pages becomes cacheable as well.

One final note on caching components. Since each has its own TTL, each component can be purged individually. This extends the life of the other cacheable components: why invalidate an entire page because one component making up less than 5% of it needs to be refreshed? Today we do, only because we are still using a system designed for a smaller scale than we are operating at.

The Web Page In The Cloud

Essentially, the process I am describing is the same process we use at the software level to make software “cloud-ready”. We make it embarrassingly parallel and in so doing make it scalable, cacheable, testable, and more robust. One could argue I am proposing we turn web pages into web applications, and that isn't a stretch. After all, is not a blog or other web publishing system essentially an application?

One might also call this model “web widgets as a service”. Yet this, too, harkens back to the Internet of old. Once upon a time in an electron field long ago you would use CGI scripts which were not housed with your web server. I think some are probably still around. But the basic model is still viable, just in a different way.

We see it creeping in a little bit with comment systems. Consider Disqus, which essentially takes the comments portion of a blog and moves it off as a separate service. While we don't have to go to that extreme, we can apply the same idea internally, within the page.

The Catch: How To Do It

So, am I in the end proposing we return to the land of framesets and frames? No, not at all. For all the good they did, frames have their own issues. They don't flow into a cohesive page as part of a single DOM. But if we merge the old concept with newer advances in web development I believe we can forge a new system which combines the best of both worlds. We can achieve the components (frames) idea and its benefits without losing the benefits of the modern Document Object Model.

Now is where the rubber meets the road, or where it will. I'm working on a back end to prove the concept as part of a “publishing system”. Then I will throw in the front end and call it something along the lines of a cloud blog platform or similar. The idea is to take the above verbiage and make a scalable platform which tosses out the single-server notion and replaces it with more of a UNIX mentality.

But this article is already long enough, so that will be a follow-on post, hopefully in the next couple of weeks. Therein I will explain, at a more functional level, how I think this can be achieved with existing infrastructure. In the meantime, reread this from time to time to see what ideas it sparks in your mind, and how you can alter your code and page design to be more embarrassingly parallel and less serial.