Speed as a Feature, Part 2: Frontend Performance

This is Part 2 in my series on making speed a feature of your application. If you missed Part 1, you can catch up here. Don’t miss the slides that accompanied this talk at the San Diego CTO Forum.

As a community, we have spent a considerable amount of time working on the performance of our applications from the backend. But truth be told, a single request spends the majority of its time on the frontend of the application. Don’t believe me? Let’s take a look at the chart below.

New Relic Request Overview

This chart shows the overall request time from a large eCommerce company. (Shown with their permission, of course.) It has an average total response time of 3.3s. The first two layers, in blue and yellow, are frontend rendering times. The third layer, in brown, is time spent in the network. And the last layer, shown in purple, is the application server. The app server time includes everything from database calls through template rendering, yet only takes an average of 112ms. So, out of the entire 3,300ms request, only about 3.4% is spent within the application server. Granted, this is only the case because the application has been finely tuned, has good caching in place, and the database has proper eager loading and indexes. But it does show that the biggest opportunities for improvement often lie outside the application layer. In fact, most consumer web applications spend 60–80% of their time in the frontend, and that’s where we’ll spend our time in this post.

Dependence on External Services
You likely have at least one external service running on your frontend right now, probably more. Between analytics tools, advertising networks, and social widgets, your load times are being stretched by systems over which you have no control. Loading external services can add over 900ms to your page load times, and if those services make synchronous calls, they can block your page load entirely when they respond slowly. Many external services now offer asynchronous integrations, but not all do, so be sure to upgrade to the async offering as soon as one becomes available.
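As a sketch of what that upgrade looks like, the `async` attribute tells the browser to download a script in parallel with parsing instead of blocking on it (the URL below is purely illustrative):

```html
<!-- async: downloads in parallel with parsing and never blocks rendering -->
<script async src="https://widgets.example.com/tracker.js"></script>

<!-- the classic blocking include: a slow response here stalls everything after it -->
<script src="https://widgets.example.com/tracker.js"></script>
```

If a vendor publishes an async snippet, it will typically look like the first form; the second form is the one to hunt down and replace.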

Keynote took a look at performance data during a Facebook outage in late May. They found that even companies like CNN and Expedia have coupled themselves too tightly to external services. Take a look at the chart below.

Keynote Facebook Outage

You’ll see that as Facebook’s response time (the magenta line) spikes, so do the response times for CNN and Expedia pages that contain a Facebook widget. Contrast this with USA Today (the cyan line), whose response times remained steady throughout the outage. USA Today constructed its pages with proper circuit breakers, preventing a third-party service from interrupting the loading of its own pages.

Simple Truth About Facebook

If your page is the victim of an external service failure, you’ll likely end up with a white page of death (a page that fails to render entirely). Studies have shown that users will abandon a page after waiting three seconds for it to render. Failures from external services can be mitigated by calling them asynchronously or by applying the circuit-breaker pattern to the integration.
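The circuit-breaker idea can be sketched in a few lines of JavaScript. This is a hypothetical implementation (not how USA Today actually does it): after a configurable number of consecutive failures or timeouts, the "circuit" opens and further calls are skipped outright, so the rest of the page renders without the widget.

```javascript
// Hypothetical circuit-breaker sketch for a third-party call.
function createCircuitBreaker(fn, { threshold = 3, timeoutMs = 2000 } = {}) {
  let failures = 0;
  return async function guarded(...args) {
    if (failures >= threshold) {
      // Circuit is open: skip the call entirely and render without the widget.
      return { ok: false, reason: 'circuit-open' };
    }
    try {
      // Race the third-party call against a timeout so it can never hang us.
      const result = await Promise.race([
        fn(...args),
        new Promise((_, reject) =>
          setTimeout(() => reject(new Error('timeout')), timeoutMs)
        ),
      ]);
      failures = 0; // a success closes the circuit again
      return { ok: true, result };
    } catch (err) {
      failures += 1;
      return { ok: false, reason: err.message };
    }
  };
}
```

Wrap each third-party integration in a breaker like this, and a flaky widget degrades gracefully instead of taking the page down with it.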

CSS Is Your New Worst Enemy
You may not realize it, but your ever-growing collection of CSS styles may be dragging down your page load time. Stoyan Stefanov has done excellent research into the effects of CSS on the critical path. He shows that browsers will block rendering (showing the user a white page of death) until all screen CSS has been downloaded. Most browsers, with the exception of Opera and WebKit, block rendering until all other stylesheets have been downloaded as well, even the ones that aren’t used!

The key is to become friends with your CSS. Make sure you are only loading what is needed, prune and clean your stylesheets regularly, and send your stylesheets in the smallest form possible.

And while it may seem easiest to just put your print stylesheet in your global header and call it a day, remember that you’re pushing those styles to everyone, regardless of whether they’re ever used. If you can, have a separate layout for printed content that loads the print stylesheet. Mobile users are the fastest-growing segment of web traffic, and their bandwidth is at a premium. Consider excluding the print stylesheet when serving pages to those users, since it is highly unlikely they’ll be printing from their device (sorry, AirPrint).
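If a separate layout is more than you can take on, a lighter option is the `media` attribute (the path below is illustrative). Browsers won’t block on-screen rendering waiting for a stylesheet whose media query doesn’t match, though they may still fetch it at low priority:

```html
<!-- applied only when printing; does not block on-screen rendering -->
<link rel="stylesheet" media="print" href="/stylesheets/print.css">
```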

Your CSS likely grows in proportion to your site: when you add new features and pages, you’re probably adding more styles. However, the reverse is unlikely to hold true. If you remove a feature, its styles tend to hang around. Unused styles have two costs. First, you’re transferring unnecessary content to the browser, taking up precious network time. Second, and more importantly, the browser’s CSS engine takes each selector and searches the HTML document tree for a match; if none is found, the style is discarded. When your stylesheet is filled with unused selectors, the parser wastes time looking for matches that will never come. Again, this hits mobile users with limited CPU especially hard. Good organization of your stylesheets makes them easy to refactor when features change, and tools like Sass, LESS, and the Rails asset pipeline (or its equivalent in other frameworks) can really help with that organization.
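One way to keep styles easy to prune is to mirror features in Sass partials, so removing a feature means deleting a single import. A sketch, with made-up file names:

```scss
// application.scss — one partial per feature; deleting a feature deletes one line
@import "base";
@import "features/cart";
@import "features/checkout";
// @import "features/wishlist";  // feature removed — its styles went with it
```

When styles are grouped this way, the question "can we delete this?" is answered by the file tree instead of an archaeology session through one giant stylesheet.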

Smaller is Faster
While unused selectors waste CPU cycles, they also waste network time. Once you have cleaned up your stylesheets, you’ll want to make them as small as possible before sending them to the client. All CSS should be minified and compressed before it is sent to the browser. Tools like Sass and Django Compressor can take the heavy lifting out of this otherwise clunky process. Combine minified stylesheets with server-side compression and you’ll have a lean style-delivery machine.

Gzip compression can save up to 40% of the space and comes basically for free in Nginx and Apache. To add gzip compression to your Nginx config, simply add the following directives:

[sourcecode language="nginx"]
http {
    gzip on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
}
[/sourcecode]

Notice that we’ve disabled compression for browsers that cannot receive gzip’ed assets, such as IE 6. There are many more settings you can tweak for your server, so be sure to read the full Gzip module documentation. Adding compression to Apache 2 with the Deflate module is just as simple.

[sourcecode language="apache"]
AddOutputFilterByType DEFLATE text/css
[/sourcecode]

The Deflate module is very configurable and powerful; be sure to read the full documentation before implementing it.
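Whichever server you use, you can sanity-check the payoff locally with the `gzip` command-line tool. A quick sketch, with a made-up file whose repetitive contents stand in for a real stylesheet:

```shell
# Generate ~4 KB of repetitive CSS (repetition is exactly what gzip loves).
printf '.btn { color: #333; padding: 4px; }\n%.0s' $(seq 1 120) > styles.css

# Compress at the maximum level, keeping the original for comparison.
gzip -9 < styles.css > styles.css.gz

# Compare the byte counts before and after.
orig=$(wc -c < styles.css)
comp=$(wc -c < styles.css.gz)
echo "original: ${orig} bytes, gzipped: ${comp} bytes"
```

Real-world stylesheets won’t compress quite as dramatically as this synthetic one, but the shape of the win is the same.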

It’s All Business in the Front
Frontend performance is often neglected, but there are plenty of straightforward improvements that can be made without much effort. If you do only one thing from this article, I’d recommend turning on asset compression in your web server. There is a lot to be had with compressed CSS and JavaScript, and very little effort is required to implement it.

In the final installment of this series, we’ll take a look at the common pitfalls that cause performance issues in the application layer itself. Until then, give New Relic a try and get your Data Nerd t-shirt for free when you deploy.

