Web Performance

Published by Dan Sleeth

How fast your website loads and performs can have a direct impact on customer engagement and can be the difference between someone sticking around to make a purchase or abandoning and looking elsewhere. Studies from major online retailers have shown the positive impact improving the loading time of your site has on conversions.

Moreover, the explosion in traffic from comparatively low-powered mobile devices on unreliable cellular networks has brought into sharp focus the need to address web performance issues. Somewhat surprisingly, a study by web monitoring company Gomez found that users expect websites to load faster on mobile devices than desktops.

Clearly, we have our work cut out for us if we are to live up to our customers’ expectations. The simplest advice is to make your sites and pages as light as possible: remove any unneeded code, images and scripts to reduce both payload size and the number of server requests. Whilst this is sound advice, in today’s landscape of image-heavy, JavaScript-driven pages it isn’t always feasible. To that end, focussing on perceived performance can be just as helpful.

Perceived Performance

Back in the early 90s, research from respected software usability consultant Jakob Nielsen found that users become distracted after 1 second of assumed inactivity. Although the research was discussing desktop apps, the findings are just as relevant to the web. What this research tells us as web designers, developers and producers is that we have no more than a second to keep a user’s focus by ensuring that we begin painting to the screen within this timeframe.

This challenge is greater still when we consider that it can take around 300ms (or longer) just to carry out the initial communication with the server. Taking this into account, we only have in the region of 700ms to begin painting to the screen.

Render-blocking Assets

One way to help break this 1 second barrier is to reduce the number of “render-blocking” scripts your website requests. JavaScript files in particular can be harmful to first paint times, as the browser has to stop everything else while it downloads and executes each script. If you have multiple scripts referenced in your document’s <head>, the impact is compounded.

Therefore, your first step should be to move all script calls to the bottom of the page, just before the closing </body> tag. This allows the browser to completely download and parse your HTML, i.e. your content, before having to tackle any JavaScript and so your content will be rendered on screen more quickly.
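As a rough sketch (file names are placeholders), the document structure looks like this:

```html
<!DOCTYPE html>
<html>
<head>
  <title>Example page</title>
  <link rel="stylesheet" href="styles.css">
  <!-- No <script> tags here: they would block rendering -->
</head>
<body>
  <h1>Content renders first</h1>
  <!-- Scripts are parsed only after the content above -->
  <script src="app.js"></script>
</body>
</html>
```

Alternatively, modern browsers support the `defer` attribute on `<script>` tags, which downloads the script in parallel but delays execution until parsing has finished, letting you keep script references in the `<head>` without blocking rendering.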

CSS is also render-blocking, but it shouldn’t be moved to the bottom of the page unless you are happy for users to see your raw, unstyled HTML, only for it to change dramatically once the stylesheet has been downloaded and parsed. This “flash of unstyled content” isn’t a particularly pleasant user experience.

So, what can we do if one of the first items the browser needs to download prevents it from doing anything else while it’s doing so? In recent years, a technique has emerged whereby you extract your critical, or “above the fold”, CSS and “inline” it into the <head>. This technique produces a subset of your CSS, just enough to render the content in the visible part of the user’s display. Additionally, by inlining the code, we prevent the browser from having to make another request to the server, negating the associated latency that comes with such a request.

This means the browser is given just enough CSS to render this visible content as intended and we can then dynamically load the remaining CSS after the document has finished downloading. Tools such as Critical can help to automate this process.
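A minimal sketch of the pattern (selectors and file names are illustrative; the preload trick is the one popularised by Filament Group’s loadCSS):

```html
<head>
  <style>
    /* Critical, above-the-fold styles inlined directly */
    body { margin: 0; font-family: sans-serif; }
    .hero { height: 100vh; background: #123456; }
  </style>
  <!-- Fetch the full stylesheet without blocking first paint,
       then apply it once it has downloaded -->
  <link rel="preload" href="main.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="main.css"></noscript>
</head>
```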

Compression and Minification

Your server should be configured to use Gzip to dynamically compress text-based assets – HTML, CSS, JavaScript, SVG – prior to being sent over the network; already-compressed formats such as JPEG gain little from it. This reduces bandwidth and download times with just a simple configuration change to your server.
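As an illustration, assuming an nginx server, the relevant directives might look like this (HTML is compressed by default once `gzip` is on):

```nginx
# Enable gzip compression for text-based assets
gzip on;
gzip_comp_level 5;
gzip_min_length 256;   # skip tiny files where gzip adds overhead
gzip_types text/css application/javascript application/json image/svg+xml;
```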

Before deployment, CSS and JavaScript should also be compressed through a process known as minification. This technique strips the files of all unnecessary characters, such as spaces and carriage returns, to deliver a smaller payload to the browser. These characters exist purely to make the files more human-readable and, as such, are superfluous in a production environment.
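To make the idea concrete, here is a deliberately naive minifier sketch in JavaScript; real tools such as cssnano or terser are far more sophisticated and should be used in practice:

```javascript
// Naive illustration of minification: strip comments and collapse whitespace.
function naiveMinify(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')  // remove comments
    .replace(/\s+/g, ' ')              // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1') // drop spaces around punctuation
    .trim();
}

const source = `
/* main button */
.button {
  color: #fff;
  background: #0066cc;
}
`;

console.log(naiveMinify(source));
// ".button{color:#fff;background:#0066cc;}"
```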

JavaScript can also be further compressed using a technique called “obfuscation”. This process takes variables and functions within the JS file and renames them so they use as few characters as possible. This further reduces the file size and also the human-readability of the code.


Images are often the biggest cause of overweight web pages. Your images should also be compressed and optimised appropriately as discussed in our other post, Choosing the Right Image Format. But there are other techniques we can use to optimise images as well.

Responsive Images

We’ve always had a “one size fits all” approach to images; whether the user’s display is 300 pixels wide or 3000, we send the same-sized image over the network and let the browser scale it up or down accordingly. In today’s world of mobile browsing and expensive cellular data, that just isn’t practical any more. We shouldn’t be sending 2000px wide “hero” banners to devices with small screens or poor connections or both.

Unfortunately, the information we need to deliver right-sized images to the user – screen size, pixel density, connection speed – just isn’t available to us when we are building our sites. Although server-side solutions to this problem have existed for a while, these techniques rely on assumptions being made about the user’s device and browser and so can only get us part way to a desirable solution. A client-side solution based on information the browser has about the user’s conditions is what we need.

And as luck would have it, that’s exactly what we have! A group of industrious designers and developers, known as the Responsive Images Community Group, worked on a solution to the problems discussed above and had their work added into the official HTML specification which has now been implemented in all major browsers.

The RICG solution consists of two new features: a new HTML element, <picture>, and two new attributes, srcset and sizes, which can be used on the existing <img> element. Oftentimes, using srcset and sizes will be enough and you won’t need to use the <picture> element at all.

A detailed article on Responsive Images can be found on Mozilla Developer Network, but in a nutshell the solution allows you to provide a list of different sized images to the browser along with hints as to their dimensions and how big they should be on the page. Given this information and the information the browser has about the user’s current browsing context, it is able to pick the most appropriate image to download and display.
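A minimal sketch (file names and breakpoints are illustrative): the `w` descriptors tell the browser each candidate’s intrinsic width, and `sizes` tells it how wide the image will be laid out on the page:

```html
<!-- Three candidate widths; the browser picks the best fit -->
<img src="hero-800.jpg"
     srcset="hero-400.jpg 400w,
             hero-800.jpg 800w,
             hero-1600.jpg 1600w"
     sizes="(min-width: 900px) 800px, 100vw"
     alt="A hero banner">
```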

Lazy Loading

Similar to the idea behind critical CSS, why send images to the user if they aren’t going to see them right away? Lazy loading is a technique which allows us to replace non-visible images with a basic placeholder which is then replaced when the image scrolls into view.

There are a number of JavaScript libraries you could use to implement lazy loading, but recently browsers have begun to implement a new API, IntersectionObserver, which allows developers to write more succinct and better-performing code to implement this feature.
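A sketch of the IntersectionObserver approach (file names and attribute names are placeholders):

```html
<img data-src="photo.jpg" src="placeholder.gif" alt="A lazily loaded photo">

<script>
  // Swap in the real image once it scrolls near the viewport
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        const img = entry.target;
        img.src = img.dataset.src;   // trigger the real download
        obs.unobserve(img);          // stop watching once swapped
      }
    });
  }, { rootMargin: '200px' });       // start loading slightly before visible

  document.querySelectorAll('img[data-src]')
          .forEach(img => observer.observe(img));
</script>
```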

Web Fonts

Web fonts can add a lot of personality to your site but add extra weight and can slow down the rendering of the page. Using a font service such as Google Fonts or Adobe Fonts is probably a more sensible idea than hosting and serving the fonts from your own infrastructure. These services work hard to ensure they deliver fonts quickly in a way which doesn’t impact page render times too much, saving you time to work on other optimisations! Hosting fonts on a separate domain to your own also allows browsers to download more assets in parallel, which is yet another performance enhancement.
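For illustration, a Google Fonts embed typically looks something like this; the preconnect hints let the browser open connections to the font hosts early, and `display=swap` shows fallback text while the font downloads:

```html
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="stylesheet"
      href="https://fonts.googleapis.com/css2?family=Open+Sans&display=swap">
```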

Using a Content Delivery Network

Just as using a font service can increase performance, so too can using a Content Delivery Network (CDN) for the same reasons.

CDNs are third-party services which allow you to host your static assets – CSS files, JavaScript files, images etc – on their servers instead of your own. These servers are highly optimised for delivering these kinds of assets and, moreover, are hosted all across the globe. What this means in practice is that when a visitor’s browser requests assets from your page, a connection will be made to whichever of the CDN provider’s servers is closest to them. For example, if your web hosting was based in London and you received a visit from a user in Tokyo, they would download your assets from the CDN’s nearest server, greatly reducing the round-trip time for these assets.

Code Quality

It should go without saying that your code quality can also affect the performance of your site. If your HTML, CSS and JavaScript are bloated, not only will they take longer to download but the browser will take longer piecing everything together and applying the correct styling and behaviour to everything.


Using semantic HTML is not only good for accessibility and SEO, it can also give you access to native browser functionality, saving you time (and code!) from building custom components and controls.


Additionally, if you are using animation make sure that you’re taking advantage of improvements in CSS and hardware optimisations in modern browsers.

Historically, libraries like jQuery were a quick and powerful way of adding animated interactions to your site but the code powering these animations was, naturally, mostly JavaScript-reliant and very processor intensive. Lower-powered devices (like phones) or systems under heavy load could struggle in displaying smooth animations, often manifesting as stuttering on the page.

Thanks to CSS3, animation can and should be done with CSS, which is the appropriate technology to use for presentational aspects of your site. Using CSS for animation also means the browser can leverage hardware acceleration by offloading the animation calculations to the user’s GPU which is much better optimised to handle such code over the CPU. This reduces CPU load, improves visual performance and helps prolong battery life on portable devices.
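As a sketch (class names are illustrative), restricting animations to transform and opacity keeps them on the browser’s compositor, avoiding expensive layout and paint work:

```css
/* Animate transform and opacity: both can be GPU-composited,
   so the browser skips layout and paint on each frame */
.card {
  opacity: 0;
  transform: translateY(20px);
  transition: transform 300ms ease-out, opacity 300ms ease-out;
}

.card.is-visible {
  opacity: 1;
  transform: translateY(0);
}
```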

On the Server

The majority of performance improvements occur on the client side, but there are still things you can do on your server to speed things up as well. First and foremost, you should enable HTTP/2, which dramatically increases the number of parallel downloads a browser can make. Secondly, if you don’t have a CDN in place, ensure your static assets have far-future caching enabled. This tells a user’s browser that these assets are unlikely to be updated regularly, so it can safely store them in its local cache. On return visits to your site, the browser will pull these assets from its cache rather than your servers, significantly reducing load times.
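As an illustration, assuming an nginx server, the two changes might look like this. Note that far-future caching works best when asset filenames are fingerprinted (e.g. `main.a1b2c3.css`), so an updated file gets a new URL and bypasses the old cached copy:

```nginx
server {
    listen 443 ssl http2;   # HTTP/2 requires TLS in all major browsers

    # Far-future caching for fingerprinted static assets
    location ~* \.(css|js|png|jpg|svg|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}
```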

Tools & Resources

Most web browsers have performance monitoring built into their dev tools, which can help you to pinpoint issues on a specific page. Beyond that, tools such as WebPageTest and Google Lighthouse can give you powerful insights into performance bottlenecks and also help you test performance on multiple browsers and devices. These services also provide APIs which can be used for automated testing and monitoring purposes.

For a deep dive on the subject, Responsible Responsive Design by Scott Jehl and Designing for Performance by Lara Callender Hogan are both excellent reads.

