Performance Bytes: Measuring what matters!

Sagar Desarda
6 min read · Apr 23, 2020


One Byte at a time!

Websites have transformed and evolved greatly over the years. Web pages have grown in size and complexity, and the focus on front-end performance is greater than ever. Users today expect a web page to respond as instantly as a button press on a remote control, and they are not shy about moving elsewhere for content if they are unhappy with the speed or responsiveness.

Now, you may ask — How much is a millisecond worth?

A British motivational speaker, Jay Shetty, famously said: to realize the value of a millisecond, ask the person who came second at the Olympics.

In a study published by Radware, a 500 ms delay increased user frustration by 26%. Marissa Mayer and her team at Google found that a 500 ms delay in Google searches led to a 20% drop in traffic. Walmart saw that every 100 ms improvement resulted in up to a 1% increase in revenue.

PAGE TIMING METRICS

Today, I will discuss how to measure web performance, i.e. the key metrics you may want to focus on:

First Contentful Paint (FCP): The time until the end user sees something visible on the web page, like the first text or image. It differs from First Paint, which detects any kind of render in the browser, even one that may not mean much to the end user.
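FCP can be observed directly in the browser via the standard Performance API. A minimal sketch, with the entry-filtering pulled into a plain helper (the browser wiring is guarded so the helper also works outside a browser):

```javascript
// Pick the 'first-contentful-paint' entry out of a list of paint entries.
function extractFcp(paintEntries) {
  const entry = paintEntries.find((e) => e.name === 'first-contentful-paint');
  return entry ? entry.startTime : null;
}

// In the browser, wire the helper to a PerformanceObserver for paint entries.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    const fcp = extractFcp(list.getEntries());
    if (fcp !== null) console.log(`FCP: ${fcp.toFixed(1)} ms`);
  }).observe({ type: 'paint', buffered: true });
}
```

`buffered: true` matters here: it delivers paint entries that occurred before the observer was registered, so the snippet still reports FCP even if your script loads late.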

First Meaningful Paint (FMP): The time at which the browser paints the content the users are actually interested in seeing on their screen. As you can tell, this is subjective and open to interpretation. Further, the metric is extremely sensitive to small differences in page load and relies on how each major browser has implemented it, making it difficult to get consistent numbers across the board.

On the first load of the page, try to avoid more than one round trip. We have to play within the values of the initial congestion window (initcwnd) under TCP Slow Start. If you serve your content through a CDN, note that CDNs also tune this value for additional network-layer optimization. The common guidance is to deliver above-the-fold content within roughly 14 KiB, which is 10 segments, i.e. one round trip, on a Linux-based OS.
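The arithmetic behind that budget is worth spelling out. A sketch, assuming the Linux default initcwnd of 10 segments and a typical MSS of 1460 bytes (a 1500-byte Ethernet MTU minus 40 bytes of TCP/IP headers):

```javascript
// Back-of-the-envelope budget for what fits in the first round trip.
const INITCWND_SEGMENTS = 10;   // Linux default initial congestion window
const MSS_BYTES = 1460;         // 1500-byte MTU minus 40 bytes of headers

const firstRttBudgetBytes = INITCWND_SEGMENTS * MSS_BYTES; // 14,600 bytes
console.log(`First-RTT budget: ${firstRttBudgetBytes} bytes ` +
            `(~${(firstRttBudgetBytes / 1024).toFixed(1)} KiB)`);
```

Anything over that budget costs at least one extra round trip before the browser can start rendering, which is why critical above-the-fold HTML and CSS should stay under it.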

Time to Interactive (TTI): A very important metric that measures how long it takes for a page to become interactive. The longer the wait here, the greater the user frustration, leading to rage clicks. We have all felt the pain of visiting a web page where something is rendered in the browser, yet we are unable to click anywhere on it.

Visually Complete: The time at which the page is fully rendered above the fold. This is another great metric because it looks specifically at the content in the user's viewport, the part the user can see without having to scroll.

Fully Loaded: The time at which the entire web page, including all its resources, has finished loading.

As we dive deeper, some other questions I would ask are how interactable my content is, and whether the interactions are smooth.

Source: https://web.dev

To answer those questions, here are a few more metrics to look at closely:

First Input Delay (FID): The time from when a user first interacts with your site, say by tapping a button or clicking a link, to the time the browser is actually able to respond to that interaction.
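FID is also exposed through the Performance API as a `first-input` entry: the delay is the gap between when the input occurred and when its handler could start running. A minimal sketch (the observer wiring is guarded so the pure helper also runs outside a browser):

```javascript
// FID = delay between the input event and when its handler could start.
function computeFid(entry) {
  return entry.processingStart - entry.startTime;
}

// In the browser, observe the (at most one) 'first-input' entry.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(`FID: ${computeFid(entry).toFixed(1)} ms`);
    }
  }).observe({ type: 'first-input', buffered: true });
}
```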

Max Potential First Input Delay (mFID): Essentially the worst-case FID your users might experience.

Total Blocking Time (TBT): The total time between First Contentful Paint and Time to Interactive during which the main thread was blocked long enough to prevent input responsiveness.
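TBT has a simple definition worth making concrete: for each main-thread long task (a task over 50 ms, as reported by a `longtask` PerformanceObserver), only the portion beyond 50 ms counts as "blocking," and TBT is the sum of those portions. A sketch, assuming the long-task entries between FCP and TTI have already been collected:

```javascript
// Sum the blocking portion (duration beyond 50 ms) of each long task.
function totalBlockingTime(longTasks) {
  const BLOCKING_THRESHOLD_MS = 50;
  return longTasks
    .map((t) => Math.max(0, t.duration - BLOCKING_THRESHOLD_MS))
    .reduce((sum, blocking) => sum + blocking, 0);
}
```

So a 120 ms task contributes 70 ms, a 40 ms task contributes nothing, and many small tasks can be far less harmful than one monolithic one of the same total duration.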

Here is an excellent article on how to read and understand the waterfall charts and the different page timing metrics: https://nooshu.github.io/blog/2019/10/02/how-to-read-a-wpt-waterfall-chart/.

HOW DO I PLAN?

If you were to start an initiative in your organization to undertake such an effort, how would you go about it? Here is how I would do it:

1. Measuring what matters: Identifying the metrics that mean the most to your site

2. Discovery: Traffic analysis and identifying the areas which have the most ROI

3. Defining the scope

4. Image Analysis

Content breakdown by bytes, for a popular e-commerce website

Images typically account for around half of a page's bytes, so it is absolutely critical that they are optimized so the page loads quickly, especially for users on low bandwidth.

Drilling further into image formats, PNGs are known to not compress well, and it is time to look past vanilla JPEGs and PNGs. Formats like WebP, HEIF (pronounced "heef"), Zopfli PNG, Guetzli JPEG, JPEG 2000 (JPX), and JPEG XR (JXR) help achieve much greater savings.

Companies like Cloudinary take it a step further and completely offload the heavy lifting of managing, optimizing, and serving these images (and videos) to users.

5. Applying first principles

I. Splitting the code

There is a fine line between splitting the code so aggressively that you increase the overall bytes the browser downloads, and getting it just right. But when done right, you will surely see the gains.
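The most common form of this is a dynamic `import()`, which bundlers such as webpack or Rollup turn into a separate chunk fetched on demand. A sketch, where the `./charting.js` module and `renderChart` function are hypothetical names for some heavy dependency:

```javascript
// Route- or feature-level code splitting with a dynamic import().
// The charting code is only downloaded the first time a chart is needed,
// keeping it out of the initial bundle.
async function showChart(container, data) {
  const { renderChart } = await import('./charting.js');
  renderChart(container, data);
}
```

The tradeoff the paragraph above describes shows up here: each extra chunk adds a request and some bundler overhead, so splitting at route boundaries or behind rarely-used features usually pays off, while splitting tiny modules does not.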

II. Slim down libraries

III. Shared Bundles

IV. Brotli/Zopfli Compression

All modern browsers now support Brotli, and Zopfli produces output compatible with existing gzip (deflate/zlib) decoders. These algorithms achieve higher compression ratios than standard gzip, but the compression itself can take considerably longer. A best practice is to offload this heavy lifting to a CDN, if you are using one.

V. Leverage HTTP/2 Server Push, prioritization, and resource hints such as dns-prefetch and preconnect.
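Resource hints are usually written as `<link>` tags in the document head, but they can also be injected from script. A sketch with placeholder hostnames and paths (the DOM calls are guarded so the helper parses outside a browser too):

```javascript
// Inject a resource hint as a <link> element in the document head.
function addHint(rel, href, as) {
  const link = document.createElement('link');
  link.rel = rel;        // 'dns-prefetch', 'preconnect', or 'preload'
  link.href = href;
  if (as) link.as = as;  // required for rel="preload" (e.g. 'font', 'script')
  document.head.appendChild(link);
  return link;
}

if (typeof document !== 'undefined') {
  addHint('dns-prefetch', 'https://cdn.example.com'); // resolve DNS early
  addHint('preconnect', 'https://cdn.example.com');   // DNS + TCP + TLS early
  addHint('preload', '/fonts/brand.woff2', 'font');   // fetch a known-critical asset
}
```

The progression matters: dns-prefetch only resolves the hostname, preconnect also opens the TCP and TLS connection, and preload actually fetches a specific resource, so reserve preload for assets you are certain the page needs.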

HOW DO I MEASURE?

If you can’t measure it, you can’t improve it!

There are quite a few tools out there, and I would essentially break them down into two categories: Synthetic Monitoring and Real User Monitoring. Some of the good ones are:

Synthetic Tools: Gomez, Dynatrace

Real User Monitoring (RUM): Dynatrace RUM, New Relic Browser

I prefer RUM-based solutions because they truly paint the performance picture as perceived by your real users. But I have also seen cases where synthetic monitoring tools have been a huge help with their consistent, around-the-clock monitoring. Because the monitoring is automated, you don't have to rely on real users to get an accurate picture of how your site is doing, especially during low-traffic hours. I think of synthetic monitoring as "active monitoring." Think of that major code release on your website that happened during low-traffic hours (obviously!). Synthetic monitoring will help you catch potential issues much faster, before the traffic ramps up on your website.

Lastly, I would leave you with a message: build a performance-oriented mindset in your organization. With every new functionality and every piece of code you add, think about what it would do to performance. Use tooling and data to drive those decisions. You may have to make certain tradeoffs, and it won't be easy. But the end result is a happy, satisfied customer base, and we should always work backwards from that!

Have a performance mindset. It will become a habit before you realize it.



The views in the articles are mine alone and do not represent my employer.
