In the last post, we talked about some of the benefits that service workers can provide. And we did so in broad, general terms. In this post, we’ll look at the details of how to actually set one up. We’ll also cover the basics of the service worker life cycle, a concept that is important to understand as we move forward.
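As a quick preview of where we're headed, registering a service worker only takes a few lines. Here's a minimal sketch, assuming the worker script lives at `/sw.js` (a hypothetical path); the `navigator` object is passed in as a parameter so the logic can be exercised outside the browser:

```javascript
// Minimal sketch: register a service worker if the browser supports it.
// /sw.js is a hypothetical path to the worker script.
function registerServiceWorker(nav) {
  if (!nav || !('serviceWorker' in nav)) {
    // Unsupported browsers simply keep their normal network behavior.
    return Promise.resolve(null);
  }
  return nav.serviceWorker
    .register('/sw.js')
    .then((registration) => registration)
    .catch(() => null); // registration can fail, e.g. on non-HTTPS pages
}

// In the browser you would call:
// registerServiceWorker(navigator);
```

Note that service workers require a secure context (HTTPS, or localhost during development), which is one of the details we'll dig into below.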
For a while, the differences between native apps and web apps were readily apparent. But recently, the lines have begun to blur. In conjunction with the growing capabilities of browsers, service workers are giving developers the ability to build functionality into web apps that was once out of reach.
Currently, images make up around 64% of the average web page’s weight (1,598 kB of 2,480 kB), which makes them a great opportunity for performance optimization. Some basic tips include making sure images are the correct dimensions, in the proper format, and appropriately compressed. Today we’re going to look at the format part of this equation. Although JPG, PNG, and GIF make up 97% of the images currently being served, there are other formats that could be beneficial to consider. The focus of this article will be WebP, an image format that provides a lot to be excited about.
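One common way to serve WebP without abandoning browsers that don’t support it is the standard `<picture>` element: the browser uses the first `<source>` whose type it understands and otherwise falls back to the `<img>`. A minimal sketch (the file names are hypothetical):

```html
<!-- Browsers that understand WebP use the first source;
     everything else falls back to the JPG in the <img> tag. -->
<picture>
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="Description of the photo">
</picture>
```

This approach keeps the decision in the browser, so no server-side content negotiation or JavaScript feature detection is required.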
When it comes to displaying a lot of images on a web page, lazy loading offers some great benefits. But there is a fine balance between loading only what’s necessary and ensuring a smooth user experience. For instance, if the user scrolls fairly quickly, or if there is latency in loading the images, it’s possible that some images will not be fully loaded by the time they enter the viewport. One technique, using low-quality image placeholders, tries to keep the user experience as smooth as possible without sacrificing the performance gains lazy loading provides. We’ll take a look at how in this post.
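To make the idea concrete, here is one way the swap step might look: the page ships a tiny, blurry placeholder in `src` and keeps the full-size URL in a data attribute, then upgrades the image as it nears the viewport. This is just a sketch under those assumptions; the `data-src` attribute name is a common convention, not a standard:

```javascript
// Swap a low-quality placeholder for the full-resolution image.
// Assumes markup like: <img src="tiny-blur.jpg" data-src="full.jpg">
function upgradeImage(img) {
  const fullSrc = img.dataset && img.dataset.src;
  if (fullSrc && img.src !== fullSrc) {
    img.src = fullSrc; // the blurry placeholder is replaced in place
  }
  return img;
}

// In the browser, an IntersectionObserver can trigger the swap as
// each placeholder approaches the viewport:
if (typeof IntersectionObserver !== 'undefined' && typeof document !== 'undefined') {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        upgradeImage(entry.target);
        observer.unobserve(entry.target); // each image only upgrades once
      }
    });
  });
  document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
}
```

Because the placeholder is only a few kilobytes, the page still loads quickly, while a fast scroller sees a blurred preview instead of empty space.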
It’s not uncommon to utilize 3rd party scripts from time to time, perhaps for analytics, social sharing, or other services. One of the benefits of this is that once we’ve linked to the script, the 3rd party can make all the updates they want and we’ll immediately be using the updated code – we don’t have to download or update anything ourselves. But relying on a 3rd party also means we give up control over how that external file is served (how long it’s cached, how it’s compressed, etc.). This may not be a big deal, but if you want granular control over as many aspects of the page’s performance as you can get, there are times when it would be nice not to have to rely on external scripts.
Anyone familiar with front-end performance knows that using images efficiently is a big deal. By adjusting the size or compression (or format) of an image, you can see instant benefits. But what do you do when you have a large number of images to display on a page, and although they each may be fairly small, collectively they represent a huge amount of data that needs to be transferred over the wire?