Ultimate Guide to Page Speed: Intermediate


In this post: The things you can do with a little help from your friends. The tale of two servers. Why breaking stuff isn’t always bad. We learn to get lazy.

  • GZIP, aka HTTP compression
  • Using a CDN
  • Parallel downloads
  • Lazy loading
  • Disk caching
  • To SSL, or not to SSL?

GZIP, aka HTTP compression

Most web servers can compress files before sending them to the browser, which then uncompresses them. That’s called HTTP compression. It reduces the amount of pipe you use. It can be a huge page speed win, and while it does require getting your hands down into the server a bit, it’s simple enough that you can send a quick note to your webmaster and ask them to make the change.

GZIP is a compression utility for any stream of bytes, and it works best on text-based data like HTML, CSS, JavaScript, fonts, and XML. Compressing content reduces page load times, reduces the load on the server, and saves bandwidth. All modern browsers request GZIP-compressed resources by default, so it’s important to make sure your web server has GZIP enabled. While GZIP compression can be handled in multiple ways, it’s best done by the web server rather than the programming language.

If you have your own developer(s), the procedure is simple:

  1. Buy her a beer
  2. Ask very nicely if she’ll install mod_deflate…
  3. …and whether she’ll enable it

Easy.

I don’t have a web developer. And none of this makes sense.

If this is all some kind of strange, foreign language to you, you may want to avoid messing with server compression.

If you’re determined, though, here’s what you do:

Apache

For Apache, you’ll need a module called mod_deflate. Most Apache installations already include it.

Type

apache2ctl -M

at the command line and look for deflate_module in the output. If you don’t have it installed, a quick web search will turn up plenty of tutorials.

Here is a mod_deflate example for GZIP compression:

# gzip compression
AddOutputFilterByType DEFLATE text/html text/plain text/xml application/xml application/xhtml+xml text/javascript text/css application/x-javascript application/javascript
AddOutputFilterByType DEFLATE application/rdf+xml application/rss+xml application/atom+xml application/x-font-ttf application/x-font-otf font/truetype font/opentype

For NGINX, here is a similar example:

# gzip compression
gzip on;
gzip_static on;
gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml font/truetype application/x-font-ttf font/opentype application/vnd.ms-fontobject image/svg+xml;

A nice tool we like to use for checking compression can be found here: http://www.whatsmyip.org/http-compression-test/

That’s it—a few lines of code and you’re compressing a whole laundry list of file types.

On IIS, it’s even easier

You can enable either ‘static’ or ‘dynamic’ compression in Internet Information Server by checking a box and/or editing a configuration file. Every version of IIS seems to have a different though easy way to do it. Check your favorite search engine. It’s easy. Promise.
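As a sketch, the relevant web.config section looks something like this on IIS 7 and later (element names vary on older versions, so check the docs for yours):

```xml
<!-- web.config fragment: enable static and dynamic compression.
     Element and attribute names are for IIS 7+. -->
<system.webServer>
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>
```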

Use a content distribution network

A CDN (it’s easier than saying “content distribution network” all the time) uses a distributed set of servers to deliver certain website files—usually the ‘static’ ones, like images, CSS, and javascript. It speeds delivery and, with compression enabled, reduces file size, making better use of the pipe.

That speeds up your site several different ways. A CDN:

  1. Delivers files from the server that’s geographically closest to the person visiting the site
  2. Compresses files using GZIP (see above)
  3. Sends cookieless files, reducing packet size (way nerdy—see the next chapter)
  4. Allows parallel downloads (see below)

It’s not that hard to set up a CDN. We won’t recommend any here—don’t want to seem biased—but your web provider may have recommendations.

Parallel downloads

Up to now, we’ve talked about saving space in the pipe—reducing bandwidth usage.

The best case, though, makes efficient use of the whole pipe. That was true, at least, while HTTP/1.1 was the standard protocol. We’ve entered the HTTP/2 era, however, and under HTTP/2 parallel downloading can actually hurt performance.

HTTP/2 Considerations

The technologies introduced in HTTP/2 have drastically improved load times for assets coming from the same domain. Under HTTP/1.1, requests to each unique domain went into an ordered queue, which created a blocking scenario. That’s why parallel downloads are a recommendation under HTTP/1.1. It’s no longer the case with HTTP/2.

HTTP/1.1 Recommendations

If you have extra space in the pipe, you can use that to load multiple files at once. That speeds load time.

This technique, known as parallel loading (at least that’s what we call it), is easy: Put different files on different domains or subdomains.

Using a CDN

If you use a Content Distribution Network (CDN), you’ll set up one or more subdomains to deliver ‘static’ content. So the CDN will set up parallel downloads for you.

Doing it yourself

Even if you don’t use a CDN, you can set up parallel downloads. Create subdomains (talk to your web hosting provider if you can’t do this on your own). Place static files there.

Here’s an example:

  1. Set up images.example.com
  2. Put some or all of your images there
  3. Watch the magic happen

Look at this example. All three images start loading at the same time. They’re using available pipe more efficiently. That means a faster page load.

Three files, three subdomains, parallel loading
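In markup, the idea is just to spread static assets across hostnames. A sketch (the subdomains and file names here are hypothetical):

```html
<!-- Each image comes from a different hostname, so an HTTP/1.1
     browser can fetch all three at once. Hostnames are examples. -->
<img src="https://images1.example.com/header.jpg" alt="Header">
<img src="https://images2.example.com/photo.jpg" alt="Photo">
<img src="https://images3.example.com/footer.jpg" alt="Footer">
```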

Lazy loading

Usually, a browser loads every asset on the page, all at once. So, if you visit a page with lots of below-the-fold images, you download every image upon visiting that page. The typical visitor only sees the information above the red line (the fold) when they visit eigene-homepage-erstellen.net:

Even the chihuahua gets loaded

But the default loading behavior would deliver every image, even below the fold.

Lazy loading more efficiently uses the pipe and improves the browser experience. It’s also dang cool. Here’s how it works:

  1. You visit a web page
  2. The page loads visible, above-the-fold images first
  3. The page loads the remaining content only when you scroll down

The below-the-fold images only load when you scroll

It might require some serious programming expertise to build out your own lazy loading solution. Fortunately, you don’t have to. Some clever people have written javascript libraries for you:

jQuery Lazy Load, by Mika Tuupola

YUI Lazy Load, by Steven Olmstead

Check out those pages, follow the instructions and you’ll have lazy loading in place.
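To give you a feel for it, here’s roughly what the setup looks like with Tuupola’s plugin. This is a sketch—the paths and dimensions are made up, and you should check the plugin’s own docs for the current API:

```html
<!-- The real image URL goes in data-original; the plugin copies it
     into src when the image scrolls into view. Paths are illustrative. -->
<img class="lazy" data-original="/img/chihuahua.jpg" width="640" height="480" alt="Chihuahua">

<script src="/js/jquery.min.js"></script>
<script src="/js/jquery.lazyload.js"></script>
<script>
  // Attach lazy loading to every image marked with the class.
  $(function () {
    $("img.lazy").lazyload();
  });
</script>
```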

Again, test this stuff before you go live. Can’t say it enough times. Test it. Did we mention you should test it?

Disk caching

The server’s feeling neglected. We haven’t provided a single server-based tip for site speed.

Weep no further, noble server. It’s your turn.

If your site uses a content management system (like WordPress or Sitecore) or an e-commerce platform (like Magento or Demandware), it’s dynamically generating pages. The process works like this:

  1. You visit eigene-homepage-erstellen.net
  2. The server gets your browser’s request
  3. The server fetches content from a database
  4. The server merges that content into a template
  5. The server delivers the result to you

Lots of other sites may dynamically generate pages. Content management systems and e-commerce are the most common.

This five-step process adds up. Even if your site is simple, dynamic page delivery means the server has to hit the database every pageload, and that increases response time.

In this case, those extra steps slowed page load time by over one second. Oy:

Lack of disk caching slowed page 'time to first byte'

It’s infuriating, because it’s so easy to fix. Any web server and any CMS/online store software worth a plugged nickel can cache dynamically-generated pages on disk. In this case, “cache” means “store the generated pages somewhere, so the server doesn’t have to fetch content from the database again.”

Disk caching stores the pages on the disk drive. It changes the five-step process to this:

  1. You visit eigene-homepage-erstellen.net
  2. The server gets your browser’s request
  3. The server grabs the already-generated page from disk
  4. The server delivers the result to you

The only delay is the time required to grab the page from the drive.
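The decision the server makes boils down to a few lines. Here’s a toy sketch—every name and path in it is made up for illustration; real CMSes do this internally:

```shell
# Toy sketch of the disk-cache decision. Everything here is
# invented for illustration purposes.
CACHE="$(mktemp -d)/about.html"

generate_page() {            # stand-in for the slow database + template work
  printf '<html>About us</html>\n'
}

if [ ! -f "$CACHE" ]; then
  generate_page > "$CACHE"   # slow path: build once, store on disk
fi
cat "$CACHE"                 # fast path on every visit after that
```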

Developers raise the valid concern that page updates may not appear on the website as quickly as desired. If you edit a page that’s cached and the server only ‘refreshes’ that page once a day, your edits may not appear for 24 hours. Not good.

Fortunately, a good disk caching configuration lets you quickly refresh pages. The first time anyone visits the edited page, the server will regenerate it. All visitors will then see the new version. Some software even does this automatically, refreshing pages when edits are saved.

You can speed things up even more by storing files on a dedicated caching server and/or in memory. More about that in the Advanced chapter.

On many CMSes (we’re never sure – is it CMS? CMSes? CMes?) disk caching just requires that you click a button. You can then ‘purge’ pages from the disk cache by clicking another button. On others, it’s more complex. That’s why this is in the intermediate rather than the novice chapter.

We can’t go into every disk caching configuration for every content management/e-commerce system out there. If you’re a WordPress addict (we are), take a look at our special, just-for-you chapter.

Only load what you need

This may seem obvious, but: If your home page uses a giant chunk of CSS that no other page uses, put it into its own CSS file. Load that file only on the home page.

Do the same with javascript.

This is one of those forehead-smackers we’ve missed now and then. Hence its inclusion here.

Reduce HTTP requests

Every file loaded is one HTTP request. For each HTTP request, the client requests the file and the server provides it. Then the client has to download it.

Even with parallel downloads, every HTTP request is a tiny handshake between the client and server that slows performance.

A few easy ways to reduce HTTP requests:

Combine javascript files into fewer files. Do the same with CSS

Putting javascript into separate .js files helps. We’ve already suggested that. But if you create lots of little .js files, visiting web browsers must make lots of little requests. Each request carries roughly the same overhead, big file or small. So they drag things down.

Combining those files will reduce the number of HTTP calls by reducing the number of .js includes on the page.

You can do the same with CSS.
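The crudest way to combine files is plain concatenation—most build tools automate this plus minification. A sketch, with made-up file names (the same trick works for CSS):

```shell
# Suppose the page currently loads three little scripts
# (names are invented for the example):
printf '// carousel code\n'  > carousel.js
printf '// menu code\n'      > menu.js
printf '// analytics code\n' > analytics.js

# Combine them into one file, so the browser makes one request
# instead of three. Order matters if the scripts depend on each other.
cat carousel.js menu.js analytics.js > combined.js

wc -l combined.js
```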

Use sprites

Image sprites combine lots of little images into one larger one. Visiting browsers load the larger image. Your site positions the larger image to show the correct smaller image (called a sprite) at each location on the page.

Here’s an example: Say you use 30 icons on each page of your site. If they’re separate files, visiting browsers make 30 separate requests. Combine them into a single file, though, and visiting browsers make a single request.
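The positioning trick looks like this in CSS. The file name, sizes, and pixel offsets below are all hypothetical:

```css
/* One shared sprite sheet; each icon just shifts the background.
   File name and offsets are made up for the example. */
.icon {
  background-image: url("/img/icons-sprite.png");
  width: 32px;
  height: 32px;
  display: inline-block;
}
.icon-search { background-position: 0 0; }
.icon-cart   { background-position: -32px 0; }
.icon-user   { background-position: -64px 0; }
```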

We don’t like sprites

Actually, we’re not a fan.

Why not sprites? Lots of page speed tools recommend the use of image sprites. We don’t, because when you’re building responsive sites, sprites are a pain in the tuchus. And we build a lot of responsive sites. Still, it’s a valid tactic. So look into it.

To SSL, or not to SSL?

A while back, Google said they’d bump sites using SSL (secure sockets layer—those ‘https’ addresses you often see on e-commerce sites) up in the rankings.

The entire internet lost its collective mind. Webmasters scrambled to move their sites to secure https connections. Articles abounded:

We can only shake our heads

Unfortunately, many didn’t weigh the benefits against the problems. Unless you carefully optimize your server, SSL is a performance killer, because every SSL connection requires multiple ‘round trips’ between the server and the client. The loss in performance is bigger than any theoretical Google rankings improvement.

In fact, we finally moved our site to SSL in September 2015. We haven’t seen any improvement. Nor have any clients who moved, or any of our colleagues. Sometimes I think Google likes to bat us around like they’re a cat and we’re a ball of yarn.

Clearly, SSL is more secure. If you’ve got people providing sensitive information or just filling out forms, it can help your visitors feel more confident. They’re more likely to trust you with their information.

But those visitors are less likely to buy or contact you if your site takes a long time to load. Move to SSL, by all means. But do it carefully. Research SSL acceleration solutions and see how the move will impact other website gadgets and tools. SSL abounds with unexpected consequences, and we’ve seen some real doozies.

This gets complicated

You can take the tactics in this chapter too far. One tactic can affect others.

For example, if you work to reduce HTTP requests, keep other tactics in mind: Parallel downloads, caching and selective loading all require more, smaller files. Reducing HTTP requests generally leads to fewer, bigger files. At some point, one tactic may hurt more than the other helps.

There’s no rule here. Just be mindful and carefully balance your tactics. Don’t rush headlong into a single tactic, taking it to an extreme. Sort of like SSL.

And hey! Guess what! You should test all of this stuff before you launch it on a live site!

And breathe, and onward to Advanced. For those brave souls ready to make a serious commitment to user experience and a blazing fast web presence.

Chapter 6: Advanced – Varnish, Apache and nginx




