
The Chronology of a Click, Part XV

Whew! That was a long series, but we’re finally at the end. Let’s wrap up with the top ten things to consider when our thoughts turn to website performance.

There are many performance tips out there, but performance may be affected more by how the development team thinks about it. Its priority, the number of person-hours allocated to it, and the willingness of individual developers to change their habits as their programming environment changes all contribute to performance. Here are the top ten things every developer should keep in mind.

 

#1 – It All Goes Back to the User’s Experience

We created our website with a goal in mind. That goal is usually, but not always, profit. Who is the final judge of our success? Whose opinion matters more than anyone else’s?

The people who use our website will decide its ultimate fate. They will compare it to all the other websites that do approximately the same thing. They will also consider the option of not using any website, perhaps accomplishing the task some other way or perhaps not doing it at all. Our website is up against some stiff competition.

If the users don’t use our website, it has no purpose. It will shrivel up and die. It may happen suddenly or it may cling to life for a long time, but it will die.
 
A website without users is meaningless.

Performance is not the only factor users consider, but there is solid evidence that it affects their decision rather dramatically. We used to talk about the 8-second rule: if a web page isn’t fully loaded and ready to go in 8 seconds, our users abandon us. Recent studies have shown that the 8-second rule has become the 2-second rule. Have the web page up and running in two seconds or say bye-bye to the users.

Any definition of performance that does not include measurements from the user’s perspective is flawed. Website performance must be defined by how long the user waits. This series talked about loading a web page in response to a user’s click. In that context, performance is the time from the click to the time the new web page is fully interactive. This is the relevant timeframe for our discussions.
 
If we do anything during the relevant timeframe that can be done outside the relevant timeframe, we are contributing to any performance problem that may exist. Example: Much can be done during the build process, but seldom is. Another example: Much can be done by offline batch processes, but seldom is. The biggest example: Dynamic web pages need not be dynamic. They can be created as static web pages by an offline batch process that runs every so often (as long as “every so often” is often enough that our users don’t notice).
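To make the batch idea concrete, here is a minimal Node.js sketch of a job that pre-renders a “dynamic” page as plain HTML. The data source and output path are hypothetical; run something like this from cron every few minutes and the web server can hand out the result as an ordinary static file:

  // build-news-page.js - a rough sketch of an offline batch process that
  // pre-renders a "dynamic" page as static HTML, so no user ever waits for it.
  var fs = require('fs');

  function loadHeadlines() {
    // Placeholder: in a real batch job this would query the database.
    return [
      { title: 'First headline',  url: '/story/1' },
      { title: 'Second headline', url: '/story/2' }
    ];
  }

  var items = loadHeadlines()
    .map(function (h) { return '<li><a href="' + h.url + '">' + h.title + '</a></li>'; })
    .join('\n');

  var html = '<!DOCTYPE html>\n<html><head><title>News</title></head>\n' +
             '<body><ul>\n' + items + '\n</ul></body></html>';

  // The web server serves this file like any other static resource.
  fs.writeFileSync('public/news.html', html);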

 

#2 – KISS

For our discussion, KISS stands for “Keep It Short and Simple.” This is one case where size really does matter.

Much of our delay comes from getting multiple streams of bytes from a remote machine (the server). Each byte takes time to transmit. One byte alone isn’t a big deal, but the more bytes we transmit, the more time it takes. The bytes include content, images, HTML, CSS, JavaScript, comments, whitespace, and perhaps more. Each is therefore a target for minimization.

Some of the delay is because of cookies. Every cookie for that web page, its ancestor directories, and its ancestor domains is transmitted to the server even if it isn’t needed. Serving components from a cookieless domain or an IP address solves this problem.

Some web pages contain duplicate scripts and/or style sheets. Some browsers actually download duplicates multiple times. Either the developers or the browser manufacturers (or both) should stop doing this. Eliminate duplicates!
 
JSON is more compact than XML, but doesn’t have XML’s close ties to the DOM. There may be opportunities to reduce download size by using JSON instead of XML.

Image size can be reduced by reducing the resolution, reducing the colour depth, and stripping out the metadata. Different image formats yield different file sizes, so create the image in multiple formats and pick the smallest file at a compression level that provides acceptable quality.

JavaScript, CSS, and HTML should be minified. Strip out the extra whitespace, including newlines. Eliminate comments. Keep identifiers short. There are tools available that will do this for us, but some of them introduce errors, so test their results.

On the server side, it is sometimes possible to reduce the number of calls to external services (e.g., databases), which may or may not give better performance.
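To see the JSON-versus-XML point in concrete terms, here is a tiny sketch (with made-up data) that compares the two encodings of the same record:

  // size-compare.js - a rough sketch showing why JSON is usually the smaller
  // wire format for the same record. The record itself is hypothetical.
  var record = { id: 42, name: 'Warren', city: 'Waterloo' };

  var asJson = JSON.stringify(record);
  var asXml  = '<record><id>42</id><name>Warren</name>' +
               '<city>Waterloo</city></record>';

  console.log('JSON characters:', asJson.length);  // noticeably fewer...
  console.log('XML characters: ', asXml.length);   // ...than the XML version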

 

#3 – Needed-Now vs. Needed-Soon vs. Maybe-Needed-Later

Some components are needed before the web page can be truly interactive. The user needs them immediately, so loading them after onLoad fires won’t help; it may also cause the page to jump around while the user watches. I call these the needed-now components.

Some components can wait until after onLoad fires, but not for long. The user needs them visible and operational within a few seconds. I call these the needed-soon components. Example: Anything below the fold won’t be viewed immediately, so it may not be a needed-now component.

Some components may not be needed in the first few seconds and perhaps may not be needed at all. I call these the maybe-needed-later components.

Needed-now components should be downloaded inline and immediately, with style sheets in the <head> and scripts at the bottom. However, a script that outputs content to the page should use document.write() or slip into raw HTML mode, and in both cases it should be inlined at the point the output appears. Directly accessing the DOM for this is a bad idea.
 
Needed-soon components should be executed in response to the <body>’s onLoad event and in the order we expect the users to need them. Downloading them before onLoad (by using defer or async) may or may not yield better overall performance. Try it and see.
 
Maybe-needed-later components should be downloaded with XMLHttpRequest() after onLoad and after the needed-soon components are fully loaded. However, the container (e.g., <div>) we put them in should be considered a needed-now component and should use absolute or fixed positioning. Providing an accurate progress indicator puts the user at ease and lets him know how long it will take, which helps him decide whether to go for a coffee now or later.

After all the needed-now, needed-soon, and maybe-needed-later components have finished downloading and executing, why not download cacheable resources that may be needed on the next page the user visits? True, some mind-reading is required, but a little bit of data mining on historical workflows may make us very gifted mind-readers. Downloading ahead of time is called “preloading.”
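Putting the three categories together, a loading sequence might look something like the sketch below. The URLs, the container id, and the choice of resources to preload are all hypothetical; the point is the ordering:

  // loader.js - a rough sketch of the needed-soon / maybe-needed-later /
  // preload sequence described above.
  window.addEventListener('load', function () {
    // Needed-soon: pull in a script the user will want within a few seconds.
    var s = document.createElement('script');
    s.src = '/js/needed-soon.js';
    s.onload = loadMaybeNeededLater;          // keep the stages in order
    document.body.appendChild(s);
  });

  function loadMaybeNeededLater() {
    // Maybe-needed-later: fetch extra markup into a container that was laid
    // out as a needed-now component, so the page does not jump around.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/fragments/maybe-needed-later.html');
    xhr.onload = function () {
      document.getElementById('later-container').innerHTML = xhr.responseText;
      preloadNextPage();
    };
    xhr.send();
  }

  function preloadNextPage() {
    // Preloading: warm the browser cache with images the next page will
    // probably need, based on whatever mind-reading we can manage.
    ['/img/next-page-banner.png', '/img/next-page-icons.png'].forEach(function (url) {
      new Image().src = url;
    });
  }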

 

#4 – CSS & The DOM

Cascading style sheets (CSS) and the Document Object Model (DOM) are combined into one section because changes to the CSS trigger changes in the DOM, and changes to the DOM can change the CSS.

Designing CSS selectors poorly is the primary source of performance problems in CSS. Reflows are the primary source of performance problems in the DOM.

 

#4a – CSS Selectors

CSS’s strength is its flexibility, which comes partly from its system of selectors. As we see so often in life, though, its strength can also be its weakness. Using selectors poorly can cause performance problems.

Now let’s make one thing clear: not all selectors cause performance problems. The problems we’ve seen all stem from a specific, fairly well-defined subset of CSS. Steve Souders suggests:

“For most web sites, the possible performance gains from optimizing CSS selectors will be small, and are not worth the costs. There are some types of CSS rules and interactions with JavaScript that can make a page noticeably slower. This is where the focus should be.”

 

Some suggest avoiding CSS selectors completely. As Steve Souders suggested, I don’t think we need to go that far. However, it’s probably a good idea to avoid the following as much as possible:

  • too many rules (there’s that KISS principle again)
  • id selectors with anything preceding the “#”
  • class selectors with anything preceding the “.”
  • descendant selectors that descend too much
  • overqualified selectors (see Buckthorn’s comment in Souders’ article)
  • universal rules
  • :hover
  • counting from the end (e.g., :last-child, :last-of-type, :nth-last-child(n), :nth-last-of-type(n))
  • anything a novice maintenance programmer would struggle with (it’s likely a performance problem, and it will cause other problems, too)
  • CSS expressions (only allowed in certain non-conforming browsers; now deprecated in those browsers)

Most of these tips come from Writing Efficient CSS, written 2000.04.21 by David Hyatt.

 

Very generally speaking, an id selector (e.g., #menuitem) is more efficient than a class selector (e.g., .GaebelQuote), which is more efficient than a child selector (e.g., p>img), which is more efficient than a descendant selector (e.g., p img).

Some suggest that CSS performance is a non-issue for most websites because their CSS is simple enough not to become a problem. It is still a good idea to avoid the above selectors to keep future performance issues at bay.
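If you want to know whether a particular selector actually matters on your page, a quick timing loop is more convincing than any rule of thumb. The sketch below (with made-up selectors and an arbitrary iteration count) measures how quickly the browser can match a selector against the DOM; that is only a rough proxy for style-recalculation cost, but it is enough to show relative differences:

  // selector-timing.js - a quick-and-dirty sketch for comparing two selectors
  // on a real page. Substitute your own selectors for the examples.
  function timeSelector(selector, iterations) {
    var start = Date.now();
    for (var i = 0; i < iterations; i++) {
      document.querySelectorAll(selector);    // force the browser to match it
    }
    return Date.now() - start;                // total milliseconds
  }

  console.log('id selector:        ', timeSelector('#menuitem', 5000), 'ms');
  console.log('descendant selector:', timeSelector('body div ul li a', 5000), 'ms');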

 

#4b – Reflows

The browser’s layout engine has to lay out a frame in the render tree whenever the parser, the CSS, or a JavaScript changes the frame’s layout information. If the change affects the frame’s size or position, any affected frames also need to be laid out again. This is called a reflow. The problem is that this snowball effect seems to happen much more often than we expect, and reflows can significantly affect performance. [This problem is covered in the previous episode of this series.]

The browser’s rendering engine reads the frames in the DOM’s render tree and uses this data to display the text and images to the user. Because it runs concurrently with the layout engine, it displays changes as they occur, or with a slight delay. The users get to watch the show as the page changes and changes again, which is a distraction from what they should be doing.
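The usual defence is to batch DOM changes so the layout engine is disturbed once instead of hundreds of times. A minimal sketch (the list id and item count are hypothetical):

  // reflow-batching.js - build the new content off-document, then attach it
  // once, so layout is invalidated (at most) once instead of once per item.
  var list = document.getElementById('search-results');

  // Wasteful: each appendChild below can invalidate layout and trigger a reflow.
  // for (var i = 0; i < 200; i++) {
  //   var li = document.createElement('li');
  //   li.textContent = 'Result ' + i;
  //   list.appendChild(li);
  // }

  // Better: one DocumentFragment, one insertion.
  var fragment = document.createDocumentFragment();
  for (var i = 0; i < 200; i++) {
    var li = document.createElement('li');
    li.textContent = 'Result ' + i;
    fragment.appendChild(li);
  }
  list.appendChild(fragment);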

 

#5 – Compression

Compression not only affects performance dramatically, it is also very simple to implement. It leaves me scratching my head when I see how many websites do not compress HTML, CSS, JavaScript, JSON, XML, and all other text data.

Did you know that some hosting companies don’t have compression installed? Others have it turned off for some or all text resources and don’t give us the power to turn it on. Others have it turned off but do give us the power to turn it on. Find out how the hosting company handles compression before signing on the dotted line.
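In practice compression is normally a configuration switch in the web server or the host’s control panel, not hand-written code, but the behaviour we want is easy to sketch. Here is a minimal Node.js example, using only built-in modules, that gzips a text response whenever the browser says it accepts gzip:

  // gzip-server.js - a minimal sketch of what the web server should be doing
  // for us: compress text responses for browsers that accept gzip.
  var http = require('http');
  var zlib = require('zlib');

  http.createServer(function (req, res) {
    var body = '<html><body><p>Hello, compressed world.</p></body></html>';
    var acceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '');

    if (acceptsGzip) {
      res.writeHead(200, { 'Content-Type': 'text/html',
                           'Content-Encoding': 'gzip' });
      res.end(zlib.gzipSync(body));           // compressed on the way out
    } else {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(body);                          // fallback for old clients
    }
  }).listen(8080);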

 

#6 – Caching & Memoization

Caching and memoization store data somewhere fast instead of fetching it repeatedly from some slow place. The first time we access the data, we have to go to the slow place, but every time after that, we can get it from the fast place. Example: After accessing a web page from a server in a land far, far away, we can copy it to our hard disk, then access the disk copy the second and subsequent times. A local hard disk is much faster than traversing the Internet.
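Program-controlled memoization (one of the types listed below) is the easiest form to show in a few lines. A minimal sketch, where slowLookup() stands in for any expensive, repeatable computation:

  // memoize.js - the first call does the slow work; later calls with the same
  // argument are served from the in-memory cache.
  function memoize(fn) {
    var cache = {};
    return function (key) {
      if (!(key in cache)) {
        cache[key] = fn(key);                 // slow path, taken once per key
      }
      return cache[key];                      // fast path ever after
    };
  }

  function slowLookup(id) {
    // Imagine a database query or remote call here.
    return 'result for ' + id;
  }

  var fastLookup = memoize(slowLookup);
  fastLookup('widget-7');   // does the slow work
  fastLookup('widget-7');   // served from the cache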
 
There are many different types of caching:

  • browser caching,
  • intermediate caching servers,
  • caching of file contents in memory instead of on disk,
  • HTML produced by the web server’s content generator,
  • server-side opcode caching,
  • file path resolution caching (relative → absolute paths),
  • query caching in the database,
  • client-side opcode caching,
  • caching session data in shared memory instead of on disk,
  • program-controlled memoization,
  • and others that I may have missed.

 

Some forms of caching need to be turned on. Some need to be configured. Some are built-in and automatic. Developers need to make sure caching is configured optimally for their websites and that it is actually being used.

Intermediate caching servers and browsers need to know when each component expires. Until that time, they will use the locally-cached copy. After that time, they will ask the server whether the locally-cached copy is still valid. If it is, they will still use it. If it isn’t, the server will send them the newest version, which they will then cache. We need to make sure the expiry date is as far into the future as possible.

To maximize caching, set all expiry dates 100 years into the future. If a component needs to be changed, rename it (and all references to it). Never reuse a name.
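A minimal Node.js sketch of that policy, with a hypothetical content-hashed file name; most sites would set the same header in the web server configuration rather than in code:

  // far-future.js - the "far-future expiry, rename on change" rule in miniature.
  var http = require('http');
  var fs = require('fs');

  http.createServer(function (req, res) {
    if (req.url === '/js/app.3f9c2a.js') {
      res.writeHead(200, {
        'Content-Type': 'application/javascript',
        // Roughly 100 years, per the advice above. Browsers and intermediaries
        // can cache it forever because a changed file gets a brand-new name.
        'Cache-Control': 'public, max-age=3153600000'
      });
      res.end(fs.readFileSync('public/js/app.3f9c2a.js'));
    } else {
      res.writeHead(404);
      res.end('not found');
    }
  }).listen(8080);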

 

#7 – Number of Connections

Every connection has a front-end cost because of the handshaking. SSL connections have a greater front-end cost than unencrypted connections. There is also a slight back-end cost.

Concurrency can yield performance improvements, but there’s always a limit to it. When it comes to connections and their associated overhead, additional concurrency carries additional cost. This tradeoff means that too much or too little concurrency can degrade performance. Finding the optimal level takes a lot of fiddling about, so many people avoid it.

Both the browser and the server limit the number of connections per browser/server combination. The lower of these two limits is the maximum number of concurrent connections allowed, which may be too low for some web pages. We can double the number of connections by serving resources from two separate domains.
This can be as simple as serving some from a subdomain, but a content delivery network (CDN) may be able to provide even better performance. Keep in mind, though, that this is useful only if increasing the number of connections will improve performance.

SPDY uses one connection. Its creators believed that a single connection, if used appropriately by the browser and the server, can yield better performance than multiple connections. So far they’ve been proven right, but not to the degree they expected. I’m sure this will be tweaked over the next few years, but the concept already delivers simplification coupled with better performance.
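Domain sharding is usually done in the server-side templates, but the mapping rule looks the same wherever it lives. A rough sketch with two hypothetical cookieless subdomains; the important detail is that a given path always maps to the same domain, so each resource is cached only once:

  // shard.js - split static resources across two domains to double the
  // per-domain connection limit. The domains and element id are made up.
  var SHARDS = ['https://static1.example.com', 'https://static2.example.com'];

  function shardUrl(path) {
    var hash = 0;
    for (var i = 0; i < path.length; i++) {
      hash = (hash * 31 + path.charCodeAt(i)) % SHARDS.length;
    }
    return SHARDS[hash] + path;               // same path, same shard, every time
  }

  document.getElementById('hero').src = shardUrl('/img/hero.png');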

 

#8 – Third Party Resources

Sometimes it’s faster to buy it than to develop it. This is even more so in today’s free-software world.

Third-party resources (scripts, databases, ads, widgets, etc.) can and usually do slow down our websites. This functionality-performance tradeoff needs to be managed. We need to decide how much performance degradation we are willing to accept in exchange for not doing the development ourselves. Some third parties don’t put forth much of an effort to optimize performance. Others do.

Several techniques for downloading third-party scripts have been proposed and refined in blog posts over the last few years, with varying degrees of success. Proposals often lead the major third-party providers to change their scripts, but not always.

Performance isn’t the only issue here. Third-party resources can also be a single point of failure (SPOF).
If the third party’s server isn’t available for any reason, how long will your user’s browser wait before timing out? Thirty seconds is typical. Do you think your users will accept a 30-second delay in reaching the point of interactivity? Worse yet, will the missing resource render your web page useless?

Should we completely eliminate third-party resources from our web pages? We may not need to go that far, but we do need to know the risks so we can design around them.
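The usual defence against a third-party SPOF is to load the script asynchronously after onLoad, so a slow or dead third-party server delays only the widget, never the page. A minimal sketch with a hypothetical widget URL:

  // third-party.js - load the widget without letting it block the page.
  window.addEventListener('load', function () {
    var script = document.createElement('script');
    script.src = 'https://widgets.example-partner.com/widget.js';
    script.async = true;
    script.onerror = function () {
      // The page keeps working; we just note that the widget never arrived.
      console.log('third-party widget unavailable');
    };
    document.body.appendChild(script);
  });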

 

#9 – Develop Good Programming Habits

Individual developers cannot afford the luxury of programming according to their own personal style. Many (may I say most?) do not want to change because they are comfortable with what they are used to. The vast majority of people (not just developers) think that way, but it’s ironic to see agents of change resisting change.

Unfortunately, the Internet and the World-Wide Web are in a constant state of flux. What performs well today may not perform well tomorrow. The best-of-the-best developers keep up with these changes and alter their personal styles accordingly. When they adopt a new style, performance is one of their considerations.

We’re not talking trivialities here. Performance doesn’t care whether we indent two spaces or three (because we minified it before transferring it to production). We’re talking about things that stop our web pages from loading quickly: releasing connections when we’re done with them, never relying on the garbage collector to fix our memory leaks, never using SELECT * in SQL, keeping as much as possible outside the loop, and putting as much time into unit testing as we do into writing code.
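As a small example of the “keep it outside the loop” habit, here is the same piece of work written wastefully and then written with the invariant lookup hoisted and the output written once (the element id and data are made up):

  // loop-habits.js - hoist invariant work out of the loop and write once.
  var rows = ['alpha', 'beta', 'gamma'];

  // Wasteful habit: re-query the DOM and rewrite the markup on every pass.
  // for (var i = 0; i < rows.length; i++) {
  //   document.getElementById('output').innerHTML += '<li>' + rows[i] + '</li>';
  // }

  // Better habit: one lookup, one string build, one write.
  var output = document.getElementById('output');
  var html = '';
  for (var i = 0, n = rows.length; i < n; i++) {
    html += '<li>' + rows[i] + '</li>';
  }
  output.innerHTML = html;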

 

#10 – Benchmarking & Monitoring

Once we have our websites up and running at a good clip, we can’t just sit back and relax. The job’s not done; it’s just started. Because the Internet, browsers, operating systems, and supporting software change continuously, our websites operate in a highly dynamic environment. Any change or combination of changes can slow some websites to a crawl.

The only way to deal with a dynamic environment is to keep a close watch on it. We can go out and hire a bunch of techies to continuously take measurements, compare them to previous measurements, compare them to service level agreements, and notify us if something bad happens, but the cost is prohibitive. Surely there’s some way to get a computer to do all this grunt work, isn’t there?
Well, I’m glad you asked. As a matter of fact, there is a way to monitor key measurements and notify us of significant changes. The friendly people at Paid Monitoring say that their system can monitor absolutely anything, and I believe them. The best part is they have a robust, entry-level monitoring system that’s free (I just love free stuff). Yes, I work for them, but their system can prove itself easily. Just sign up and give it a shot. Did I mention that it’s free?

So what should we monitor? The first (and most critical) measurement is page load time.
This is the time from the request being issued to the time the web page is fully loaded and interactive. This metric should be less than two seconds for every page on the website. Why two seconds? Forrester, Akamai, Walmart, Microsoft, Google, and others have all presented research results that show users give up and leave after two seconds. And we don’t want our users leaving, now do we?

Beyond page load time, each component discussed in this series should be monitored. If a component is slow but is not a bottleneck that slows down the page load, minor changes aren’t critical… today. However, changes should be watched and analyzed. They may be an indicator of something just around the corner. It’s nice to have advance warning about problems, but not so nice if we’re not listening to those warnings.
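Page load time as defined above can be measured from the user’s side with the browser’s Navigation Timing data. A minimal sketch that computes it and reports it to a hypothetical collection URL (a monitoring service would normally do this part for us):

  // page-load-time.js - measure the click-to-interactive metric in the browser.
  window.addEventListener('load', function () {
    // Wait one tick so loadEventEnd has been filled in.
    setTimeout(function () {
      var t = window.performance && performance.timing;
      if (!t) { return; }                     // older browsers: no timing data
      var pageLoadMs = t.loadEventEnd - t.navigationStart;

      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/beacon/page-load');  // hypothetical collection endpoint
      xhr.send(String(pageLoadMs));
    }, 0);
  });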

 

Conclusion

It’s been a joy writing this series. I can only hope that it was as good for you as it was for me. Perhaps I’ve given you some helpful tips or explained something that used to be a bit of a mystery. Whatever value you received, I’m happy to have played a part.

by Warren Gaebel, B.A., B.C.S.

About Warren Gaebel

Warren wrote his first computer program in 1970 (yes, it was Fortran).  He earned his Bachelor of Arts degree from the University of Waterloo and his Bachelor of Computer Science degree at the University of Windsor.  After a few years at IBM, he worked on a Master of Mathematics (Computer Science) degree at the University of Waterloo.  He decided to stay home to take care of his newborn son rather than complete that degree.  That decision cost him his career, but he would gladly make the same decision again. Warren is now retired, but he finds it hard to do nothing, so he writes web performance articles for the Monitor.Us blog.  Life is good!