

The Chronology of a Click, Part XI

This series has been following the progress of a request and its response. The request was initiated by an end-user’s click on a link, which signified his request for some web page to be downloaded and presented to him. So far in this series, we’ve seen the back end (i.e., the server) fill the request and send the response back to the client. Today’s episode deals with the client side.

In his September 2007 book, High Performance Web Sites, Steve Souders told us that only 20% of the user’s waiting time is due to the back end. The remainder, a whopping 80%, is due to client-side processing. This part and the next deal with that notorious client-side processing: this part covers the client-side script; the next will cover the rendering.

In his February 2012 follow-up, Steve ran some tests to see if his 80/20 measurement, which he calls The Performance Golden Rule, still applies. It does.

These published results are not overly relevant to us because we are concerned with our website, not an average taken from some preselected set of websites. However, Steve does give us a simple method to approximate back-end time. Simply measure the time to first byte.

Several authors have suggested that optimization efforts should focus on front-end performance because that’s where most of the time is spent. While I expect they are right almost all the time, there are cases where the back end should not be ignored. Remember, the front end handles one user’s request at a time, but the back end may be handling hundreds or thousands of requests from other users in that same time, so a back-end inefficiency is multiplied across all of them.

Performance Consideration: 
Forget the ratio. Use Steve’s approximation (time to first byte) to measure the total time spent on the back end and the total time spent on the front end. If either one of these exceeds your “threshold of acceptability,” you now know whether to start on your front end or your back end.
What? You don’t have a “threshold of acceptability”? Do you mean you are willing to accept whatever performance you happen to achieve? Do you mean that you have no definition of “poor performance” for your webapp? Hmmm. What are we going to do, wait until end-users start to complain? Once someone complains, doesn’t that mean we’ve already degraded customer satisfaction and sent some of our customers to our competitors?

Here’s a suggestion for a starting point: Set a limit of 500 milliseconds for the back end and 1,500 milliseconds for the front end. If you don’t like my numbers, that’s okay, but make sure you define your own. Then go to work on anything that takes longer, whether it’s back end or front end.
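
If you want a quick way to collect those two numbers in the field, here is a minimal sketch that uses the Navigation Timing API (window.performance.timing), assuming a browser that supports it; the 500 and 1,500 millisecond limits are just the starting-point thresholds suggested above.

  // Run after the load event so loadEventEnd has been filled in.
  window.addEventListener("load", function () {
    setTimeout(function () {
      var t = window.performance && window.performance.timing;
      if (!t) { return; }                                    // API not available; nothing to measure
      var backEnd  = t.responseStart - t.navigationStart;    // Steve's approximation: time to first byte
      var frontEnd = t.loadEventEnd  - t.responseStart;      // everything after the first byte arrives
      if (backEnd > 500)   { console.log("Back end over threshold: "  + backEnd  + " ms"); }
      if (frontEnd > 1500) { console.log("Front end over threshold: " + frontEnd + " ms"); }
    }, 0);
  }, false);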

What is Front-End Processing?

For the most part, front-end processing is a client-side script (usually JavaScript) that sends its output to the browser’s rendering engine, which builds the DOM tree and makes the web page visible and interactive. This part of the series is about JavaScript. Rendering (CSS and the DOM) is discussed in the next part.

JavaScript

JavaScript is an implementation of ECMAScript. It contains the programming constructs one would expect of a general-purpose programming language, but it has its own, prototype-based take on “object-orientation.”
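
For readers who have not met that take before, here is a small sketch of JavaScript’s prototype-based flavour of object-orientation; the Animal and Dog names are made up for the example.

  // A constructor function plays the role that a class plays in other languages.
  function Animal(name) {
    this.name = name;
  }
  Animal.prototype.speak = function () {
    return this.name + " makes a sound.";
  };

  // "Inheritance" comes from chaining prototypes, not from declaring a subclass.
  function Dog(name) {
    Animal.call(this, name);            // reuse the parent constructor
  }
  Dog.prototype = Object.create(Animal.prototype);
  Dog.prototype.constructor = Dog;
  Dog.prototype.speak = function () {   // override the inherited method
    return this.name + " barks.";
  };

  var rex = new Dog("Rex");
  console.log(rex.speak());             // "Rex barks."
  console.log(rex instanceof Animal);   // true, thanks to the prototype chain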

Performance Consideration: 
Here are some performance tips that are common to many programming languages:
  • Use a profiler, a code-checking tool, and a third party to evaluate the code before transferring it to production.
  • Eliminate all memory leaks. Objects, properties, connections, processes, etc. should be deleted when you’re done with them. Never rely on a garbage collector to do this. Add more memory if you must, but first locate and fix the bad code that caused the problem.
  • Inline function calls as much as possible, but watch out for the time-space tradeoff. If you inline everything, the code bloat may be unbearable.
  • Everything that can be done outside the loop should be done outside the loop.
  • Avoid heavy nesting.
  • Comply with all standards.
  • Compiled code executes faster than interpreted code. [JavaScript is interpreted, but how can you avoid it? See below.]
  • When dealing with interpreted code (like JavaScript), run it through a minifier to remove comments and extraneous white space.
  • Use local variables instead of global variables.
  • Use memoization to cache the results of time-consuming processes.
  • Use primitive operators instead of function calls.
  • Whenever possible, unroll the loop. Example: Use j++;j++;j++;j++;j++; instead of i=5; while(i--) j++;. [Of course, in this case, j+=5; would be much better.]
  • Use x=a.length before the loop and i<x as the loop condition rather than using i<a.length as the loop condition (see the sketch after this list).
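
To make two of those tips concrete, here is a short sketch showing memoization and the cached-length loop; slowSquare and the sample array are made up for the example.

  // Memoization: cache the result of an expensive function, keyed by its argument.
  var cache = {};
  function slowSquare(n) {                // stand-in for a genuinely expensive computation
    if (cache.hasOwnProperty(n)) {
      return cache[n];                    // later calls become a cheap lookup
    }
    var result = n * n;
    cache[n] = result;
    return result;
  }

  // Cached-length loop: read a.length once instead of on every iteration.
  var a = [3, 1, 4, 1, 5, 9];
  var total = 0;
  for (var i = 0, x = a.length; i < x; i++) {
    total += slowSquare(a[i]);
  }
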
Performance Consideration: 
And here are some JavaScript-specific tips that may or may not apply to other languages:
  • Do not use eval. If you must use eval, pass it a function instead of a string.
  • Use a for loop instead of a for…in loop whenever possible.
  • Avoid with.
  • Pass a function instead of a string to setInterval() or setTimeout().
  • Avoid anything not included in ECMA-262, 5th Edition strict mode.
  • Whenever possible, avoid new. Use "xxx" instead of new String("xxx"). Use {} instead of new Object. Use [] instead of new Array. [try it]
  • Don’t use global variables. If you must use one extensively, assign it to a local variable and use the local variable instead (see the sketch after this list).
  • Use var. Example: Use var i = 0; instead of i = 0;.
  • The old tip of using x += "a"; x += "b"; instead of x += "a" + "b"; (to avoid creating a temporary string) used to be true, but with recent browsers the opposite is now true. [See this jsperf test.]
  • String literals are often faster than string objects. Measure your code’s performance both ways, then choose the fastest. [example]
  • Optimize the order of conditional expressions to take advantage of short-circuit evaluation. Example: Use a && b instead of b && a if a is likely to be false more often than b.
  • Minify. Long identifiers (the name, not the value) require more processing time, so shorten them to one or two characters. Remove comments and extraneous white space. Minification should be done during the build process so we can keep the original source code for the developers and serve the minified version to the end users.
  • Use === instead of ==.
  • Keep try…catch out of performance critical code.
  • Do not create setters/getters that do nothing more than set/get a value.
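
Here is a short sketch that pulls a few of those tips together (literals instead of constructors, a local alias for a heavily used global, and ===); the globalConfig name is made up for the example.

  // Prefer literals over constructors.
  var empty = {};                      // not: new Object()
  var list  = [];                      // not: new Array()
  var greet = "hello";                 // not: new String("hello")

  // If a global must be used heavily, copy it into a local variable first.
  var globalConfig = { retries: 3 };   // imagine this lives far away, at global scope
  function totalRetries(items) {
    var cfg = globalConfig;            // one global lookup instead of one per iteration
    var total = 0;
    for (var i = 0, len = items.length; i < len; i++) {
      total += cfg.retries;
    }
    return total;
  }

  // Use === so no type coercion takes place.
  console.log(0 === "0");              // false: different types, nothing coerced
  console.log(0 == "0");               // true:  the coercion costs time and invites surprises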

JavaScript is an interpreted language rather than a compiled one.

Performance Consideration: 
Compiled JavaScript would be much better, but we’re stuck with what we’ve got. Some browsers partially compile to an opcode, then cache the opcode. This may help somewhat, but this decision is made by the browser manufacturer, not the developer. Make sure the browser is configured properly to make full use of this feature if it is available.
JavaScript can output HTML to the browser’s rendering engine with the document.write function call or by including raw HTML outside the <script> … </script> sections. Either way, the HTML is sent to the browser’s rendering engine, which adds it to the DOM tree.

JavaScript can also access the DOM tree directly. It can add, modify, and delete nodes to its heart’s content. It can even make decisions based on the contents of the DOM tree.

Performance Consideration: 
Stay away from the DOM tree! Output HTML directly and let the rendering engine take care of it. [Note: Some sources say the opposite, but the author’s testing showed that appending to the DOM tree is slower than document.write 86% of the time and slower than raw HTML (outside the <script> block) 93% of the time.]
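
Here is an illustrative sketch of the three output routes discussed above; the markup is made up for the example, and your own measurements should decide which route your page uses.

  <!-- Raw HTML outside the script block goes straight to the rendering engine. -->
  <p>This paragraph is plain HTML.</p>

  <script>
    // document.write also hands HTML to the rendering engine
    // (only safe while the page is still being parsed).
    document.write("<p>This paragraph came from document.write.</p>");

    // Direct DOM manipulation: flexible, but slower in the author's tests.
    var p = document.createElement("p");
    p.appendChild(document.createTextNode("This paragraph was appended to the DOM."));
    document.body.appendChild(p);
  </script>
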
Sending output to the document is where JavaScript leaves off and rendering takes over. CSS and the DOM are discussed in the next part of this series, so keep your eye on the Monitor.Us blog.

JavaScript code is executed serially when it is received. During execution it blocks rendering and some downloading. However, JavaScript can use XMLHttpRequest to download components and data from the server during execution. This can be done serially or concurrently.

Performance Consideration: 
Concurrent processing can offer performance improvements over serial processing, but JavaScript is a serial language.
A popular technique is to postpone blocks of code until after onLoad. Although this technique is not true concurrency (it is merely postponement of serially-executed code), it is still useful. It helps create a perception of high performance because onLoad is the point at which the user can see and interact with the web page. Since performance is defined in terms of how long the user waits for something, and he is now reading the page rather than twiddling his thumbs, code that executes after onLoad does not count as poor performance (unless the user cannot continue reading because something is not yet visible). Do you remember the old saying, “perception is reality?”
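
A minimal sketch of that postponement technique follows; buildRecommendations is a made-up name for some non-essential work.

  // Anything the user does not need immediately can wait until after onLoad.
  function buildRecommendations() {
    // ...expensive but non-essential work goes here...
  }

  window.onload = function () {
    // The page is already visible and usable, so the user perceives it as "done."
    setTimeout(buildRecommendations, 0);   // yield first so the load event can finish quickly
  };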

Performance Consideration: 
JavaScript’s use of XMLHttpRequest to dynamically request components is true concurrency because two things happen at the same time. The request travels to the server, the server processes the request, and the response travels back to the client all without delaying JavaScript execution. The JavaScript process can access the returned data when it is available, then treat it as data, HTML, JavaScript, images, stylesheets, or anything else. The important point is that JavaScript can continue executing while XMLHttpRequest handles the download concurrently in the background.
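
A minimal sketch of that pattern follows; the URL and the handlePrices and buildPageSkeleton functions are made up for the example.

  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/data/prices.json", true);    // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      handlePrices(xhr.responseText);            // runs later, whenever the response arrives
    }
  };
  xhr.send(null);

  // Execution continues immediately; the download proceeds in the background.
  buildPageSkeleton();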

Try this as a skeleton for JavaScript programs. Let me know how it works for you.

  issue XMLHttpRequests for the needed-now JavaScripts
  issue XMLHttpRequests for the images
  issue XMLHttpRequests for the dynamic data
  issue XMLHttpRequests for the needed-later JavaScripts
  while waiting for those downloads:
    download the style sheets
    build the web page
  when onLoad is triggered:
    wait for, then handle, the downloads as they arrive

Note 1: The needed-now JavaScripts are the ones that should be executed before the user starts interacting with the web page. The needed-later JavaScripts are the ones that will execute in response to a user-generated event.

Note 2: Use absolute positioning/sizing for all divs, iframes, tables, and any other rectangles that may affect layout.

Note 3: If the downloads must be handled in a specific order, the programmer must provide the logic programmatically. The downloads may arrive back at the client in any order.

Note 4: You might find a slightly different order better. If so, use that. The basic concept is to request as many concurrent downloads as possible, then build the skeleton of the web page, then deal with the downloads in the correct order (which may or may not be the order in which they arrive).
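
Under the assumptions spelled out in the notes above, here is one deliberately simplified JavaScript realization of that skeleton; every URL and handler name in it is made up for the example, and the images are left out for brevity.

  function fetchAsync(url, callback) {             // helper: one concurrent download
    var xhr = new XMLHttpRequest();
    xhr.open("GET", url, true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) { callback(xhr.responseText); }
    };
    xhr.send(null);
  }

  var pending = {};                                // responses are parked here until needed

  // Issue the concurrent requests first; they may come back in any order.
  fetchAsync("/js/needed-now.js",    function (r) { pending.neededNow   = r; });
  fetchAsync("/data/page-data.json", function (r) { pending.data        = r; });
  fetchAsync("/js/needed-later.js",  function (r) { pending.neededLater = r; });

  // While those downloads are in flight, the browser keeps parsing the HTML,
  // downloading the style sheets, and building the skeleton of the page.

  window.onload = function () {
    // Handle the downloads in the order the page needs them, not the order of arrival.
    var timer = setInterval(function () {
      if (pending.neededNow && pending.data && pending.neededLater) {
        clearInterval(timer);
        handleDownloads(pending);                  // hypothetical function that applies them
      }
    }, 50);
  };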

JavaScript provides everything imaginable, and then some. There are often multiple ways to do something, and some of those options are faster than others.

Performance Consideration: 
With great power comes great responsibility. If we make a habit of always choosing the option with the best performance, then all our code will be closer to optimal. And all it costs us is a little effort to foster certain habits.
Example: If a developer gets into the habit of declaring all variables with var at the top of the function in which they are used, he no longer has to stop and think about global vs. local scope. All variables will be properly scoped because of the habit that was consciously created.
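
A tiny sketch of that habit; the function and variable names are made up for the example.

  function totalPrice(items) {
    var i, len, total;                  // every variable declared up front, with var
    total = 0;
    for (i = 0, len = items.length; i < len; i++) {
      total += items[i].price;
    }
    return total;                       // nothing here accidentally became a global
  }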

However, be wary of the trickster. Some coding options seem to be the fastest, but they are only the fastest in certain situations. Knowing they’re fast isn’t good enough; we have to know in which situations they are the fastest.

Conclusion

JavaScript seems to provide everything for everyone, including a new way to think of inheritance. This flexibility gives the programmer options at every turn. However, different options have different performance ratings, so JavaScript programmers need a higher skill level because they have to wade through a myriad of options.

JavaScript is interpreted, not compiled. Compiling is faster than interpreting. Opcode caching gets us part way there, but not all the way.

JavaScript does not offer concurrent execution; it executes serially. Exception: XMLHttpRequest lets us concurrently download data, HTML, JavaScript, images, stylesheets, or any stream of bytes we want. The download happens while JavaScript continues executing.

For more JavaScript performance tips, read my previously published Website Performance: JavaScript (which also includes tips about rendering, CSS, and the DOM).

As for rendering… to be continued in Part XII.


About Warren Gaebel

Warren wrote his first computer program in 1970 (yes, it was Fortran).  He earned his Bachelor of Arts degree from the University of Waterloo and his Bachelor of Computer Science degree at the University of Windsor.  After a few years at IBM, he worked on a Master of Mathematics (Computer Science) degree at the University of Waterloo.  He decided to stay home to take care of his newborn son rather than complete that degree.  That decision cost him his career, but he would gladly make the same decision again. Warren is now retired, but he finds it hard to do nothing, so he writes web performance articles for the Monitor.Us blog.  Life is good!