

Website Performance: Client to Server Wrapup

Website Performance: Taxonomy of Tips introduced a classification scheme to help us organize the many performance tips found on the Internet.  Today’s article wraps up the discussion about the journey-from-the-client-to-the-server category by outlining a few tips that weren’t discussed in previous articles.

Minimize the Number of Connections

Establishing connections is relatively costly, so it is no surprise that performance is better with a smaller number of connections. One way to accomplish this is to combine multiple components into a single download. This eliminates all but one of those connections. Example: If we combine our play, pause, stop, forward, and backward images into one sprite, we can eliminate four of the five connections.

Scripts, images, and style sheets are great candidates for this tip: they can be combined to reduce the number of connections. However, keep in mind the needed-now, needed-soon, and maybe-needed-later categories described in Website Performance: Downloading JavaScript.
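As a rough illustration, the combining step can be automated at build time. The sketch below (plain Python, with hypothetical file names) concatenates several script files into a single bundle so the browser fetches them over one connection instead of many:

```python
from pathlib import Path

def combine_assets(paths, out_path):
    """Concatenate several text assets (e.g., .js or .css files) into one
    bundle so the browser needs only a single connection to fetch them."""
    parts = []
    for p in paths:
        source = Path(p).read_text()
        # A comment marker records where each original file begins,
        # which makes the bundle easier to debug later.
        parts.append(f"/* --- {Path(p).name} --- */\n{source}")
    Path(out_path).write_text("\n".join(parts))
```

Running this once per deployment (rather than per request) keeps the saving free at serve time. Minification could be layered on top, but that is a separate tip.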

Keep It Short & Simple

Perhaps the easiest way to eliminate connections is to eliminate components. This is an example of the age-old KISS principle, which says to keep it short and simple.

There is a trade-off between simplicity and functionality, though. The more functionality we offer on a page, the bigger and more complex that page will be. This most likely means an increase in the number of components. KISS reduces complexity, but it can also reduce functionality.

There will be some resistance to the idea of reducing functionality because increased functionality is believed to provide more benefit to the user. However, keep in mind that extra functionality may just as easily confuse and irritate the user. One option is to split a complex web page into two simpler ones. That approach may (or may not) be suitable for your application; going to the extreme in either direction is not recommended.

IMHO, designing with human factors in mind and applying usability testing seem to have fallen by the wayside over the last few years. I am suggesting that there may be ways of offering more meaningful interactions by reducing the complexity and clutter on our web pages.

Fit the Request Into a Single Packet

Ideally, the request sent from the client to the server should fit into a single IP packet. If it exceeds that limit by even one byte, a second packet is required, and every additional packet adds latency.

Packets on a typical Ethernet network carry at most 1,500 bytes (the MTU), and other things need to fit within that limit, too. Packet overhead and browser-added headers are not under the webapp’s control, but they occupy space in the packet nevertheless, and we cannot predict exactly how many bytes they will use. Our goal, then, is not to aim for 1,500 bytes, but to make our request as small as we can and hope that everything fits into one packet.

Since cookies and get/post data occupy packet space, our goal should be to minimize their size as much as we can. Consider using server-side state data (e.g., session variables, databases) instead of client-side state data.
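To get a feel for the numbers, here is a back-of-envelope sketch in Python. The 1,500-byte MTU and the 40-byte IP/TCP overhead figure are typical values rather than guarantees, and the header set is illustrative:

```python
# Rough sketch: estimate whether an HTTP GET request is likely to fit in a
# single packet. 1,500 bytes is the typical Ethernet MTU; 40 bytes is an
# assumed allowance for IPv4 (20) + TCP (20) headers without options.

TYPICAL_MTU = 1500
IP_TCP_OVERHEAD = 40

def request_size(path, host, headers=None, cookies=None):
    """Return the byte length of a minimal HTTP/1.1 GET request."""
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    if cookies:
        cookie_str = "; ".join(f"{k}={v}" for k, v in cookies.items())
        lines.append(f"Cookie: {cookie_str}")
    raw = "\r\n".join(lines) + "\r\n\r\n"   # blank line ends the headers
    return len(raw.encode("ascii"))

def fits_one_packet(size):
    """True if the request plus assumed packet overhead fits in one MTU."""
    return size + IP_TCP_OVERHEAD <= TYPICAL_MTU
```

A bare request is tiny; it is almost always cookies and long query strings that push it over the edge, which is why moving state to the server side helps.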

Avoid SSL

HTTPS (HTTP over SSL) requires handshaking and negotiation that plain HTTP does not, so HTTP performs better than HTTPS. If the page doesn’t need HTTPS’s encryption, use HTTP instead.

This applies to individual components, too. If a component doesn’t need encryption, don’t use encryption. [I wonder how many companies are encrypting their logos?] Keep in mind, though, that browsers warn about pages that mix HTTP and HTTPS content, so components embedded in a secure page generally need to stay on HTTPS.
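The handshake cost can be sketched with simple round-trip arithmetic. The counts below are simplified assumptions, not measured values: one round trip for the TCP handshake, roughly two extra round trips for a full (non-resumed) TLS handshake of that era, and one for the request/response itself.

```python
# Back-of-envelope sketch of why HTTPS costs more at connection setup.
# All round-trip counts are simplifying assumptions for illustration.

def connection_time_ms(rtt_ms, use_tls):
    """Estimate time to fetch one resource on a fresh connection."""
    tcp_handshake = 1                    # SYN / SYN-ACK
    tls_handshake = 2 if use_tls else 0  # assumed full TLS negotiation
    request_response = 1                 # the HTTP exchange itself
    return (tcp_handshake + tls_handshake + request_response) * rtt_ms
```

On a 100 ms round trip, the assumed TLS negotiation alone adds 200 ms before the first byte of the response arrives. (Session resumption and keep-alive connections reduce this cost on subsequent requests.)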

Avoid Redirects

Redirects occur when a page or component is specified by an outdated URL. The client sends a request to the server. The server sees that the resource has been moved. Rather than returning the resource from its new location, the server sends a response back to the client to tell it where the resource is now located. Having learned that the page or component has been moved, the client then starts the whole process all over again by issuing yet another request.

If this situation is caused by a URL written into one of our own web pages, we must take responsibility for the performance degradation that the extra request causes. We could have avoided the problem by correcting the URL in all our web pages at the time we moved the component to its new location. Clearly there is no excuse for the extra request.
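Fixing the URLs at the source can be as simple as a one-time sweep over our pages. The sketch below is illustrative (the moved-resource map and markup are hypothetical examples, not from any real site):

```python
# Minimal sketch: rewrite outdated URLs in page markup at the source,
# so the server never has to issue a redirect for them.

def fix_moved_urls(html, moved):
    """Replace every outdated URL in `html` with its new location.

    `moved` maps old URL -> new URL. A plain string replace is enough
    for this illustration; a production tool would parse the markup.
    """
    for old_url, new_url in moved.items():
        html = html.replace(old_url, new_url)
    return html
```

Run once over every page when a resource moves, and the redirect (and its extra round trip) never happens.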

We can’t stop users from typing an outdated URL into the browser’s location field, but we may be able to send them to the correct place with an internal rewrite instead of an external redirect: the server fixes the problem itself rather than telling the browser to do so, and the extra round trip is avoided.

Some websites make a point of publishing their public-access URLs and then never changing them. This gives users a consistent list of entry points into the system. The designers then make sure those URLs are always valid, so well-behaved users never suffer from redirects.

Add the Trailing Slash

It’s simple, and it’s been said often enough: when we specify a directory’s URL, we should always append the trailing slash. If we don’t, the server either has to fix the problem itself or ask the browser to fix it. The latter is much more costly than the former, but both are unnecessary. All we have to do is always add the trailing slash to a directory’s URL.
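A build-time check can normalize directory URLs before they ever reach a page. This sketch assumes the URL is already known to name a directory (it would be wrong to apply it to file URLs) and is an illustration, not a general-purpose rule:

```python
from urllib.parse import urlsplit, urlunsplit

def add_trailing_slash(url):
    """Ensure a directory URL's path ends with '/', so the server never
    has to fix it up or redirect. Assumes `url` names a directory.
    URLs that already end in '/' are returned unchanged, and any query
    string is kept intact."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    if not path:
        path = "/"          # e.g. http://example.com -> http://example.com/
    elif not path.endswith("/"):
        path += "/"
    return urlunsplit((scheme, netloc, path, query, fragment))
```

Running every directory link through a helper like this makes the redirect impossible rather than merely unlikely.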

References

Analysis of HTTP Performance Problems by Simon E. Spero. Republished 1995.06.02 by ibiblio at https://www.ibiblio.org/mdma-release/http-prob.html. Accessed 2012.01.25. This article is ancient by Internet standards, but it is included here because it helps us understand foundational concepts that are still relevant today.

Best Practices for Speeding Up Your Web Site by Yahoo’s Exceptional Performance Team. Published by Yahoo at developer.yahoo.com/performance/rules.html. Accessed 2012.01.13.

Diagnosing Slow Web Servers with Time to First Byte by Andy King. Published 2011.12.10 by Website Optimization, LLC at websiteoptimization.com/speed/tweak/time-to-first-byte. Accessed 2012.01.13.

High Performance Web Sites – 14 Rules for Faster-Loading Web Sites by Steve Souders. Published by Steve Souders at SteveSouders.com/hpws/rules.php. Accessed 2012.01.13.

Minimize Request Overhead. Published by Google at https://code.google.com/speed/page-speed/docs/request.html. Accessed 2012.01.21.

Minimize Round-Trip Times. Published by Google at code.google.com/speed/page-speed/docs/rtt.html. Accessed 2012.01.13.

Paid Monitor Free Page Load Testing Tool. Published by Paid Monitor at pageload.monitor.us. Accessed 2012.01.13.

Web Performance Best Practices. Published by Google at code.google.com/speed/page-speed/docs/rules_intro.html. Accessed 2012.01.13.

Website Performance: Taxonomy of Tips by Warren Gaebel. Published 2011.12.29 by Paid Monitor at blog.monitor.us/2011/12/website-performance-taxonomy-of-tips. Accessed 2012.01.13.


About Warren Gaebel

Warren wrote his first computer program in 1970 (yes, it was Fortran).  He earned his Bachelor of Arts degree from the University of Waterloo and his Bachelor of Computer Science degree at the University of Windsor.  After a few years at IBM, he worked on a Master of Mathematics (Computer Science) degree at the University of Waterloo.  He decided to stay home to take care of his newborn son rather than complete that degree.  That decision cost him his career, but he would gladly make the same decision again. Warren is now retired, but he finds it hard to do nothing, so he writes web performance articles for the Monitor.Us blog.  Life is good!