

The Chronology of a Click, Part VIII

We’ve reached a milestone in this series. When the user clicked on that link way back at the beginning, it was a request for a web page. Parts II through VII saw the request being transported to the server and the server finding or building the web page (the response). Now, after all that, the response is ready to be delivered back to the user.

When we left off in Part VII, the response was ready to go, but it was still in the server machine. Now it has to be delivered to the client and then transformed into visuals so the user can read it. This part details the trip down the server machine’s protocol stack.

Do you remember Part II? It told us that the request was packaged into TCP segments, the TCP segments were repackaged into IP datagrams, and the IP datagrams were repackaged into frames as the request worked its way down through the layers (application, presentation, session, transport, network, data link, and physical).

A quick reminder before we get started: no operating system implements the OSI layering model perfectly, so the following functionality may be combined, split, or reordered on your server. Those differences shouldn’t matter much to the discussion at hand, so this article presents the trip down the server’s protocol stack through the lens of the OSI layering model.

The Application Layer

The web-server software (i.e., the application) is ready to send the web page, but it needs to add some metadata first. This comes in the form of HTTP headers. The headers were discussed in Part II, so we won’t go into all that detail again. Instead, the next four sections will discuss the server-side, performance-boosting techniques that are implemented by the HTTP headers:

Persistent Connections

If connections are not persistent, they are closed after each resource is served. This means that a new connection has to be established, and the slow-start algorithm reinitiated, for every resource that is requested. This can be a major performance problem.

In HTTP 1.1, all connections are persistent until the client says otherwise. As described in Part II, the client sends a Connection: close header to indicate that the connection should be terminated after the requested resource is fully served. The server closes the connection after sending a Connection: close header in the last TCP segment sent to the client.

HTTP 1.0 (the old standard) did not allow persistent connections. The rest of the world reacted by using the non-standard Connection: keep-alive header to request a persistent connection. It was so widely used that it could be considered a de facto standard. If you are running an HTTP 1.0 server or supporting HTTP 1.0 clients, you may need to know this. However, almost everything is HTTP 1.1 nowadays.
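The header logic above can be sketched as a small helper. [The function name is ours, purely for illustration; a real server bakes this decision into its HTTP machinery.]

```python
def connection_header(http_version: str, keep_open: bool) -> str:
    """Pick the Connection header for a response.

    HTTP 1.1 connections are persistent by default, so a header is
    only needed when closing. HTTP 1.0 needs the de facto keep-alive
    header to request persistence.
    """
    if http_version == "1.1":
        return "" if keep_open else "Connection: close"
    # HTTP 1.0: the non-standard, de facto keep-alive extension
    return "Connection: keep-alive" if keep_open else ""
```

For example, `connection_header("1.1", keep_open=False)` yields `Connection: close`, which the server sends in the last TCP segment.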


Caching

The server tells the client and all the caching servers whether this response can be cached, and for how long. This information is conveyed in the HTTP headers.

Although caching may partially work without all of the following headers, it is recommended that they all be included in every response the server sends. This avoids the confusion and additional processing that go along with relying on defaults. [Not to mention those pesky caching proxies that don’t quite behave the way you’d expect.]


  • Cache-Control:  As its name implies, this is the main header for controlling caches. It specifies whether the response should be cached, for how long, and by whom (client only, or client and proxies).



  • Date:  the date/time at which the server generated the response (i.e., now).



  • Last-Modified:  the date/time the resource was last modified.



  • Expires:  For HTTP 1.0 (the old HTTP), this sets the date/time at which the resource becomes stale.



  • Content-Length:  the number of bytes in the response’s body.


Missing headers can interfere with caching.
Performance Consideration: 
Use Cache-Control: max-age (for HTTP 1.1) and Expires (for HTTP 1.0) to set the resource’s expiry date as far into the future as the application allows. In most cases, even dynamic resources can be cached for a few seconds or minutes. If using both headers, make sure they indicate the same expiry time.
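One way to keep the two headers in agreement is to derive both from the same clock. A minimal sketch (the helper name is ours, not a standard API):

```python
from email.utils import formatdate

def expiry_headers(max_age_seconds: int, now: float) -> dict:
    """Matching expiry headers for HTTP 1.1 and HTTP 1.0 caches.

    Cache-Control: max-age is relative; Expires is absolute.
    Computing both from the same timestamp keeps them consistent.
    """
    return {
        "Cache-Control": f"max-age={max_age_seconds}",               # HTTP 1.1
        "Expires": formatdate(now + max_age_seconds, usegmt=True),   # HTTP 1.0
    }
```

Passing the current time (e.g., `time.time()`) as `now` gives an Expires date exactly `max_age_seconds` in the future.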
Performance Consideration: 
Since our performance is affected by every machine involved in the delivery process, we should also be considerate of the caching proxies’ performance needs. Not specifying the Content-Length impacts their performance. If this header is missing, the caching proxy caches the response, then, if the response is greater than the caching proxy’s maximum cacheable size, it has to uncache what it just cached. If, however, Content-Length is included, the caching proxy can see right up front that the response should not be cached, so it doesn’t have to cache then uncache.
After a resource expires, it may or may not still be valid, so the caching proxy has to ask the server. If the cached resource is no longer valid, the server will send the new version back to the caching proxy. If the cached resource is still valid, the server will reply with a 304 status code, which is its way of saying it’s okay to use the cached entry. This ask/answer process is called revalidation.
Performance Consideration: 
Use Cache-Control: must-revalidate to tell caching proxies that they should revalidate stale entries. This lets them keep serving a cached entry (after revalidating it) until the resource is actually modified on the server.
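On the server side, the revalidation decision boils down to a timestamp comparison. A sketch, using epoch seconds in place of parsed header dates:

```python
def revalidate(last_modified: float, if_modified_since: float):
    """Answer a conditional request: 304 if the cached copy is still valid.

    last_modified: when the resource last changed on the server.
    if_modified_since: the timestamp the cache sent with its request.
    """
    if last_modified <= if_modified_since:
        return 304, None                    # keep using the cached entry
    return 200, "new version of the resource"  # cached entry is out of date
```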


Cookies

Part II told us that clients transmit all the cookies that pertain to the resource’s directory, its ancestor directories, and its ancestor domains, whether they are needed or not. If we make excessive use of cookies, especially large cookies, we can expect a performance hit.
Performance Consideration: 
The easiest way to avoid cookies is to serve resources from an IP address rather than from a domain name. Note that this also eliminates DNS lookups on the client side. [This probably won’t work with hosting service providers (HSPs) if the IP address is shared with other domains. There are workarounds, but most HSPs probably won’t go for them.]
Performance Consideration: 
Avoid cookies. Use local storage and session storage instead. If you absolutely must use cookies, serve the cookie-monster resources from one domain and the cookieless resources from a different, cookieless domain. Never set a cookie on that second domain!
If the application is using cookies, the server can use the Set-Cookie header to create, modify, or delete them. [JavaScript can do this, too.]
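Python’s standard library can render a Set-Cookie header, which is handy for seeing what actually goes over the wire (the cookie name and value below are made up):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 3600   # deleting = setting max-age to 0

# The exact line the server would emit in its response headers:
header = cookie.output(header="Set-Cookie:")
print(header)
```

Every byte of this header travels back to the server with each matching request, which is why the performance considerations above urge restraint.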


SPDY

Look out, HTTP. There’s a new kid on the block. His name is SPDY (pronounced “speedy”). And look what he can do:

Multiplexing:  When HTTP receives a response, it sends the next request. Multiplexing is permitted on a very limited scale with images and a few other resources, but full-on, total, unrestrained multiplexing just isn’t in the cards. SPDY sends all requests at the beginning, then receives one response after another over a single connection. The difference here is that time-consuming requests don’t make the others wait.

Prioritization:  SPDY lets the client prioritize the requests so the server can work on them in the “correct” order.

Compression:  HTTP can compress response bodies, but not the headers. SPDY compresses everything that can be compressed, including the headers.

Encryption:  HTTP can encrypt the body upon request, but not the headers. SPDY encrypts both body and headers by default. Because HTTP uses multiple connections, the expensive SSL/TLS handshaking must be repeated once for each connection. Because SPDY uses one connection, the handshaking happens just once.

HTTP Optimization:  HTTP transmits whatever HTML the developer specified. SPDY parses and optimizes it first.

Pushing & Hinting:  When the server gets certain requests, it knows what requests will follow. With SPDY, the server can send some responses without waiting for the request to arrive. Before they ask, he answers.

The server can also tell the client what resources to download, then wait for the client to request them. This process is called hinting.
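The multiplexing-plus-prioritization idea can be simulated with a priority queue: all requests arrive up front over one connection, and the server works through them in the client’s preferred order. [The URLs and priority numbers below are invented for illustration.]

```python
import heapq

# (priority, url): lower number = more urgent. With SPDY-style
# multiplexing, all of these are sent at once over a single
# connection instead of one request per round trip.
requests = [(2, "/big-report"), (0, "/page.html"), (1, "/style.css")]

heapq.heapify(requests)
served = [heapq.heappop(requests)[1] for _ in range(3)]
print(served)  # most urgent resources come back first
```

The slow `/big-report` no longer blocks the page’s HTML and stylesheet, which is exactly the head-of-line problem SPDY set out to fix.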

Wow! Gimme Some of That:  Stop drooling. Life with SPDY is not all wine and roses:

  • It’s still in its infancy. Like all new technologies, there will be a breaking-in period. It’s nice to be on the leading edge, but not the “bleeding edge.”
  • Both the browser and the server must have SPDY installed. Currently, at least two popular browsers have it installed by default, which ain’t too bad. Unfortunately, very few servers have it installed. If your webapp uses a hosting service, you can’t use SPDY until they get around to installing it.
  • Actual measurements show some performance improvement, but not to the degree that was expected. [It has been suggested that some of those web performance tips we’ve dutifully implemented on our websites may be interfering with SPDY’s efforts.]
  • According to Poul-Henning Kamp, security and the impact on routers may need further work.

Nevertheless, SPDY is well worth keeping an eye on. It may well be the direction we’re headed.

The Presentation Layer

The presentation layer compresses and encrypts. These are described in the following two sections.


Compression

The client tells the server which forms of compression it understands by including a list of compression methods in the Accept-Encoding header. The server uses the Content-Encoding header to tell the client which form of compression it actually used.

The Gnu zip (gzip, RFC 1952) and zlib/deflate (deflate, RFC 1950) compression formats are the most widely used, but others are also available.
Performance Consideration: 
Transmitting fewer bytes across a network, an intranet, or the Internet improves performance. Even considering the extra cost in processing time on both ends, compression is always a good idea (assuming a reasonable compression ratio, of course).
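The trade-off is easy to see with Python’s standard gzip module; repetitive HTML shrinks dramatically (the sample body is contrived for the demonstration):

```python
import gzip

body = b"<html>" + b"<p>hello world</p>" * 200 + b"</html>"
compressed = gzip.compress(body)   # what travels with Content-Encoding: gzip

# The client inflates on arrival; the wire carried far fewer bytes.
assert gzip.decompress(compressed) == body
print(f"{len(body)} bytes down to {len(compressed)}")
```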


Encryption

HTTPS encryption, SSL/TLS encryption, and SPDY encryption are three different things. SSL/TLS and SPDY encrypt both body and headers. Even though HTTPS is often described as HTTP + SSL/TLS, there is one important difference: HTTPS encryption encrypts the body, but not the headers.

The client tells the server what encryption algorithms it understands, and the server selects and uses one of those algorithms. For HTTPS encryption, existing browsers have built-in lists (perhaps configurable, perhaps extensible) of the encryption algorithms they understand. On the server side, Apache can be configured for encryption.
Performance Consideration: 
If headers are encrypted, proxy caching is a thing of the past. The in-between machines can’t see the headers (e.g., GET and Cache-Control) so they can’t cache the result. This problem is easily solved, so I expect SPDY will come out with a solution soon – perhaps they already have.
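The client side of that negotiation can be glimpsed with Python’s ssl module: the context’s cipher list is what gets offered, and the server picks one during the handshake.

```python
import ssl

ctx = ssl.create_default_context()   # a client-side TLS context
# These are the cipher suites the client would offer the server.
offered = [cipher["name"] for cipher in ctx.get_ciphers()]
print(f"{len(offered)} cipher suites offered to the server")
```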

The Session Layer

The session layer establishes, manages, and terminates connections. In the present case, the connection was initiated by the client’s session layer, but the server’s session layer cooperated in that process.

Connections are persistent or non-persistent, as specified in the HTTP headers (see above). They are closed at the client’s request. The actual connection termination (or non-termination) is handled here in the session layer.

The server’s session layer also imposes a maximum on the number of connections to/from every client.

The domain name system does not come into play here because the connection was established by the client. That initial request included the IP address, so there’s no need to look it up again.

The Transport Layer

The Network Layer

The Data Link Layer

The Physical Layer


The trip down the protocol stack is very similar to what was described in Part II, except:

  • The client initiates the connection. The server closes it.
  • The client transmits existing cookies. The server transmits new and modified cookies.
  • The client tells what compression algorithms it understands. The server selects and uses one of them. The server compresses. The client uncompresses.
  • The client tells what encryption algorithms it understands. The server selects and uses one of them. The server encrypts. The client decrypts.

And then there’s SPDY, which seems poised to make a major difference to much of the above.



About Warren Gaebel

Warren wrote his first computer program in 1970 (yes, it was Fortran).  He earned his Bachelor of Arts degree from the University of Waterloo and his Bachelor of Computer Science degree at the University of Windsor.  After a few years at IBM, he worked on a Master of Mathematics (Computer Science) degree at the University of Waterloo.  He decided to stay home to take care of his newborn son rather than complete that degree.  That decision cost him his career, but he would gladly make the same decision again. Warren is now retired, but he finds it hard to do nothing, so he writes web performance articles for the Monitor.Us blog.  Life is good!