
The Chronology of a Click, Part IV

Welcome to Part IV of The Chronology of a Click. This series chronologically details what happens after a user clicks on a link in a web page. It is interspersed with website performance tips that relate to the topic being discussed, and it is presented in multiple parts.

The following looks at the activity from the OSI model’s viewpoint because that model best illustrates the chronological sequence described in this article. Note that your operating system probably does not follow the OSI model exactly. This may result in minor reordering, combining, or splitting of activities, but not enough to make a huge difference to the discussion.

Before we get into the details, though, here are a few performance considerations that apply to the entire process.

Performance Consideration: As with every machine involved in this process, the machine that hosts the web server must be configured to take advantage of whatever resources it has. How to configure the protocol stack varies by operating system and a full discussion is beyond the scope of this article. The best advice I can offer here is to assume that the default configuration is not suitable.
Performance Consideration: If the entire HTTP request fits into one frame, it goes through the Internet and up the server’s protocol stack once. If the request requires multiple frames, it crosses the Internet multiple times; if it requires multiple datagrams or multiple TCP segments, it goes up the server’s protocol stack multiple times. Here are some tips to help keep HTTP requests small enough for a single frame (a rough size check is sketched after the list):
  • Keep domain names, paths, and file names short.
  • Minimize the amount of GET and POST data.
  • If possible, use server-side session data instead of GET or POST data.
  • Use server-side session data for server-side processing and client-side session data for client-side processing.
  • Avoid cookies. Use client-side local storage instead.
  • Include only those header lines that really matter.
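
To make the single-frame budget concrete, here is a rough, back-of-the-envelope sketch in Python. It assumes the common 1,500-byte Ethernet MTU and roughly 40 bytes of IP and TCP headers; the host name and path are made-up placeholders.

```python
# A lean GET request with a short path and only the headers that matter.
request = (
    "GET /p/42 HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Accept: text/html\r\n"
    "Accept-Encoding: gzip\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
)

payload_budget = 1500 - 40             # typical MTU minus basic IP + TCP headers
size = len(request.encode("ascii"))
print(f"{size} bytes used, {payload_budget - size} bytes to spare in one frame")
```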

As you would expect, these tips will help in other areas, too, not just in the time spent negotiating the server’s protocol stack.

Performance Consideration: Measure, benchmark, and monitor. This age-old mantra is as valid today as it was in the beginning. The most important measurement is overall page load time, which includes all the different steps described in this article. However, monitoring the individual parts (like the individual parts of this article) can give advance warning of trouble that’s brewing and can help locate the source of a performance problem. Would it come as any surprise if I were to suggest the free monitoring service available at Monitor.Us? You were probably expecting that, weren’t you?
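
As a starting point, here is a minimal timing sketch, not a monitoring solution. It times a single document fetch with Python’s standard library, which is only one piece of overall page load time (a browser would also fetch images, scripts, and stylesheets); the URL is a placeholder.

```python
# Time one full request/response round trip for a single document.
import time
import urllib.request

start = time.perf_counter()
with urllib.request.urlopen("https://example.com/") as response:
    response.read()                          # read the whole body before stopping the clock
elapsed = time.perf_counter() - start
print(f"document fetched in {elapsed * 1000:.0f} ms")
```
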
Performance Consideration: Monitor available memory, CPU utilization, packet loss ratio, free space on the disk, and the number of unused connections. There shouldn’t be any paging or swapping, so use a monitor to watch for it.
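
One possible way to keep an eye on those numbers is sketched below. It assumes the third-party psutil library (an assumption on my part; any system-metrics tool would do). Rising swap-in/swap-out counters mean the machine is paging, which a healthy web server should not be doing.

```python
# Snapshot the basic health metrics named above using psutil.
import psutil

memory = psutil.virtual_memory()
swap = psutil.swap_memory()
disk = psutil.disk_usage("/")

print(f"available memory: {memory.available / 2**20:.0f} MiB")
print(f"CPU utilization:  {psutil.cpu_percent(interval=1.0)}%")
print(f"free disk space:  {disk.free / 2**30:.1f} GiB")
# Cumulative paging activity since boot; a monitor would watch these for growth.
print(f"bytes swapped in/out since boot: {swap.sin} / {swap.sout}")
```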

The Physical Layer

Performance Consideration: Add more memory, disk space, and processors. You can never have enough. [Well, that’s not quite true, but you get the idea.]

The Data Link Layer

When bits and bytes start arriving at the physical layer, the data link layer jumps into action. It collects the incoming bits and bytes and reassembles them into frames. It checks for a few low-level errors and asks its neighbour to retransmit anything that seems to be corrupted. The data link layer also extracts the IP datagrams from the frames and passes them up to the network layer.
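
As an illustration of that low-level error checking: Ethernet hardware computes a CRC-32 checksum over each frame and discards frames whose checksum doesn’t match. The sketch below uses Python’s zlib.crc32 (which uses the same polynomial) as a stand-in, with made-up payloads.

```python
# Illustrative only: verify a frame's check sequence before passing it up.
import zlib

datagram = b"...IP datagram bytes..."
fcs = zlib.crc32(datagram)               # frame check sequence computed by the sender

# Simulate a single corrupted byte somewhere along the wire.
corrupted = b"...IP datagram bytez..."

for payload in (datagram, corrupted):
    if zlib.crc32(payload) == fcs:
        print("frame intact -- extract the IP datagram and pass it up")
    else:
        print("frame corrupted -- drop it and ask for a retransmit")
```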

The Network Layer

Upon receipt of the IP datagram, the network layer checks the destination IP address. In Part III, the network layer of each intermediate machine found that the destination IP address was different from its own, so it sent the datagram on to one of its neighbouring machines. In the present case, though, the network layer sees that the destination IP address matches this machine’s own IP address, so it does not route the datagram to a neighbour. Instead, it extracts the TCP segment from the IP datagram and passes it up the protocol stack to the transport layer.
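
A toy sketch of that delivery decision, with made-up addresses: if the destination matches one of this host’s own addresses, the payload goes up the stack; otherwise the datagram is forwarded toward a neighbour.

```python
# Decide whether to deliver a datagram locally or forward it onward.
from ipaddress import ip_address

LOCAL_ADDRESSES = {ip_address("203.0.113.10"), ip_address("127.0.0.1")}  # placeholder addresses

def handle_datagram(destination: str, payload: bytes) -> str:
    if ip_address(destination) in LOCAL_ADDRESSES:
        # Destination is this machine: hand the TCP segment up to the transport layer.
        return "deliver locally"
    # Destination is some other machine: pick a next hop and send it on.
    return "forward to next hop"

print(handle_datagram("203.0.113.10", b"...TCP segment..."))   # deliver locally
print(handle_datagram("198.51.100.7", b"...TCP segment..."))   # forward to next hop
```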

The Transport Layer

The transport layer is a busy little boy. It first lets the client know that the TCP segment has been received. However, if the segment was corrupted en route (remember the checksum?), it instead asks the client to retransmit the segment and aborts further processing. In either case, the reply goes down the protocol stack, through the Internet, and up the client machine’s protocol stack to its transport layer.

If the TCP segment was not corrupted, the transport layer checks whether it was previously received. If it was, the new copy is discarded and further processing is aborted.

Since the transport layer is responsible for in-order delivery of TCP segments, it now looks at the sequence number. [Part II described the client machine’s transport layer putting the sequence number into the TCP segment.] If this is the next sequence number in order, the transport layer unpackages the HTTP request data and passes it up to the session layer. However, if this segment has arrived ahead of some of the segments that precede it, the transport layer hangs onto it while waiting for the missing segments. If this TCP segment is one that other segments were waiting for, the transport layer unpackages them all (in the right order, of course) and sends them up to the session layer.
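
The bookkeeping described above can be boiled down to a much-simplified sketch: drop duplicates, hold early arrivals, and release data only when everything before it has arrived. Real TCP tracks byte offsets, windows, and acknowledgements; plain integer sequence numbers are used here just to keep the idea visible.

```python
# A toy receive-side reassembly buffer: in-order delivery, no duplicates.
class ReorderingReceiver:
    def __init__(self):
        self.next_expected = 0      # next in-order sequence number
        self.out_of_order = {}      # segments that arrived early, keyed by sequence number

    def receive(self, seq: int, data: bytes) -> list[bytes]:
        """Return whatever data can now be delivered in order (may be empty)."""
        if seq < self.next_expected or seq in self.out_of_order:
            return []               # duplicate: acknowledge it, but discard the copy
        self.out_of_order[seq] = data
        deliverable = []
        # Release every segment that is now contiguous with what was already delivered.
        while self.next_expected in self.out_of_order:
            deliverable.append(self.out_of_order.pop(self.next_expected))
            self.next_expected += 1
        return deliverable

rx = ReorderingReceiver()
print(rx.receive(1, b"second part"))   # [] -- held back, waiting for segment 0
print(rx.receive(0, b"first part"))    # [b'first part', b'second part']
```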

Performance Consideration: It becomes apparent that lost TCP segments (often called lost packets) slow everything down. Eventually, the client’s transport layer will notice that a segment wasn’t acknowledged and will resend it, but in the meantime the server’s transport layer is holding back all the other segments that are waiting for the missing one.
Monitoring the lost-packet ratio is therefore important. If it suddenly spikes, something is going wrong. Your users may not notice anything more than a slight drop in performance, so you have an opportunity to fix the problem before it garners attention. One rough way to watch this ratio is sketched below.
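
This Linux-only sketch approximates the ratio by reading the kernel’s cumulative TCP counters from /proc/net/snmp. A real monitor would sample these periodically and alert on the change between samples; this just prints the lifetime ratio.

```python
# Estimate the fraction of sent TCP segments that were retransmissions (Linux).
def tcp_retransmission_ratio() -> float:
    with open("/proc/net/snmp") as f:
        tcp_lines = [line.split() for line in f if line.startswith("Tcp:")]
    names, values = tcp_lines[0][1:], [int(v) for v in tcp_lines[1][1:]]
    stats = dict(zip(names, values))
    return stats["RetransSegs"] / stats["OutSegs"] if stats["OutSegs"] else 0.0

if __name__ == "__main__":
    print(f"retransmitted segments since boot: {tcp_retransmission_ratio():.2%}")
```
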
The net result of the transport layer’s efforts: the HTTP request is passed to the session layer in the correct order, with no duplicates, nothing missing, and nothing corrupted.

The Session Layer

The session layer opens, closes, and manages connections. However, it does not close the connection at this point because the request has not been filled yet. The server still needs to send the response to the client, so the connection remains open for that purpose. See parts V and VII of this article (coming soon to the Monitor.Us blog near you) to find out how the connection gets closed.
Performance Consideration: SSL encryption should be avoided whenever possible because it requires extra back-and-forth handshaking. Each handshake message means another trip down a protocol stack, through the Internet, and back up a protocol stack. Use SSL only if posting confidential form data. [Remember, we’re talking about the request here, not the response.]
Performance Consideration: There is one glaring performance problem right here in the session layer:
  • Pipelining allows requests to be submitted to the server before the server has finished serving prior requests, but it is disabled by default in most browsers. Even if pipelining is turned on, HTTP forces strict first-in, first-out processing. Connection sharing is good, but serving requests in parallel with other requests would be so-o-o much better. (A bare-bones sketch of pipelining follows this list.)
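
Here is a bare-bones sketch of what pipelining looks like on the wire: both requests are sent before the first response arrives, yet the responses must still come back strictly in request order. The host is a placeholder, and since many servers and proxies handle pipelined requests poorly, treat this as an illustration rather than a recommendation.

```python
# Send two HTTP/1.1 requests back to back on one TCP connection.
import socket

request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"
).encode("ascii")

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request + request)        # both requests go out before any response arrives
    response = sock.recv(65536)            # responses come back first-in, first-out
    print(response.decode("latin-1", "replace")[:200])
```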

SPDY is a new application layer protocol that offers true pipelining. If you’re using Google Chrome, you may already be using SPDY. You’ll be hearing more about this in the not-too-distant future.

The session layer passes the HTTP request up to the presentation layer.

The Presentation Layer

The presentation layer is the translation layer. It converts between text formats and image formats so that unlike machines can work with each other. It compresses and decompresses. It encrypts and decrypts. In this case, the presentation layer doesn’t have a lot of work to do. HTTP requests are not often compressed or encrypted, so there is seldom a need to decompress or decrypt, and they aren’t images, so image translation won’t be necessary either. The exception is an HTTP POST request, which contains a body that the client’s presentation layer may have encrypted or compressed. In those cases, the server’s presentation layer decrypts and/or decompresses the body.
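
For the POST case, the decompression step looks roughly like the sketch below. The headers and body are made-up examples; a real server would also honour other encodings and handle malformed input.

```python
# Inflate a gzip-compressed POST body before the request is processed.
import gzip

headers = {"Content-Encoding": "gzip", "Content-Type": "application/json"}
body = gzip.compress(b'{"name": "example", "quantity": 3}')   # as sent by the client

if headers.get("Content-Encoding") == "gzip":
    body = gzip.decompress(body)

print(body.decode("utf-8"))    # the original JSON payload
```
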
Performance Consideration: SSL creates more work for the presentation layer, too. Use SSL only if posting confidential form data. [Remember, we’re talking about the request here, not the response.]
Peeking Into the Future: The ultimate privacy is to encrypt everything that travels through the Internet. As demand for ultimate privacy increases, the inability to encrypt HTTP headers becomes more of an issue. SPDY (mentioned above) encrypts everything, including headers. Keep your eyes open – it’s coming.
When finished, the presentation layer sends the HTTP request up the protocol stack to the application layer.

The Application Layer

The application layer receives the HTTP request and passes it to the web server software (e.g., Apache) in a format the web server understands.

The Web Server

The web server now knows which resource is being requested, so it … (continued in part V; keep your eyes on the Monitor.Us blog).

Conclusion

Just as the HTTP request had to go down the OSI protocol stack on the client’s machine, it has to come up the protocol stack on the server’s machine. Each layer has its own job to do, and each layer can be configured well or poorly for its task. The machine’s resources and its overall configuration will also impact performance.


About Warren Gaebel

Warren wrote his first computer program in 1970 (yes, it was Fortran).  He earned his Bachelor of Arts degree from the University of Waterloo and his Bachelor of Computer Science degree at the University of Windsor.  After a few years at IBM, he worked on a Master of Mathematics (Computer Science) degree at the University of Waterloo.  He decided to stay home to take care of his newborn son rather than complete that degree.  That decision cost him his career, but he would gladly make the same decision again. Warren is now retired, but he finds it hard to do nothing, so he writes web performance articles for the Monitor.Us blog.  Life is good!