
The Chronology of a Click, Part IX

At the end of Part VIII the user’s request had been served, but the response had only made it to the bottom of the protocol stack in the server machine. This part describes the next step: the journey through the Internet from the server machine to the client machine. Part III described this journey in the opposite direction: there, the request was travelling from the client to the server; here, the response travels from the server back to the client.

Admittedly, this is not much different from what was described in Part III, so this part is much shorter than the others. The frames are passed from machine to machine through the server’s intranet, then the Internet, then the local loop until they reach the client machine. Routing works the same way in both directions.

The only real difference is what the caching servers do along the way. During the client-to-server journey, they looked in their caches to see whether they could serve the resource without bothering the web server. During the present server-to-client journey, they add the newly retrieved resource to the cache (if it’s cacheable, of course).

If the current resource is too big, the caching server simply passes it along to the next machine without adding it to the cache. The definition of “too big” is configurable. Note that it has nothing to do with the size of the cache or how full it is; it is simply the number of bytes in the current resource. If the current resource is bigger than the configured maximum, it will not be cached. [Cisco Example]
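To make that decision concrete, here is a minimal sketch in Python. The names (MAX_OBJECT_BYTES, should_cache) are hypothetical, not taken from any particular product; a real proxy such as Squid exposes this threshold as a configuration directive (maximum_object_size) rather than as code.

```python
# Hypothetical size check for a caching proxy. The threshold is a
# simple byte count on the resource itself; it ignores the cache's
# total size and how full the cache currently is.

MAX_OBJECT_BYTES = 4 * 1024 * 1024  # the configurable "too big" limit

def should_cache(body: bytes, cacheable: bool) -> bool:
    """Cache only responses that are cacheable and within the size cap."""
    return cacheable and len(body) <= MAX_OBJECT_BYTES
```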

Performance Consideration:
Neither maximize nor minimize the maximum object size; either extreme yields sub-optimal performance. This is one of those fiddle-with-it settings, so fiddle away until you get the best results. [The fiddling has to be done in production, not in the test environment, because the optimal value depends on real traffic.] Keep in mind that things change over time. Like most fiddle-with-it settings, this one needs to be re-evaluated from time to time and monitored on an ongoing basis.
If the caching server’s cache is already full, the least recently used entries will be deleted until there is enough room to store the new entry.
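A toy Python model of that eviction policy might look like the following. It is a sketch, not production code: the class name and the byte-budget bookkeeping are my own inventions, and real caches juggle disk and memory with far more sophistication.

```python
from collections import OrderedDict

class LRUByteCache:
    """Toy cache with a byte budget and least-recently-used eviction."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.entries = OrderedDict()  # url -> response body (bytes)

    def get(self, url: str):
        body = self.entries.get(url)
        if body is not None:
            self.entries.move_to_end(url)  # mark as most recently used
        return body

    def put(self, url: str, body: bytes) -> None:
        if len(body) > self.capacity:
            return  # "too big": pass it along without caching
        if url in self.entries:  # replacing an older copy
            self.used -= len(self.entries.pop(url))
        # Evict least recently used entries until the new one fits.
        while self.used + len(body) > self.capacity:
            _, evicted = self.entries.popitem(last=False)
            self.used -= len(evicted)
        self.entries[url] = body
        self.used += len(body)
```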

I would expect the caching server to retransmit the frames to the next machine before adding them to the cache; doing the opposite would be foolish from a performance standpoint. From a webapp developer’s viewpoint, though, this is almost irrelevant, because that in-between machine is probably not within our control. We control the server and maybe have some degree of control over the client, but all those in-between machines out there on the Internet are not ours.
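As a sketch of that ordering (forward first, cache second), reusing the hypothetical helpers above; the downstream and response objects here are likewise illustrative:

```python
def relay_response(response, downstream, cache: LRUByteCache) -> None:
    """Forward the response toward the client before caching it.
    Illustrative only: a real proxy streams frames onward as they
    arrive and writes to the cache concurrently, not in two steps."""
    downstream.send(response.body)  # the client's latency comes first
    if should_cache(response.body, response.cacheable):
        cache.put(response.url, response.body)
```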

SPDY may offer hope for reduced latency, but we should note that caching servers are excluded from the process. SPDY compresses and encrypts the headers, then multiplexes multiple requests over a single connection. The in-between caching servers cannot read the headers, so they are denied the opportunity to speed up the response. I expect SPDY will find a way around this soon. Perhaps they already have.
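SPDY did use zlib to compress its header blocks (with a protocol-specific dictionary), so a toy snippet can at least illustrate why a byte-level intermediary is locked out: once compressed (and, over TLS, encrypted), the headers are opaque. The snippet below omits SPDY’s shared dictionary and the TLS layer; it only demonstrates the opacity.

```python
import zlib

# Plain HTTP/1.x headers are readable by any intermediary...
headers = b"GET /logo.png HTTP/1.1\r\nHost: example.com\r\n\r\n"

# ...but a zlib-compressed header block is just opaque bytes.
compressed = zlib.compress(headers)
print(compressed[:16].hex())  # nothing here for a cache to inspect
```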

And now the resource arrives at the client machine… (to be continued)


About Warren Gaebel

Warren wrote his first computer program in 1970 (yes, it was Fortran).  He earned his Bachelor of Arts degree from the University of Waterloo and his Bachelor of Computer Science degree at the University of Windsor.  After a few years at IBM, he worked on a Master of Mathematics (Computer Science) degree at the University of Waterloo.  He decided to stay home to take care of his newborn son rather than complete that degree.  That decision cost him his career, but he would gladly make the same decision again. Warren is now retired, but he finds it hard to do nothing, so he writes web performance articles for the Monitor.Us blog.  Life is good!