- Part I – an overview of the entire process from beginning to end
- Part II – down the protocol stack (client side)
- Part III – the journey from client to server
- Part IV (this one) – up the protocol stack (server side)
- Part V – the web server (software)
- Part VI – the server side script
- Part VII – the database management system
- Part VIII – down the protocol stack (server side)
- Part IX – the journey from server to client
- Part X – up the protocol stack (client side)
- Part XI – the client-side script
- Part XII – the Document Object Model
- Part XIII – after the document is complete
- Part XIV – parallelism
- Part XV – wrap-up; best practices
The following looks at this activity from the OSI model’s viewpoint because that model best illustrates the chronological sequence this article describes. Note that your operating system probably does not follow the OSI model exactly. This may result in minor reordering, combining, or splitting of activities, but not enough to make a meaningful difference to the discussion.
Before we get into the details, though, here are a few performance considerations that apply to the entire process.
- Keep domain names, paths, and file names short.
- Minimize the amount of GET and POST data.
- If possible, use server-side session data instead of GET or POST data.
- Use server-side session data for server-side processing and client-side session data for client-side processing.
- Avoid cookies. Use client-side local storage instead.
- Include only those header lines that really matter.
As you would expect, these tips will help in other areas, too, not just in the time spent negotiating the server’s protocol stack.
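To see why these tips matter at the byte level, here is a rough sketch (all header values and names are hypothetical) comparing the on-the-wire size of a lean request against one padded with a long path, query data, and cookies:

```python
# Hypothetical requests; the exact byte counts are illustrative only.
lean = (
    "GET /p HTTP/1.1\r\n"
    "Host: ex.io\r\n"
    "\r\n"
)
bloated = (
    "GET /products/catalog/summer/listing.php?session=abc123def456 HTTP/1.1\r\n"
    "Host: www.example-online-store.com\r\n"
    "Cookie: tracking=xyz; prefs=dark; analytics=42\r\n"
    "User-Agent: SomeVeryLongBrowserIdentificationString/1.0\r\n"
    "\r\n"
)
print(len(lean.encode()), "bytes vs", len(bloated.encode()), "bytes")
```

Every one of those extra bytes must climb the server’s protocol stack, so trimming them pays off on each and every request.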
The Physical Layer
The Data Link Layer
The Network Layer
The Transport Layer
If the TCP segment was not corrupted, the transport layer checks whether it was previously received. If it was, the duplicate is discarded and further processing stops.
Since the transport layer is responsible for in-order delivery of TCP segments, it now looks at the sequence number. (Part II described how the client machine's transport layer put the sequence number into the TCP segment.) If this is the next sequence number in order, the transport layer unpackages the HTTP data and passes it up to the session layer. However, if this segment has arrived ahead of some of its preceding segments, the transport layer holds onto it while waiting for the missing segments.
If one or more buffered segments were waiting for this one, the transport layer now unpackages them all (in the right order, of course) and sends them up to the session layer.
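The duplicate check, the buffering, and the in-order delivery described above can be sketched as follows. This is a simplification, not real TCP: here sequence numbers count whole segments, whereas real TCP sequence numbers count bytes.

```python
# Simplified sketch of transport-layer reassembly: discard duplicates,
# buffer early arrivals, deliver everything in sequence order.
class Reassembler:
    def __init__(self):
        self.expected = 0   # next sequence number we can deliver
        self.buffer = {}    # early (out-of-order) segments, keyed by seq
        self.seen = set()   # for duplicate detection

    def receive(self, seq, data):
        delivered = []
        if seq in self.seen:        # duplicate: discard, stop processing
            return delivered
        self.seen.add(seq)
        self.buffer[seq] = data
        # Deliver every segment that is now in order.
        while self.expected in self.buffer:
            delivered.append(self.buffer.pop(self.expected))
            self.expected += 1
        return delivered

r = Reassembler()
print(r.receive(1, "world"))  # arrived early: buffered, nothing delivered -> []
print(r.receive(0, "hello"))  # fills the gap -> ['hello', 'world']
```

Note how segment 1 sits in the buffer until segment 0 arrives, at which point both go up to the session layer together, in order.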
The Session Layer
- Pipelining allows requests to be submitted to the server before the server has finished serving prior requests, but it is disabled by default in most browsers. Even with pipelining turned on, HTTP forces strict first-in, first-out processing: responses must come back in the same order the requests went out. Connection sharing is good, but serving requests in parallel with other requests would be so-o-o much better.
SPDY is a new application layer protocol that offers true pipelining. If you’re using Google Chrome, you may already be using SPDY. You’ll be hearing more about this in the not-too-distant future.
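A toy calculation (with made-up service times) shows why strict first-in, first-out processing hurts. Under FIFO, one slow request at the head of the line delays everything behind it; with parallel serving of the kind SPDY enables, each response finishes on its own schedule:

```python
# Hypothetical service times in seconds; purely illustrative.
def fifo_completion(service_times):
    """With strict FIFO, each request waits for all prior requests."""
    done, total = [], 0.0
    for t in service_times:
        total += t
        done.append(total)
    return done

times = [5.0, 1.0, 1.0]        # one slow request at the head of the line
print(fifo_completion(times))  # FIFO: [5.0, 6.0, 7.0]
print(times)                   # parallel: each finishes at its own time
```

The two quick requests take 6 and 7 seconds under FIFO but only 1 second each when served in parallel; that is the head-of-line blocking problem in a nutshell.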
The session layer passes the HTTP request up to the presentation layer.
The Presentation Layer
An HTTP POST request contains a body, which the client's presentation layer may have encrypted or compressed. In those cases, the server's presentation layer decrypts and/or decompresses it before passing the request along.
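Conceptually, the decompression step looks like this sketch, which uses Python's standard `gzip` module to stand in for what happens when a POST body arrives with `Content-Encoding: gzip` (the form data shown is made up):

```python
import gzip

# What the client's presentation layer produced before sending:
body = gzip.compress(b"name=Alice&city=Paris")

# What the server's presentation layer does on receipt:
print(gzip.decompress(body).decode())  # name=Alice&city=Paris
```

The compressed body costs fewer bytes on the wire, at the price of a little CPU time at each end.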
The Application Layer
The Web Server
The web server now knows which resource is being requested, so it … (continued in part V; keep your eyes on the Monitor.Us blog).
Just as the HTTP request had to go down the OSI protocol stack on the client’s machine, it has to come up the protocol stack on the server’s machine. Each layer has its own job to do, and each layer can be configured well or poorly for its task. The machine’s resources and its overall configuration will also impact performance.