The most noticeable difference between this trip up the protocol stack and Part IV’s is that we are typically dealing with many more frames, datagrams, and segments. Part IV delivered the request (small); this part delivers the resource (big). That means extra work at every layer of the protocol stack – not necessarily different work, just more of it.
SIZE MATTERS! Bigger payloads mean more packets and therefore more latency; smaller ones arrive faster. Splitting a large component into several small components can improve performance, but because this increases the number of components – and therefore the number of requests and connections – it can just as easily make performance worse. The only way to know for sure is to measure.
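One way to measure is sketched below: time one large download against several small ones fetched in parallel, as a browser would fetch separate components. This is a minimal sketch assuming a browser environment with `fetch` and `performance.now` available; the URLs (`/bundle.js`, `/part1.js`, and so on) are hypothetical placeholders, not real resources.

```typescript
// Time how long it takes to fully download a set of URLs in parallel.
async function timeFetch(urls: string[]): Promise<number> {
  const start = performance.now();
  await Promise.all(
    urls.map(async (url) => {
      // cache: "no-store" keeps the browser cache from skewing the result.
      const response = await fetch(url, { cache: "no-store" });
      // Drain the body so the full transfer is included in the timing.
      await response.arrayBuffer();
    })
  );
  return performance.now() - start;
}

async function compare(): Promise<void> {
  // Hypothetical resources: one big bundle vs the same content in four parts.
  const oneBig = await timeFetch(["/bundle.js"]);
  const manySmall = await timeFetch([
    "/part1.js",
    "/part2.js",
    "/part3.js",
    "/part4.js",
  ]);
  console.log(
    `one big: ${oneBig.toFixed(0)} ms, many small: ${manySmall.toFixed(0)} ms`
  );
}

compare();
```

Results will vary with network latency, server behavior, and how many connections the browser will open in parallel, which is exactly why trying it on your own setup beats reasoning about it in the abstract.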
The physical, data link, network, transport, and session layers behave just as they did in Part IV, so let’s not go there again. We’ll start at the second-to-last layer – the presentation layer – and end at the last one – the application layer.
The Presentation Layer
The Application Layer
For quick reference, here is the series’ table of contents:
- Part I – an overview of the entire process from beginning to end
- Part II – down the protocol stack (client side)
- Part III – the journey from client to server
- Part IV – up the protocol stack (server side)
- Part V – the web server (software)
- Part VI – the server side script
- Part VII – the database management system
- Part VIII – down the protocol stack (server side)
- Part IX – the journey from server to client
- Part X (this one) – up the protocol stack (client side)
- Part XI – the client-side script
- Part XII – the Document Object Model
- Part XIII – after the document is complete
- Part XIV – concurrency
- Part XV – wrap-up; best practices