
The Chronology of a Click, Part VII

The Service Provider

The server-side script may request a service from some service provider, and often does so, sometimes more than once in a single script. The service can be anything, limited only by our imaginations. Sending the request to the service provider is the first step.
Performance Consideration: 
This need not be the first step. In fact, the entire process can be eliminated (if our application's design permits it) by caching or memoization within the script. Why even send a request for information if the same request was sent a while ago? Why not just use those results? The only interesting point of discussion is how long "a while" can be.
Performance Consideration: 
Installing the service provider on the same machine as the script processor may yield the best performance because we avoid all networking. However, this approach can also interfere with the operation of the web server software, so it may not be the best choice. Only after-the-fact measurements can confirm which approach is better, but monitoring the load on the web server can give us a clue. If the web server is already near its limit, it may be best to install the service provider elsewhere.
Where should we install it? The choices, in order from best average performance to worst, are:

  1. on the same machine as the script processor
  2. on a machine on the same local loop as the script processor
  3. on a machine on the same network as the script processor
  4. on a machine within our intranet
  5. on a machine somewhere out there on the Internet

The service provider then starts up and configures itself (if it hasn’t already).

Performance Consideration: 
The service provider shouldn’t be starting up and configuring itself at this point. Why? Because the user is still drumming his fingers on the keyboard, waiting for a response. Startup and configuration should have happened before the request arrived. If the service provider doesn’t do this, consider other alternatives. If the service provider does have this capability, make sure it’s set up to configure itself before requests start arriving.
The service provider does its business. What it does is far too general to cover here, so we'll discuss an example (a database management system) below. The service provider then sends its reply back to the script.
Performance Consideration: 
The proximity of the service provider to the machine running the script is once again important.
Performance Consideration: 
Compressing the data as it travels back to the script may or may not yield better performance. Measure, measure, measure. Compression is more for data travelling over a network than for data that is passed from process to process on a local machine. We also note that some data is more compressible than other data.

A Service Provider Example – a DBMS

There is one service that is used more often than any other – a database management system (DBMS). Let's look a little more closely at what it is and how it operates. More importantly, let's look at what we can do to make it faster. A DBMS's raison d'être is twofold: to store data and to retrieve data. It usually deals with massive amounts of data because simpler options are available for small quantities of data.
Performance Consideration: 
DBMS’s may not offer the best performance for small, unrelated data sets. Flat files, CSV’s, .ini files, and other formats may be better choices.
Where and how data is stored can affect performance dramatically.
Performance Consideration: 
Locally-stored data can be accessed faster than data stored on another machine, but this approach is not scalable. As data grows larger and larger, distributing it across multiple machines will allow queries to execute faster.
A well-resourced machine, no matter how massive and fast it may be, still has its limits. Multiple machines, each holding the portion of the database that it can handle, have no practical limit. Just keep monitoring and add machines as necessary. This is one of the keys to scalability. [Notice how scalability is different from performance?]
Performance Consideration: 
Data stored in memory can be accessed faster than data stored on a disk, but it vanishes into the ether when the machine crashes or is turned off.
Performance Consideration: 
Solid-state drives are faster than hard-disk drives.
Performance Consideration: 
Partitioning and clustering may help. The only way to find out for sure is (you guessed it) to try it out.
DBMS’s typically employ several techniques to make things faster or easier to tune. Examples: caching, lookaheads, indexing, data distribution, load balancing, partitioning, clustering, offline/background processing, multiple storage engines with various characteristics, built-in instrumentation, metadata querying, various data manipulation techniques, and concurrent processing.
Performance Consideration: 
Know your DBMS’ techniques. Employ the ones that make a difference. See the MySql section below for more performance considerations along this vein.
Relational databases (RDBMSs) are more common than any other. Data is stored in tables (rows and columns) and the relationships between rows are represented by foreign keys. Data is usually manipulated through SQL, but other languages can also be used.
Performance Consideration: 
RDBMSs almost always employ indexing. Indexing is one of the most impactful performance tools out there, but developers often use it poorly. Too much can be just as bad as not enough. Column choices may be inappropriate. Existing indexes may not be used. Tables may be scanned from beginning to end because an index wasn’t created. Etc.
Remember: Creating an index may speed up reads, but will slow down writes. The table's read/write ratio can provide a hint for indexing.

However, NoSQL is making its presence known. It stores data as key/value pairs. It is widely believed that NoSQL can offer better performance than relational databases, at least for many workloads.
Performance Consideration: 
Keep your ear to the ground. This may be a future direction.

A DBMS Example – MySql

One DBMS, MySql, is used more often than any other, so let’s look a little more closely at it and its performance tips.

Server Installation & Configuration

Server installation and configuration happen long before the request arrives, so in a sense, this section doesn't belong here in The Chronology of a Click. However, since installation and configuration can dramatically affect the DBMS's performance at execution time, it is well worth mentioning. Configuration at installation time affects the entire server and every database that will eventually be managed by it. Some settings can be changed dynamically when the query is processed.
Performance Consideration: 
Provide a generous supply of resources, especially RAM, hard disk space, CPUs (both speed and number), and fast network connections. Configure the operating system and MySql to fully use those resources, but without overusing them. Multiple machines may be a wiser investment than one powerhouse machine, and they improve scalability options. Use a 64-bit architecture instead of a 32-bit one.
Performance Consideration: 
Anything else installed on this machine will compete for available resources, even if it’s not running. Get rid of it. The windowing system, for example, is not necessary and it competes heavily for CPU cycles and disk space.
Performance Consideration: 
Swapping should never be allowed on a database server. Reconfigure, or distribute the database across multiple machines, to eliminate swapping.
Performance Consideration: 
Revisit this step during and after database installation. Until a database is installed and operational, all settings are merely educated guesses.

Database Design, Installation & Configuration

Once the MySql server is installed, we can go ahead and create our databases. The first step is the design.
Performance Consideration: 
Normalize first. Denormalize only when absolutely necessary. Redundancy cannot be fully eliminated, but it should be at its minimum. Note that table joins can be faster than using a denormalized design.
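For illustration, here's a minimal sketch of the difference (table and column names are invented for this example): the normalized design stores each customer's city once and joins to it, instead of repeating it in every order row.

  -- Normalized: customer details live in one place
  CREATE TABLE customers (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    city VARCHAR(100)
  );
  CREATE TABLE orders (
    id          INT AUTO_INCREMENT PRIMARY KEY,
    customer_id INT,                -- foreign key to customers.id
    amount      DECIMAL(10,2)
  );

  -- The join is often cheap enough that the redundant copy isn't worth it
  SELECT o.id, o.amount, c.city
  FROM   orders o
  JOIN   customers c ON c.id = o.customer_id;
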
Performance Consideration: 
Avoid triggers. If you can’t, make sure they are short and fast.
Then comes the implementation.
Performance Consideration: 
The database may be small now, but it’s hard to predict how big it may eventually become. Plan for partitioning and distributing, even if it’s not needed yet.
Performance Consideration: 
For larger databases, use partitions, clustering, multiple disks, RAID, distributed databases, load balancing, or a combination thereof.
Performance Consideration: 
Use autoincremented INTs for the primary keys whenever possible.
Performance Consideration: 
Compress that which can be compressed. TEXTs and BLOBs are especially good candidates.
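One way to sketch this (table name invented; InnoDB's compressed row format is another option worth benchmarking) is to let MySql's COMPRESS() and UNCOMPRESS() functions do the work and store the result in a BLOB:

  CREATE TABLE documents (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    body BLOB                        -- compressed bytes, not raw text
  );
  INSERT INTO documents (body) VALUES (COMPRESS('...a very large block of text...'));
  SELECT UNCOMPRESS(body) FROM documents WHERE id = 1;
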
Performance Consideration: 
Measure; never assume.
Performance Consideration: 
Execute ongoing maintenance and admin tasks in background processes that run at a low priority when the database is at its quietest.
Performance Consideration: 
Use PROCEDURE ANALYSE to get data type recommendations. It’s not useful when you’re first creating the database, but it’s quite handy when the database is fully populated.
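For example (table name invented), this asks MySql to examine the data that's actually in the table and suggest a tighter type for each column:

  -- Optional arguments to ANALYSE() limit how many distinct values it will track
  SELECT * FROM orders PROCEDURE ANALYSE();

Treat the suggestions as hints, not commandments, and re-run it as the data grows.
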
Performance Consideration: 
Use consistent data types to avoid type conversions when comparing or joining.

Storage Engines

MySql provides several storage engines, each with its own performance characteristics. All but one store their data on disk. The exception, the MEMORY engine, keeps its data in RAM, so it's faster, but all the data disappears when the machine crashes or is turned off.

The ARCHIVE storage engine is great for logs. The data is compressed, and it can’t be changed after it’s INSERTed. Note that indexes aren’t allowed, which makes the writes faster, but SELECTs can be very slow if the table becomes too large. [Solution: Rotate the tables frequently or when they reach a certain size.]

Performance Consideration: 
Use InnoDB rather than MyIsam. It’s faster, especially for large databases with lots of foreign keys. Also note that MyIsam cannot handle transactions.
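Converting an existing table is a one-liner, though it rewrites the whole table, so test it on a copy and run it during a quiet period (table name invented):

  ALTER TABLE orders ENGINE=InnoDB;
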
Performance Consideration: 
Masters and slaves don’t have to use the same storage engine. Use whatever performs best.

Configuration Checklist

Performance Consideration: 
Whether configuring the MySql server or a database within it, note that the default values for the following may not be appropriate for your system. I could write an article on each one, so for now let's just say that we should not blindly accept these defaults, but should give each one some consideration. Look them up at https://dev.mysql.com/doc/refman/5.6/en/dynindex-sysvar.html. A quick way to inspect their current values is sketched after the list.
  • innodb_additional_mem_pool_size
  • innodb_buffer_pool_size
  • innodb_file_io_threads
  • innodb_flush_commit
  • innodb_flush_log_at_trx_commit
  • innodb_lock_wait_timeout
  • innodb_log_buffer_size
  • innodb_log_file_size
  • join_buffer_size
  • key_buffer
  • key_buffer_size
  • max_allowed_packet
  • max_connections
  • max_heap_table_size
  • myisam_sort_buffer_size
  • net_buffer_length
  • open_files_limit
  • performance_schema
  • performance_schema_events_waits_history_long_size
  • performance_schema_events_waits_history_size
  • performance_schema_instrument
  • query_cache_size
  • safe-show-database
  • sort_buffer_size
  • table_cache
  • table_definition_cache
  • table_open_cache
  • thread_cache
  • thread_cache_size
  • tmp_table_size
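As promised, here's a quick way to inspect what the server is currently using and, for the dynamic variables, to change a value on the fly (the 64MB figure is purely illustrative, not a recommendation):

  -- Inspect current settings without restarting the server
  SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
  SHOW GLOBAL VARIABLES LIKE 'query_cache%';

  -- Dynamic variables can be changed at runtime; the rest require my.cnf and a restart
  SET GLOBAL query_cache_size = 67108864;   -- 64MB, for illustration only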

Writing & Testing Queries

Now that the database is installed and configured, we can start using it. We write queries, test them, and embed them in our server-side scripts.

Performance Consideration: 
Note the part about testing the queries. Start by doubling your testing budget, then double it again. Now you’re getting close.
Performance Consideration: 
Become intimate with EXPLAIN. Know what it does and how it does it. Most importantly, understand its output. This information will identify problems before you run the query the first time. Use it every time you modify a query.
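A minimal example (table and column names invented): prefix the query with EXPLAIN and study the output before the query ever touches production.

  EXPLAIN
  SELECT o.id, o.amount
  FROM   orders o
  WHERE  o.customer_id = 42
  ORDER BY o.created_at;

  -- Things to look for in the output:
  --   type = ALL   -> full table scan; probably a missing index
  --   key  = NULL  -> no index was chosen
  --   rows         -> rough count of rows MySql expects to examine
  --   Extra        -> "Using filesort" or "Using temporary" usually deserve a closer look
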
Performance Consideration: 
Some people say to wait until later on in the process to optimize queries. I don’t think I can agree with that advice because “later on” never seems to arrive. Besides, continual nitpicking can develop habits that every developer should have. [Feel free to disagree with me on this point. It’s opinion, not fact.]

It has been said and oft repeated that performance can be more dramatically affected by query and index design than by anything else. I don’t know that that is always true, but I’m sure it is more often true than false.

Performance Consideration: 
Transport as little data as possible between the database management system and the server-side script. In other words, fetch only what you need.

The biggest offender in this regard is SELECT *, which transports every column of the table. Even if the script truly requires every column, we don’t know what additional columns will be added in the future. When new columns are added, the script will become slower because it is now transporting more data. The asterisk is evil – always list columns by name.
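In other words (column names invented):

  -- Evil: every column travels over the wire, now and forever
  SELECT * FROM customers WHERE id = 42;

  -- Better: only what the script actually uses
  SELECT name, email FROM customers WHERE id = 42;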

Paging data is now common. Instead of transporting all the data, scripts commonly transport one page at a time. We hope the user will find what he needs with relatively few page accesses, which means less data is transported in total. Note that we’re discussing performance in this article, but usability and the user experience may also need to be assessed before using this technique.

As pointed out in tip #7 in Baron Schwartz’ May 2012 article, data paging can be implemented poorly. He suggests an alternate implementation in a mere three paragraphs, so please take a look at it. [As always, measure the alternatives rather than assuming one is better than the others.]
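For reference, the naive version and one commonly suggested alternative look something like this. This is a generic sketch of "keyset" (seek) paging, not necessarily the exact implementation from that article, and the names are invented:

  -- Naive paging: MySql still reads and discards the first 10,000 rows
  SELECT id, title FROM articles ORDER BY id LIMIT 10000, 20;

  -- Keyset paging: remember the last id the user saw and seek past it
  SELECT id, title FROM articles WHERE id > 10020 ORDER BY id LIMIT 20;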

Performance Consideration: 
Two performance tips are direct opposites, but both are valid:
  1. Replace multiple small queries with one large query.
  2. Split large queries into multiple small queries.

Both are valid, but not always. The trick is to find out which one, or neither, to use in a given situation. So we’re back to measuring the alternatives again.
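Two quick sketches (names invented). The first replaces a loop of tiny lookups with a single round trip; the second chops a monster DELETE into pieces so it doesn't hold locks for minutes at a stretch:

  -- (1) One query instead of many
  SELECT id, name FROM customers WHERE id IN (3, 17, 42, 99);

  -- (2) Many small queries instead of one huge one; repeat until 0 rows are affected
  DELETE FROM access_log WHERE created_at < '2012-01-01' LIMIT 10000;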

Performance Consideration: 
Batching INSERTs, REPLACEs, UPDATEs, and DELETEs for offline processing may make sense in some situations. However, batching them on disk may be slower and batching them in memory makes them vulnerable to a crash. To batch or not to batch, and where to batch, must be decided on a case-by-case basis.
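Where batching does make sense, multi-row statements cut down on round trips and parsing (values invented):

  INSERT INTO page_views (url, viewed_at) VALUES
    ('/home',    '2013-01-15 10:00:01'),
    ('/pricing', '2013-01-15 10:00:03'),
    ('/home',    '2013-01-15 10:00:07');
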
Performance Consideration: 
Correlated subqueries can be performance nightmares. A JOIN can do the same thing, perhaps faster.
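For example, these two queries return the same customers, but the second often gives the optimizer more to work with (names invented; measure both):

  -- Correlated subquery: conceptually re-run once per customer
  SELECT c.id, c.name
  FROM   customers c
  WHERE  EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

  -- Equivalent JOIN
  SELECT DISTINCT c.id, c.name
  FROM   customers c
  JOIN   orders o ON o.customer_id = c.id;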

Indexing

Indexes make data retrieval faster. Without indexes, the database management system would have to scan every table from beginning to end for every query, making performance unbearable for medium and large databases.

Building appropriate indexes is both an art and a science. An index can make data retrieval faster, but it also makes the updating process slower. Indexes need to be updated whenever the underlying data is updated. Building an index on every column is just as foolish as building no indexes.

Performance Consideration: 
Start by building indexes on primary keys, foreign keys, and columns used in conditions. Then analyze the results and tweak as needed. The slow query log and EXPLAIN will come in handy here.

After some time in production, remove indexes that are never used.
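A starting point might look like this (table and column names invented; the one-second threshold is an example, not a rule):

  -- Index the foreign key and the column used in WHERE clauses
  CREATE INDEX idx_orders_customer_id ON orders (customer_id);
  CREATE INDEX idx_orders_created_at  ON orders (created_at);

  -- Capture slow queries for later analysis
  SET GLOBAL slow_query_log  = 'ON';
  SET GLOBAL long_query_time = 1;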

Performance Consideration: 
Keep primary keys small, especially if InnoDB is being used. InnoDB includes the primary key in every index, like it or not.
Performance Consideration: 
Because data patterns may change over time, index performance should be continually monitored to see if a different set of indexes might now perform better.

Processing Requests

MySql can check for IP spoofing. It resolves the IP address to a host name, then resolves the host name back to an IP address. If this IP address is the same as what it started with, everything’s hunky-dory.

If parts of the website are secured by IP address, this check is important.

Performance Consideration: 
If it’s not needed, turn name resolution off with –skip-name-resolve. This will save CPU cycles and network time. If it is necessary, make sure it’s optimized.

Modern DBMS’s can hold much, much more data than most applications need.

Performance Consideration: 
However, carrying data that is rarely queried slows down every query that touches those tables. If old data is not likely to be needed, archive it and remove it from the active table.

The database management system goes beyond merely getting and updating data. It also protects the data from corruption that results from concurrent updates. Queries lock the data, do what they came to do, then unlock the data. While one query holds a lock, no other query can access the same data.

Locks can create deadlocks: two queries each hold a lock that the other needs, with both waiting forever for the other to release it. When this happens, MySql steps in and sends an error to one of the queries, so all queries must be ready to roll back a transaction at any point. [This is one reason why a DBMS needs to support transaction handling.]

Performance Consideration: 
The smaller the locked block of data, the fewer delays for other queries. Row-level locking provides better performance than table-level locking. Fine locking granularity is one reason InnoDB is faster than MyIsam.
Performance Consideration: 
Shared (read) locks interfere with performance less than exclusive (write) locks because a read lock allows other queries to read at the same time and a write lock blocks all other queries.

The Query Cache

Just as its name implies, MySql’s Query Cache caches the result set of MySql queries. It can be turned off for all queries, turned on for all queries, or used on demand.

Performance Consideration: 
Use query_cache_type = 2 for on-demand query caching. Turning it on for all queries (option 1) can result in poorer overall performance than turning it off for all queries (option 0). Use SQL_CACHE in the SELECT statements that should be cached.
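In practice, that looks something like this (names invented; note that on some versions the cache must already be enabled at server startup for the SET to take effect, so the setting usually belongs in the configuration file too):

  -- Cache only on demand (same as query_cache_type = 2)
  SET GLOBAL query_cache_type = DEMAND;

  -- Ask for caching explicitly in the statements that deserve it
  SELECT SQL_CACHE name, email FROM customers WHERE country = 'CA';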

The query cache can be configured:

  • query_cache_type – turns the query cache off, on for all except SQL_NO_CACHE, or on only if SQL_CACHE is specified
  • query_cache_size – amount of memory allocated for the query cache
  • --maximum-query_cache_size – stops sessions from increasing query_cache_size beyond this value
  • query_cache_limit – maximum allowable size of a cached result set
  • query_cache_min_res_unit – minimum block size for cache storage
  • query_cache_wlock_invalidate – invalidates cached queries immediately when a process requests a write lock
Performance Consideration: 
Turning caching off is a two-step process: set query_cache_type to 0 and set query_cache_size to 0. Setting one without the other doesn’t make any sense and it can reduce performance.
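That is (set the same two variables in the configuration file as well so the change survives a restart):

  SET GLOBAL query_cache_type = 0;   -- stop using the cache
  SET GLOBAL query_cache_size = 0;   -- release the memory it was holding
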
Performance Consideration: 
Setting query_cache_type to 1 (ON) may make performance worse than setting it to 0 (OFF). Recommended: 2 (DEMAND).
Setting it to 2 (DEMAND) allows the developer to decide which queries should be cached. So which ones should we cache? The ones that are executed most often are probably the best starting point, but there's room for some tweaking here.
Performance Consideration: 
These two conflicting tips:
  • Set the query_cache_limit low to keep the query cache from filling up quickly with large result sets.
  • Set the query_cache_limit high because the large result sets may be the ones that benefit most from caching.

are both valid in the general case, but we’re more interested in the specific case – our application. This points to the need for tuning this setting, so be ready to do some experimentation and measurements. The best number is the one that provides the fastest query execution times in production.

Performance Consideration: 
query_cache_min_res_unit's default is 4KB. If the cache contains mostly small result sets, reducing this value may increase performance. If the cache contains mostly large result sets, increasing this value may be the way to go. In either case, benchmark and compare the Qcache_free_blocks and Qcache_lowmem_prunes status variables.
Pruning vs. Invalidation: When a process secures a write lock on the underlying data, all cached queries that use that data are removed from the cache. This is invalidation. When the cache becomes so full that the current result set cannot be stored, MySql removes the least-recently-used result sets to make room for the new one. This is pruning. The number of prunes is shown in Qcache_lowmem_prunes.
Performance Consideration: 
Keep Qcache_lowmem_prunes as low as possible by increasing query_cache_size. However, larger caches may actually decrease performance, so measure, measure, measure.
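To keep an eye on it (the 128MB figure is purely illustrative):

  -- How often has MySql thrown out cached results for lack of room?
  SHOW GLOBAL STATUS LIKE 'Qcache_lowmem_prunes';

  -- If that number keeps climbing, try a larger cache and re-measure
  SET GLOBAL query_cache_size = 134217728;   -- 128MB, for illustration only
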
The storage/retrieval key is the byte-by-byte text of the query, so case matters and white space is not ignored. Implications: a query that differs only in capitalization is a different cache entry, and a multiline query is different from a single-line query, even if it is identical in all other respects.
Performance Consideration: 
If two queries are logically the same, make them byte-by-byte identical so they can be cached once instead of twice. The query cache will fill up less often.
Performance Consideration: 
SHOW VARIABLES LIKE '%query_cache%'; and SHOW STATUS LIKE '%qcache%'; provide information that is useful when tuning query caches.
Performance Consideration: 
query_cache_wlock_invalidate invalidates cached result sets immediately when a write lock is granted on the underlying data. If it is set to true, queries on MyISAM tables cannot be served from the query cache when there is a write lock on the table. Setting it to false allows queries to be served from the query cache while the write lock is active. Recommendation: Set query_cache_wlock_invalidate to false if the application permits.
Performance Consideration: 
Query caching within a DBMS is not as fast as server-side-script caching. Server-side-script caching is not as fast as web-server caching. Web-server caching is not as fast as HTTP caching. In general, the earlier the caching kicks in, the better.
Performance Consideration: 
Queries that create summary tables, pivot tables, or cross tabulations should not be executed live in response to a request. Create the tables with a background process that runs as frequently as necessary, then execute the live queries on the tables so created. The live query will now be simple and will perform much better.
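One way to sketch this inside MySql itself is with the event scheduler (all names invented, and event_scheduler must be ON); a cron job that runs the same REPLACE works just as well:

  -- Pre-built summary, rebuilt in the background
  CREATE TABLE daily_sales_summary (
    sale_date DATE PRIMARY KEY,
    total     DECIMAL(12,2)
  );

  CREATE EVENT refresh_daily_sales_summary
    ON SCHEDULE EVERY 1 HOUR
    DO
      REPLACE INTO daily_sales_summary (sale_date, total)
        SELECT DATE(created_at), SUM(amount)
        FROM   orders
        GROUP BY DATE(created_at);

  -- The live query is now trivial
  SELECT sale_date, total FROM daily_sales_summary ORDER BY sale_date DESC LIMIT 30;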

Query Caching With memcached

MySql announced a preview of memcached on the InnoDB and MySql Cluster storage engines in April 2011. Like query caching (above), memcached caches result sets so queries can be run once rather than a whole bunch of times. However, it goes way beyond that by providing a key/value form of caching, which can be used for NoSql storage and retrieval.
Performance Consideration: 
Should we use query caching or memcached? Today’s answer is easy: Use query caching in production because memcached is still in its infancy on MySql. However, as this technology matures, the answer may change. Keep your ear to the ground on this one.
The database administrator can also ask MySql to keep a list of queries that run slowly.
Performance Consideration: 
Logging slow queries in production shows us which queries take the longest to execute, but it can cause its own performance problem while doing so. It must be done with care. Although it may be helpful in the test environment, slow query logging should not be always-on in production.
As gtowey illustrates, the slow query log can also be used to show the overall impact of each query on the server’s total load. Relatively fast queries that are run frequently may contribute more to the server’s total load than slow queries that run infrequently. If you want a faster database, read the article.

Ongoing Maintenance

Modern database management systems take care of most maintenance tasks automatically, but there are still things that need to be done on an ongoing basis. If the system handles these tasks automatically, we need to configure it appropriately. If it doesn’t handle a task automatically, we need to build a cron job. If neither approach works, we may have to modify the database as problems develop (which is one reason why monitoring is so important).
Performance Consideration: 
Identify tasks that need to be repeated on a regular basis and build cron jobs for them. Run the cron jobs at low priority at times when the database management system is relatively quiet. Watch out for starvation. If the database is too busy, the cron job may not finish before the next one begins. Keep cron jobs small and fast.

Performance Consideration: 

With up-to-the-second live data, the advice to use static HTML instead of dynamic HTML is not appropriate. Live data must be recalculated and regenerated for every live query. There’s no way around that. The trouble with live queries is that the user is waiting while the query is executing.

However, much of the data we consider up-to-the-second need not be up-to-the-second. If the application permits, some data can be up-to-the-hour or even up-to-the-day. Even though it’s not static, it’s not quite as dynamic as it first appears.

Implement up-to-the-day data queries as static HTML. Build a background task that re-creates the static HTML page daily. Run the background task at a low priority level at the quietest time of the day. For live queries, access the static HTML page that was created by the background task. If up-to-the-day isn’t dynamic enough, run the background task more often.

One possible glitch: Use a monitor to make sure the background task fully executes at the proper intervals.

Much data we consider dynamic can actually be a day old, an hour old, etc. If so, it need not be recalculated and regenerated for every live query. It all depends on the application’s needs.

Performance Consideration: 
Here’s a checklist of some maintenance tasks your system may need to take care of on a regular basis. Some are executed less often than others. Again, the application’s needs dictate the frequency.
  • Look for Corrupted Tables, Indexes, and Views: mysqlcheck --check --databases dbname or CHECK TABLE or ALTER TABLE … CHECK PARTITION
  • Ensure Offline Maintenance Operations Don't Change the Data: CHECKSUM TABLE (before and after maintenance)
  • Rebuild Key Distributions: mysqlcheck --analyze --databases dbname or ANALYZE TABLE or ALTER TABLE … ANALYZE PARTITION
  • Repair Data and Index Corruption: mysqlcheck --repair --databases dbname or REPAIR TABLE or ALTER TABLE … REPAIR PARTITION
  • Defragment and Pack Tables: mysqlcheck --optimize --databases dbname or OPTIMIZE TABLE or ALTER TABLE … OPTIMIZE PARTITION
  • Backup Tables: mysqldump --create-options --routines --triggers dbname
  • Flush Data, Caches, Locks, Logs, Tables, the Query Cache, and More: FLUSH
  • Rotate Logs

Benchmarking & Monitoring

Like all of the subsystems that help process the user’s request, MySql’s performance should be monitored. Uptime is the first and most important thing to monitor, but keeping an eye on specific, oft-used transactions is also advisable.

MySql offers two tools to help identify troublemakers: the slow query log and the performance schema. EXPLAIN, SHOW STATUS, and SHOW VARIABLES also provide important information to help us analyze performance issues.

The MySql Performance Schema provides both coarsely and finely grained instrumentation at a very low level with minimal impact on performance. Because it is so new and has so many options, few people are using it effectively. That is expected to change because nothing else provides this much information.

Performance Consideration: 
Take the time to learn about the MySql Performance Schema. Try it out on one of your projects.

Coming Soon

The service provider (DBMS) is now finished serving the request. Now it’s time to send the result back to the client. That journey is described in Part VIII, coming soon to the Monitor.Us blog.



About Warren Gaebel

Warren wrote his first computer program in 1970 (yes, it was Fortran).  He earned his Bachelor of Arts degree from the University of Waterloo and his Bachelor of Computer Science degree at the University of Windsor.  After a few years at IBM, he worked on a Master of Mathematics (Computer Science) degree at the University of Waterloo.  He decided to stay home to take care of his newborn son rather than complete that degree.  That decision cost him his career, but he would gladly make the same decision again. Warren is now retired, but he finds it hard to do nothing, so he writes web performance articles for the Monitor.Us blog.  Life is good!