
This Week in Website Performance

This Week in Website Performance is a weekly feature of the Monitor.Us blog. It summarizes recent articles about website performance. Why? Because your friends at Monitor.Us care.

Essential Server Performance Metrics you should know, but were reluctant to ask

Author: zhirayr.

There are a great many metrics, of varying usefulness, for evaluating the health of your server. This article presents a minimal set to begin monitoring so you can understand the state of your web application. Read more…

Category: Website Performance

Key Linux Performance Metrics

Much has been written about how to set up different monitoring tools to look after the health of your Linux servers. This article attempts to present a concise overview of the most important metrics available on Linux and the associated tools.

CPU utilization

CPU usage is usually the first place we look when a server shows signs of slowing down (although more often than not, the problem is elsewhere). The top command is arguably the most common performance-related utility in Linux when it comes to processes and CPU. By default, top displays summary percentages across all CPUs on the system. These days, most CPUs are dual-core or even quad-core – essentially two or four CPUs in one chip – so to view the statistics broken down by CPU (or core), press "1" in top. To sort processes by CPU usage, type "O" followed by "k". Read more…
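The per-CPU figures top shows come from the kernel's /proc/stat counters. As a minimal sketch (assuming a Linux system; field layout per the proc man page), the aggregate utilization can be computed by sampling those counters twice:

```python
# Sketch: compute aggregate CPU utilization from two samples of /proc/stat.
# Assumes Linux; the first line holds cumulative jiffies for all CPUs:
# user nice system idle iowait irq softirq ...
import time

def cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]  # skip the leading "cpu" label
    return [int(x) for x in fields]

def cpu_utilization(interval=0.5):
    a = cpu_times()
    time.sleep(interval)
    b = cpu_times()
    deltas = [y - x for x, y in zip(a, b)]
    total = sum(deltas)
    # idle (field 4) plus iowait (field 5) count as non-busy time
    idle = deltas[3] + (deltas[4] if len(deltas) > 4 else 0)
    return 100.0 * (total - idle) / total if total else 0.0

print(f"CPU utilization: {cpu_utilization():.1f}%")
```

Reading the lines that start with "cpu0", "cpu1", and so on instead gives the same per-core breakdown that pressing "1" in top displays.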

Category: Website Performance

ASP.NET Performance Tips

In this article you can find some easy steps you can follow to maximize the performance of your ASP.NET applications.

1. Use caching.

When multiple users request the same content from your page, your server can respond with a cached copy of that content. This way, only the first user has to wait for the actual processing; everyone after gets the content much faster. In this scenario it's possible for a user to receive stale cached content instead of the updated version. To avoid this, use the SQL Cache Dependency. Here's some information from Microsoft about it:
You have two options with caching. You can use output caching or partial page fragment caching. The key here is to make a clear separation between the static and dynamic content of your pages.
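The idea behind output caching with a data dependency can be sketched outside of ASP.NET. The following is an illustrative Python sketch (the function and key names are invented for the example): serve a cached copy until the underlying data's version changes, which is the essence of what SqlCacheDependency automates.

```python
# Sketch (not ASP.NET-specific): output caching with explicit invalidation.
# A cached page is reused until the data "version" it was built from changes.
_cache = {}

def render_page(key, build, version):
    """Return cached output for `key` unless `version` changed since caching."""
    entry = _cache.get(key)
    if entry and entry[0] == version:
        return entry[1]                 # cache hit: skip the expensive build
    output = build()                    # cache miss or stale data: rebuild
    _cache[key] = (version, output)
    return output

calls = []
page = lambda: calls.append(1) or "<html>products</html>"
render_page("/products", page, version=1)
render_page("/products", page, version=1)   # served from cache
render_page("/products", page, version=2)   # data changed: rebuilt
print(len(calls))  # the page body was built only twice
```

Fragment caching follows the same pattern, just applied per user-control rather than per page, which is why separating static from dynamic content matters.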

2. Use compression.

ASP.NET allows you to compress your pages before your server sends them to the client. It uses GZIP compression, which is not very CPU intensive. An easy way to improve the performance of your pages is to enable this feature. Read more…
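To get a feel for what GZIP saves on markup-heavy responses, here is a small stdlib-only Python sketch (the payload is an invented example):

```python
# Sketch: GZIP-compress a repetitive HTML payload and compare sizes.
import gzip

html = (b"<html><body>" + b"<p>Repetitive markup compresses well.</p>" * 200
        + b"</body></html>")
compressed = gzip.compress(html)
print(f"{len(html)} bytes -> {len(compressed)} bytes "
      f"({100 * len(compressed) // len(html)}% of original)")
```

HTML, CSS, and JavaScript are highly repetitive, which is why compression ratios on real pages are often dramatic while the CPU cost stays modest.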

Category: Website Performance

Website Performance: Building Tables and Indexes

This is the second of four articles about Database Management Systems’ (DBMS) performance. The first of these four articles presented an overview and some installation tips. The article you are now reading talks about building the database’s tables and indexes. Parts three and four get into the meat of accessing the database, with non-SQL tips in the third article and SQL tips in the fourth.

Website Performance: Taxonomy of Tips introduced a classification scheme to help us organize the many performance tips found on the Internet.  Database Management Systems fall into category #3.2 (the-server-side-script-accesses-a-service).

Although this article is based on MySQL experience, the concepts apply to other DBMS's as well. Read more…
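As a quick illustration of why index-building matters, the sketch below uses SQLite (standing in for MySQL; as the article notes, the concepts carry over) to show how an index changes the query planner's access path. The table and index names are invented for the example.

```python
# Sketch: the same query before and after adding an index, via SQLite's
# EXPLAIN QUERY PLAN. Without the index the planner scans the whole table;
# with it, the planner searches the index instead.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
con.executemany("INSERT INTO users (email) VALUES (?)",
                [(f"u{i}@example.com",) for i in range(1000)])

query = "SELECT * FROM users WHERE email = ?"
plan = con.execute("EXPLAIN QUERY PLAN " + query,
                   ("u500@example.com",)).fetchone()
print(plan[-1])  # typically a full table scan

con.execute("CREATE INDEX idx_users_email ON users(email)")
plan = con.execute("EXPLAIN QUERY PLAN " + query,
                   ("u500@example.com",)).fetchone()
print(plan[-1])  # now a search using idx_users_email
```

On a thousand rows the difference is invisible; on millions, it is the difference between milliseconds and seconds.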

Category: Articles

Berkeley DB Performance Tuning

When it comes to performance, BerkeleyDB's cache size is the single most important configuration parameter. In order to achieve maximum performance, the data most frequently accessed by your application should be cached so that read and write requests do not trigger too much I/O activity.

BerkeleyDB databases are grouped together in application environments – directories that group data and log files along with common settings used by a particular application. The database cache is global to all databases in a particular environment and needs to be allocated at the time the environment is created. Most of the time this is done programmatically. The issue, of course, is that the optimal cache size is directly related to the size of the data set, which is not always known at application design time. Many BerkeleyDB applications end up storing considerably more data than originally envisioned. Even worse, some applications do not explicitly specify a cache size at all, so the default cache size of 256KB is allocated – which is far too low for many applications. Such applications suffer from degraded performance as they accumulate more data; their performance can be significantly improved by increasing the cache size. Luckily, most of the time this can be achieved without any code changes by creating a configuration file in your BerkeleyDB application environment. Read more…
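As a sketch, a DB_CONFIG file placed in the environment's home directory can override the cache size without touching code. The arguments follow Berkeley DB's set_cachesize convention (gigabytes, bytes, number of cache regions); the 512MB figure here is purely illustrative:

```
# DB_CONFIG -- read when the environment is opened; no code changes needed.
# set_cachesize <gbytes> <bytes> <ncache>: 0 GB + 512 MB, in one region.
set_cachesize 0 536870912 1
```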

Category: Move to Monitis
About Monitor.Us

Monitor.Us is the free version of Monitis, the specialist web and cloud systems monitoring service. It provides website monitoring, web page load testing, transaction monitoring, application and database monitoring, cloud resource monitoring, and server and internal network monitoring within one easy-to-use dashboard, giving home and small business users access to leading-edge availability and performance monitoring. For more information please visit Monitor.Us.

Follow Monitor.Us on Facebook
Follow Monitor.Us on Twitter