Performance & Optimization
Optimization is the attempt to drive some measurable value to its maximum or minimum. Examples: we can optimize a website for conversion to sales, time spent on site, number of visitors, and so on.
We talk about improving performance as if it were a form of optimization. We consider it to be the attempt to decrease the time it takes for a page to load. However, as discussed below, there is much more to the story.
From the User’s Viewpoint
Ultimately, the users decide a website’s fate. If they condemn its performance en masse, that’s the end of the website. No explanations or pleas for understanding will change this. Welcome to life in the cyberworld!
Users tend to judge a website as acceptable, slow, or too-slow-to-bother-with. If they judge a website as acceptable, they may continue to use it now and at times in the future. If they judge it as too-slow-to-bother-with, they will go elsewhere and will not likely return.
If the users judge the website as slow, but not too-slow-to-bother-with, they may stick around, but they will not be thinking kind thoughts. In some ways, slow is worse than too-slow-to-bother-with because of the incessant repetition of those negative thoughts. Anything repeated often enough will be believed and will likely be shared with others.
A recent study showed that Brits spend an average of two days a year waiting for slow websites. Can you imagine what they’re thinking and saying about the companies that operate those sites?
What Exactly is the User’s Problem?
Since the users’ acceptance or rejection of the website determines its fate and since that acceptance or rejection is partly determined by performance, we are compelled to define performance in terms of the end-user experience. Let me repeat that: We are compelled to define performance in terms of the end-user experience.
So what is it about the user’s experience that leads him to an acceptable, slow, or too-slow-to-bother-with decision? It seems almost obvious, doesn’t it? It’s the time spent waiting.
Every time the user performs an action (e.g., clicks a button, moves the mouse, presses a key on the keyboard), he expects to be able to perform his next action immediately. Having to wait can annoy him, distract him, or merely make him less productive. Since companies value their employees according to their productivity, making these employees wait reduces their value to their employers, and that puts their jobs at risk. This point is not lost on our users.
The user does not always have to wait until the page is fully loaded to get on with the next thing he wants to do. The elements with which he wants to interact may become visible and functional earlier. If the web page is not jumping around because of reflows, the user can continue. He’s done waiting even though the web page is not fully loaded.
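To make that concrete, here is a minimal browser-side sketch in TypeScript of watching two of the signals just mentioned: layout shifts that make the page jump around, and the moment the main content becomes visible. It uses the standard PerformanceObserver API; the logging is illustrative, not a prescribed reporting scheme.

```typescript
// Minimal interface for layout-shift entries, which TypeScript's DOM
// typings do not declare by default.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;           // how far visible content moved (unitless score)
  hadRecentInput: boolean; // shifts right after input are usually expected
}

// Watch for reflows that shift content out from under the user.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    if (!entry.hadRecentInput) {
      console.log(`Layout shift of ${entry.value.toFixed(3)} at ${entry.startTime.toFixed(0)} ms`);
    }
  }
}).observe({ type: "layout-shift", buffered: true });

// Note when the largest content element paints -- often the point at which
// the user considers himself done waiting, well before the load event.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1];
  console.log(`Main content visible at ${lcp.startTime.toFixed(0)} ms`);
}).observe({ type: "largest-contentful-paint", buffered: true });
```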
We techies tend to focus on what happens from the time a user requests a web page to the time the page is fully available. We tend not to mention the wait times after the page is delivered. Example: If a user rolls the mouse over an element, he may have to wait for a popup to time out and disappear. We don’t talk about that in performance discussions, but we should.
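Those after-load waits can be measured the same way as load time: a timestamp at the user's action, another when the page lets him proceed. A hedged sketch of the popup example above, assuming a hypothetical tooltip that sets its hidden attribute when it finally disappears (the element ids are made up for illustration):

```typescript
// Time how long the user waits for a popup to clear after moving the
// mouse away. "help-icon" and "help-tooltip" are hypothetical ids.
const icon = document.getElementById("help-icon")!;
const tooltip = document.getElementById("help-tooltip")!;
let leftAt = 0;

icon.addEventListener("mouseleave", () => {
  leftAt = performance.now();
});

// Fires when the tooltip's hidden attribute changes, i.e., when it is
// finally dismissed and the user can get on with his next action.
new MutationObserver(() => {
  if (tooltip.hidden && leftAt > 0) {
    console.log(`Waited ${(performance.now() - leftAt).toFixed(0)} ms for the popup to clear`);
    leftAt = 0;
  }
}).observe(tooltip, { attributes: true, attributeFilter: ["hidden"] });
```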
How to Measure Performance
Performance is inversely proportional to the time the user spends waiting, so it is the user's wait time, not the application's, that should be measured.
If the keyboard and mouse are silent, computer-based measurements may count inactivity as wait time when it is not. If the inactivity is due to a required non-computer activity (e.g., reading, thinking, planning), then it should count as productive time, not wait time.
If the inactivity is due to a user’s decision to go do something else (e.g., do some other job, take a break), then it should not count as wait time or productive time (for this application). However, any of these things should count as wait time if the user does them merely to pass the time while waiting for the computer to do its thing.
Semi-productive times should not be scored as heavily as the thumb-twiddling times. If the user has something productive to do while he is waiting, that’s still a performance hit, but it’s not as bad as waiting and having nothing to do. User multi-tasking, even when it is not within the developer’s control, can affect the perception of performance. And perception is reality.
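One way to encode these distinctions is to classify each interval of apparent inactivity before summing. A sketch of that bookkeeping, with categories mirroring the ones above; the discount applied to semi-productive waiting is an illustrative assumption, since the text says only that it should count less:

```typescript
// Categories of time from the user's point of view, as described above.
type IntervalKind =
  | "waiting"         // idle only because the computer is busy
  | "semiProductive"  // waiting, but with something useful to do meanwhile
  | "productive"      // required non-computer work: reading, thinking, planning
  | "elsewhere";      // user chose to do another job or take a break

interface Interval {
  kind: IntervalKind;
  seconds: number;
}

// Sum wait time and productive time. "elsewhere" counts as neither, and
// semi-productive waiting is discounted; the 0.5 weight is an assumption.
function tally(intervals: Interval[], semiWeight = 0.5): { wait: number; productive: number } {
  let wait = 0;
  let productive = 0;
  for (const iv of intervals) {
    if (iv.kind === "waiting") wait += iv.seconds;
    else if (iv.kind === "semiProductive") wait += semiWeight * iv.seconds;
    else if (iv.kind === "productive") productive += iv.seconds;
    // "elsewhere" is deliberately excluded from both totals.
  }
  return { wait, productive };
}
```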
Let’s consider an example of that last point. If an automated task will take about half an hour, but requires the user to interact in some trivial way (e.g., click a button to move to the next step or make a choice that could have been made up front) every minute or two, the user will likely consider that to be poor performance. However, if he knows that he has a free half-hour because all the questions are answered and all the choices are made up front, he can switch to some other job and be productive elsewhere. Please note both points: He must be free for some amount of time, and he must know that he is free for that amount of time. [Please note: This is not the best option. It is merely a better option. The best option would be to spin that task off into a concurrent background process and let the user continue to work with the application.]
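On the web, the usual way to spin such a task off is a background thread such as a Web Worker, so the long job runs concurrently while the user keeps working in the application. A minimal sketch; the worker file name, the message shape, and the choice-gathering function are assumptions for illustration:

```typescript
// main.ts -- hand the long-running job to a background thread so the user
// can keep working. "long-task.js" and the message fields are hypothetical.

// Gather every choice up front so the task never has to stop and ask.
function collectAllChoicesUpFront(): Record<string, unknown> {
  return { format: "pdf", notifyWhenDone: true }; // illustrative values
}

const worker = new Worker("long-task.js");
worker.postMessage({ choices: collectAllChoicesUpFront() });

worker.onmessage = (event: MessageEvent) => {
  // The user has been free to work in the application this whole time.
  console.log("Background task finished:", event.data);
};
```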
Monitoring and analyzing user experience is the best way to measure a web page's performance. Performance should be measured as P ÷ (P + W), where P is the total productive time and W is the total wait time. Special attention should be paid to the individual wait times that make up the total; the large ones should be examined closely.
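Putting the pieces together, here is a sketch of the score itself, built on the Interval and tally sketch above. The five-second threshold for a "large" wait is an illustrative assumption, not a standard:

```typescript
// Performance score P / (P + W) from the tallied times, plus a report of
// the individual waits worth a closer look.
function performanceScore(intervals: Interval[], largeWaitSecs = 5): number {
  const { wait, productive } = tally(intervals);

  for (const iv of intervals) {
    if (iv.kind === "waiting" && iv.seconds >= largeWaitSecs) {
      console.log(`Large wait: ${iv.seconds} s -- examine closely`);
    }
  }

  const total = productive + wait;
  // 1.0 means the user never waited; values near 0 mean he did little else.
  return total > 0 ? productive / total : 1;
}

// Example: 50 s of productive work and one 10 s wait -> 50 / 60, about 0.83.
console.log(performanceScore([
  { kind: "productive", seconds: 50 },
  { kind: "waiting", seconds: 10 },
]));
```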