This video and post are based on a chapter from a new book on which I’m working. In the comments, please shoot as many holes in it as possible before it ends up in a printing press.
A text version of the video can be found below the video.
October 8, 2018 Update: This post originally referred to “problem scanning” as “status monitoring”; however, I’ve decided to rename “status monitoring” to “problem scanning” since this term seems to be easier for people to remember, is more descriptive, and sounds more distinct from “performance monitoring”. Sorry for any confusion, but there will likely be other changes like this as I gather feedback and refine the ideas in the book. The video below still references “status monitoring”, however.
I’ve seen a lot of dashboards that failed to meet users’ and organizations’ expectations. There are a variety of reasons why this happens and, in this post, I want to focus on one of the most common ones, which is that the people who designed the dashboard didn’t fully understand the distinction between problem scanning and performance monitoring. When that happens, the dashboards that they end up designing don’t fulfill either of these needs well.
Before listing the differences between problem scanning and performance monitoring and why those differences matter, though, I should probably clarify how I’m using these terms, since not everyone uses them in the same way. I find it useful to think about both of these as “modes” that users can be in. When users feel that they need data, they can be in either “problem scanning mode” or “performance monitoring mode” (or in one of several other user modes that I’ll discuss in other blog posts). Users regularly flip between these modes depending on what’s going on with their work.
Problem scanning mode
When a user is in “problem scanning mode,” they want the answer to one single question:
“Is there anything that I need to react to this minute (or this hour, day, week or month, depending on how often metrics are refreshed) and, if so, what?”
When a user is in “problem scanning mode,” the only thing that they’re trying to figure out is whether there’s anything new that they need to deal with right now and, if there isn’t, they’ll move on to something else. In this mode, users aren’t trying to initiate new projects, alter their priorities, plan for the future or do anything else that could be described as proactive. They’re only interested in knowing if there’s anything that they need to react to. A good analogy for problem scanning mode is a person driving a car down a highway. While driving, the driver only needs to know if everything is O.K. or if something requires their immediate attention (running out of gas, going too fast, engine overheating, etc.). Some might also call this “tactical mode” or “operational mode.”
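To make this mode a little more concrete, here’s a minimal sketch in Python of what answering the problem scanning question boils down to. The metric names, values, and thresholds are hypothetical (they’re not from any real dashboard); the point is that the only output is the list of things that need a reaction right now.

```python
# Minimal sketch of "problem scanning": check the latest readings against
# "something is wrong" rules and surface only the items that need a reaction.
# All metric names, values, and thresholds below are hypothetical.

latest_readings = {
    "orders_per_hour": 140,
    "avg_page_load_seconds": 6.2,
    "support_queue_length": 30,
}

# (metric, direction, threshold) rules that define "this needs attention"
alert_rules = [
    ("orders_per_hour", "below", 200),
    ("avg_page_load_seconds", "above", 3.0),
    ("support_queue_length", "above", 50),
]

def needs_attention(readings, rules):
    """Return only the metrics that currently require a reaction."""
    problems = []
    for metric, direction, threshold in rules:
        value = readings.get(metric)
        if value is None:
            continue
        if (direction == "below" and value < threshold) or (
            direction == "above" and value > threshold
        ):
            problems.append((metric, value, direction, threshold))
    return problems

for metric, value, direction, threshold in needs_attention(latest_readings, alert_rules):
    print(f"{metric}: {value} is {direction} the threshold of {threshold}")
# If nothing prints, the answer to the problem scanning question is "everything is OK."
```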
Performance monitoring mode
When a user is in “performance monitoring mode,” on the other hand, they want the answers to a very different set of questions:
“What is the overall health of our organization (team, department, company, etc.)?”
“Are we doing better or worse than before?”
“Are we achieving our strategic goals?”
“Do we need to update our strategic goals or targets?”
In this mode, they want to assess how things are going in order to determine whether new projects need to be initiated, priorities need to be rearranged, resources need to be reallocated, etc. Typically, users are only in performance monitoring mode during review or planning sessions, i.e., when they’re assessing how successful they’ve been, making decisions about the future, or both. Performance monitoring mode is much more proactive than problem scanning mode. In our driving analogy, it would be equivalent to the times when the driver pulls over to check on trip progress, decide if they need to take another route, change their destination, etc. Some might call this being in “strategic mode” or “planning mode.”
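Purely as an illustration of the contrast, here’s a similar sketch for performance monitoring mode, using a handful of made-up, organization-level KPIs (none of these figures come from the post): instead of scanning for things that need an immediate reaction, each metric is compared to the prior period and to a strategic target.

```python
# Minimal sketch of "performance monitoring": a few organization-level KPIs,
# each compared to the previous period and to a strategic target.
# All KPI names, values, and targets below are hypothetical.

kpis = {
    # name: (previous_period, current_period, strategic_target)
    "annual_revenue_millions": (48.0, 52.5, 60.0),
    "customer_retention_rate": (0.81, 0.84, 0.90),
    "employee_engagement_score": (7.1, 6.8, 8.0),
}

for name, (previous, current, target) in kpis.items():
    trend = "better than" if current > previous else "worse than"
    progress = current / target  # fraction of the strategic target achieved
    print(f"{name}: {current} ({trend} last period, {progress:.0%} of target)")
```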
Neither of these user modes is better or worse than the other; both are important and necessary, and users will flip between them depending on what’s going on with their work. Based on these definitions, though, some readers may argue that an information display for problem scanning should be the same as one for performance monitoring, just with more disaggregated (i.e., more granular) versions of the same metrics needed for performance monitoring. I think that the differences between problem scanning and performance monitoring go far beyond that, though, and I’ve summarized those differences in the table below. After reviewing this table, it may be clearer why those who try to create a single dashboard to support both of these needs often end up with a display that doesn’t meet either need well (but pipe up in the comments if you’re still not on board…).
Differences between problem scanning displays and performance monitoring displays
| | Problem scanning displays | Performance monitoring displays |
|---|---|---|
| Target user(s) (the user or set of users for whom the display should be designed) | Role-specific. Different roles (i.e., sets of employees with the same job description) need to respond to different problems that may arise in the organization, so each role needs to see a specific set of information presented in a specific way in order to easily and reliably spot those problems. One problem scanning display is, therefore, needed for each role. If we try to design a single problem scanning display for use by multiple roles (e.g., a dashboard for use by the entire executive team), it will contain a large number of metrics that aren't relevant to any single role, which will impair the display's ability to quickly and effectively answer the basic problem scanning question. | Organization-specific. One of the common goals of performance monitoring is to align everyone in an organization (a team, department, agency, etc.) so that they're all working toward the same strategic objectives and have the same definition of success. This means that a single performance monitoring display can and should be used by the entire organization (i.e., by multiple roles). |
| Target roles (the types of roles that will use each type of display) | Operational roles only. Problem scanning displays are needed by roles that are responsible for the day-to-day operations of the organization. They are too detailed and generally not well suited to purely planning or strategic roles such as board members, advisors, etc. | Operational and non-operational roles. Performance monitoring displays are also used by those with day-to-day responsibilities to monitor the performance of their group (team, department, company, etc.), although they'll use them much less frequently than problem scanning displays (see "Review frequency" below). Performance monitoring displays are also useful to strategic roles such as board members, strategic advisors, investors and other non-operational roles that set high-level goals and make strategic decisions. |
| Review frequency (how often users will need the information on each type of display) | Frequent. Because users don't know when problems are going to occur, they need to know the answer to the basic problem scanning question ("Is everything OK?") continuously. Depending on how often metrics are refreshed, this may be every minute, hour, day, week, or month. | Infrequent. Performance monitoring can't and shouldn't be done every minute, hour, day, or week, and possibly shouldn't even be done monthly. Answers to questions such as "How are we tracking toward our strategic goals?" and "Is our organization improving or getting worse?" are usually only needed during planning or review meetings and other monthly, quarterly, or even annual events. |
| Number of metrics that need to be monitored | Many. In a modern organization that uses software to monitor many aspects of its operations, users are expected to notice and respond to a very wide variety of problems, which usually means that hundreds, thousands, or even hundreds of thousands of metrics could require immediate action. For example, a Vice-President of Retail Sales for a chain of stores may need to know if any of a dozen metrics go south for the chain's 100 largest stores, which means monitoring 1,200 metrics (or, more precisely, instances of metrics), in addition to the many other metrics that could require her attention (see the small arithmetic sketch after this table). | Few. I agree with the performance measurement experts who assert that the overall performance of an organization can be captured in a few dozen carefully chosen metrics, so that's all that's needed for performance monitoring. Showing thousands of metrics to users who are in performance monitoring mode makes it much harder for them to assess overall performance. |
| Metric selection criteria (the criteria used to determine whether a given metric belongs on a display) | Personally actionable. The main criterion is whether the metric could signal a problem that the display's target role would need to react to. It's not necessary to explicitly connect problem scanning metrics to higher-level goals, although those goals can and should influence which metrics are included on each role's problem scanning display. | Meaningfully indicative of group performance. The main criteria that should be used to determine whether a given metric belongs on a performance monitoring display are, "Is this metric a meaningful indicator of the overall performance of the group?" and "Is this metric a meaningful indicator of progress toward the group's strategic goals?" |
| Effectiveness evaluation criteria (the criteria used to evaluate how well or poorly a given display is serving users and the organization) | Problem scanning question. When evaluating the effectiveness of a problem scanning display, the main criterion is how quickly and accurately it enables users to answer the basic problem scanning question. Considerations such as the thoughtfulness of the display's layout or its use of color are still important, but only to the extent that they make it quicker and easier to answer that question accurately. | Performance monitoring questions. The effectiveness of a performance monitoring display should be evaluated based on how well it answers the various performance monitoring questions listed above, which is a very different set of questions from the basic problem scanning question. |
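To illustrate the “Number of metrics” row above, here’s the arithmetic behind the hypothetical retail example: a dozen metrics scanned across the chain’s 100 largest stores already produces 1,200 metric instances, before counting anything else that could demand the VP’s attention.

```python
# The hypothetical retail example from the table: a dozen metrics watched
# across the chain's 100 largest stores.
metrics_per_store = 12   # e.g., daily sales, inventory shrinkage, staffing gaps, ...
stores_to_watch = 100

print(metrics_per_store * stores_to_watch)  # 1200 metric instances to scan for problems

# By contrast, a performance monitoring display might need only a few dozen
# carefully chosen, organization-level KPIs.
```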
Hopefully, this table makes it clear why trying to support these two very different needs with a single display forces many painful design compromises and results in a display that doesn’t meet either need well. Trying to design a display that does both is like trying to design a car dashboard that provides information that users need while driving AND while reviewing trip progress, planning routes and choosing destinations. Unfortunately, this type of all-in-one, accident-waiting-to-happen display is exactly what I see in many organizations.
So, the next time that you’re asked for a dashboard, try to figure out what data-related need prompted users to ask for it in the first place. Are they asking for answers to the problem scanning question, the performance monitoring questions, or both? If both, consider making your and your users’ lives easier by creating two displays, instead of trying to meet both needs in one. If it sounds like some other need entirely, such as looking up specific values or diagnosing problems, stay tuned for future blog posts in which I’ll discuss other types of displays to address those needs, as well.