Performance targets aren’t the same as alert thresholds (book excerpt)

tl;dr: This excerpt from my upcoming book, Practical Dashboards, is the second in an eight-part series of posts on how to determine which metrics to visually flag on a dashboard (e.g., with alert dots or different-colored text) in order to draw attention to metrics that require it. Deciding when to flag metrics on a dashboard can be a messy process within organizations because people often disagree about what should be considered “good” or “bad” ranges for a given metric. There’s another, less obvious cause of controversy in such discussions, though: people often talk about two very different types of flagging criteria without realizing it, namely, criteria for indicating when action is required and criteria for indicating whether a metric is performing well. While these might sound similar, their fundamental purposes and the ways that we go about setting them are very different.

As I wrote in a recent post, understanding the differences between problem-scanning displays (which I used to call “status-monitoring displays”) and performance-monitoring displays is essential to designing effective dashboards. In this post, I’ll discuss how the criteria that we use to determine which metrics get flagged on each type of display are also quite different. In fact, I give the flagging criteria for the two types of display different names: “alert thresholds” for problem-scanning displays and “performance targets” for performance-monitoring displays.

Why do I draw this distinction? Hopefully, the list of differences below will make it obvious why any discussion about determining when to visually flag metrics on a dashboard will be fraught with misunderstandings unless everyone agrees at the outset whether they’re trying to set alert thresholds for problem-scanning displays or performance targets for performance-monitoring displays:

1. They serve different purposes.

Performance targets: The main purpose of performance targets is to communicate where management would like the organization to be; they’re goals that are based on desire (although that desire is, ideally, tempered with a dose of realism and some good statistical analysis).

Alert thresholds: The main purpose of alert thresholds, on the other hand, is to trigger visual alerts for metrics that require attention on problem-scanning displays. Alert thresholds aren’t about desire or where we want the organization to be; they’re about the point at which we’d take action, which isn’t the same thing. Alert thresholds aren’t targets or goals; they’re thresholds, a kind of virtual tripwire. Yes, some alert thresholds may be influenced by performance targets for related metrics, but they’re not the same thing since they serve different purposes.
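To make the distinction concrete, here’s a minimal sketch of what a single metric’s configuration might look like if it appeared on both types of display. The metric name and the numbers are hypothetical, chosen only to show that the performance target (where we’d like to be) and the alert threshold (where we’d actually act) are two separate values serving two separate purposes.

```python
# Hypothetical configuration for one metric; the name and numbers are made up
# purely to illustrate that a target and an alert threshold are different values.
metric_config = {
    "name": "avg_ticket_resolution_hours",
    # Performance target: where management would like this metric to be
    # (used on the performance-monitoring display).
    "performance_target": 8.0,
    # Alert threshold: the "tripwire" at which someone would actually take action
    # (used on the problem-scanning display). Note that it's a different number.
    "alert_threshold": 24.0,
}

def needs_action(current_value: float, config: dict) -> bool:
    """True if the metric has crossed its alert threshold (lower is better here)."""
    return current_value > config["alert_threshold"]

def meets_target(current_value: float, config: dict) -> bool:
    """True if the metric is at or better than its performance target."""
    return current_value <= config["performance_target"]
```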

2. We ask different questions when setting them.

Performance targets: When setting performance targets for a given metric, we’re asking ourselves how well the organization could realistically perform, so the focus is on the future.

Alert thresholds: When setting alert thresholds, though, we’re asking ourselves at what level we’d need to take some kind of action in response to a given metric, which is a different question that will usually have a different answer. The focus isn’t on the future since the main consideration when setting alert thresholds is usually the past behavior of the metric in question.

3. They require different levels of consensus.

Performance targets: Performance targets should be agreed upon by the entire team, group, or organization since they’re a reflection of the high-level strategic goals toward which everyone should be working. If people don’t agree on performance targets, it may mean that they have different ideas of what the organization is trying to achieve and what “success” looks like, which would be a major organizational problem.

Alert thresholds: Alert thresholds, on the other hand, don’t need to be the same for the entire organization or team. In fact, they can and should be role-specific, or even specific to a particular person. Very often, the threshold at which one person would need to respond to a metric is different from the threshold at which another person would need to respond to that same metric, since those two people may, for example, have different levels of responsibility.

4. They have different “neutral ranges.”

Alert thresholds: In order to be useful, alert thresholds must have a wide “neutral range,” i.e., the range in which the metric doesn’t get visually flagged on a dashboard. If the neutral ranges of alert thresholds on problem-scanning displays are too narrow, we end up with the dreaded “Christmas tree syndrome,” where our display is littered with red and green visual flags even on a normal day, making it impossible to spot the real problems and opportunities that actually require attention.

Performance targets: Performance targets, on the other hand, can have narrower neutral ranges. If the organization has gone through a proper KPI selection process, there should be no more than about 30 metrics on a performance-monitoring display, so Christmas tree syndrome is less of a concern. Moreover, a performance-monitoring display should indicate when metrics are even moderately above or below where management would like them to be.
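As a rough illustration of the neutral-range idea, here’s a small sketch of flagging logic in which nothing gets flagged while a metric stays inside its neutral range. The range boundaries and readings are hypothetical; the point is only that a wide neutral range keeps a problem-scanning display quiet on a normal day, while a narrow one produces Christmas tree syndrome.

```python
# Hypothetical flagging logic for a problem-scanning display: a metric is only
# flagged when it falls outside its neutral range.
def flag(value: float, neutral_low: float, neutral_high: float) -> str | None:
    """Return 'low', 'high', or None (no flag) for a single metric value."""
    if value < neutral_low:
        return "low"
    if value > neutral_high:
        return "high"
    return None  # inside the neutral range: nothing to draw attention to

readings = [98, 103, 101, 97, 142, 99]  # made-up values from a "normal" day

# Wide neutral range: only the genuinely unusual reading gets flagged.
wide = [flag(v, 80, 120) for v in readings]
# -> [None, None, None, None, 'high', None]

# Narrow neutral range: even normal readings get flagged ("Christmas tree syndrome").
narrow = [flag(v, 98, 102) for v in readings]
# -> [None, 'high', None, 'low', 'high', None]
```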

5. Only one type can be automated.

Performance targets: Setting performance targets should be a careful and thoughtful process during which the management team discusses each metric in detail and agrees on realistic expectations for that metric. While this is a manual process, it’s manageable since there should never be more than about 30 performance-monitoring metrics to set targets for if the organization has gone through a proper KPI selection process.

Alert thresholds: On problem-scanning displays, though, there will usually be hundreds, thousands, or even millions of metric instances that could indicate a problem requiring attention, so manually setting alert thresholds for every one of them isn’t feasible. In the final installment of this blog post series, I’ll discuss how to use simple statistics to automatically set useful default alert thresholds for problem-scanning displays.
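To give a sense of what “automatically” could mean in practice, here’s a minimal sketch of one common statistical approach: deriving default thresholds for each metric instance from its own recent history using percentiles. This is purely illustrative and not necessarily the method I’ll cover in Part 8; the function name and percentile choices are assumptions made for the example.

```python
# Illustrative only: derive default alert thresholds for a metric instance from
# its past values using percentiles. One common approach, not necessarily the
# method covered in Part 8 of this series.
import statistics

def default_thresholds(history: list[float],
                       low_pct: int = 5,
                       high_pct: int = 95) -> tuple[float, float]:
    """Return (low, high) default alert thresholds from a metric's past values."""
    # statistics.quantiles with n=100 returns the 1st through 99th percentiles.
    percentiles = statistics.quantiles(history, n=100)
    return percentiles[low_pct - 1], percentiles[high_pct - 1]

# Because this is cheap to compute, it can be applied to thousands or millions
# of metric instances without anyone having to set thresholds by hand.
history = [102, 98, 101, 97, 105, 99, 103, 100, 96, 104]  # made-up past values
low, high = default_thresholds(history)
```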

For the remainder of this blog post series, I’m going to focus on alert thresholds, since setting performance targets falls more within the realm of strategic planning and performance management than reporting systems and is therefore outside the scope of Practical Dashboards. Plus, there are already plenty of great books that deal with setting performance targets, but few that discuss how to set alert thresholds effectively.

In Parts 3, 4, 5, and 6 of this series, I’ll look at the three most common types of alert thresholds that I see on dashboards and explain why each has serious drawbacks and limitations. Part 7 will introduce the four-value thresholds that I recommend in my consulting engagements, and Part 8 will introduce some simple statistical methods that can be used to set alert thresholds automatically for thousands or even millions of metric instances.

As always, feedback—positive or negative—is welcome and appreciated in the comments below.

To be notified of future posts like this, subscribe to the Practical Reporting email list.