tl;dr: This excerpt from my upcoming book, Practical Dashboards, is the third in an eight-part series on how to determine which metrics to visually flag on a dashboard (e.g., with alert dots or different-colored text) in order to draw attention to metrics that require it. In this post, I look at the “% change from previous period” method of flagging dashboard metrics and explain why, despite being extremely common, it could be worse than useless. In a later post in this series, I’ll introduce a more useful approach called “four-threshold” visual flags.
Probably the most common way to visually flag metrics that require attention on a dashboard is the “% change from previous period” method, whereby each current value has a “vs. previous day” (or previous week, or previous month, etc.) flag next to it, usually expressed as a percentage change with an indicator of positive or negative change:
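For concreteness, the computation behind these flags is typically just the period-over-period percentage difference. Here’s a minimal sketch in Python (the function name and metric values are hypothetical, and a real implementation would also need to guard against a previous value of zero):

```python
def pct_change_flag(current, previous):
    """Return a '% change vs. previous period' flag string, e.g., '+8.0%'."""
    change = (current - previous) / previous * 100
    return f"{change:+.1f}%"

# Hypothetical daily sales: 10,000 yesterday vs. 8,600 today
print(pct_change_flag(current=8600, previous=10000))  # -14.0%
print(pct_change_flag(current=540, previous=500))     # +8.0%
```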
% change flags are appealing because they’re easy to implement and they don’t require performance targets or alert thresholds to be set for each metric. They also look useful, i.e., they appear to indicate which metrics are doing well (green/positive flags) and which require attention (red/negative flags). I’m not the first to point out, however, that this usefulness is largely illusory, and that % change flags have major drawbacks:
They generate a lot of false alarms. A change of -14% in today’s sales vs. yesterday might mean that we’re getting killed by that new competitor or that our e-commerce site is crashing, i.e., that it’s time to panic. Or, it might mean that yesterday’s sales were unusually high and we’ve simply returned to normal sales levels today, i.e., that everything’s fine. I’ve seen a lot of completely unnecessary panics result from these types of false alarms.
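To see how easily such false alarms arise, here’s a quick simulation with made-up numbers: a metric that merely fluctuates randomly around a stable average still produces plenty of scary-looking day-over-day swings.

```python
import random

random.seed(1)
# Hypothetical metric: fluctuates randomly around a stable average of 1,000
daily_values = [random.gauss(1000, 80) for _ in range(30)]

# Day-over-day % changes, as a "% change from previous period" flag would show
changes = [(curr - prev) / prev * 100
           for prev, curr in zip(daily_values, daily_values[1:])]

alarms = sum(1 for c in changes if abs(c) >= 10)
print(f"{alarms} of {len(changes)} days show a change of 10% or more, "
      "even though the underlying level never changed.")
```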
They don’t take targets into account. Everyone might be pleased to see that the number of new customers is up 8% this month vs. last month. What no one may realize, however, is that the customer acquisition rate is still well below where it needs to be in order to meet our growth targets, so the current number is actually a problem that needs to be solved, not the good news that the +8% flag on the dashboard would suggest.
They don’t reliably draw attention to metrics that require it. A change of -2.1% in employee satisfaction vs. last week may be a minor concern that requires no action, but the same -2.1% change in website uptime would be an all-hands-on-deck crisis, so a small % change doesn’t necessarily mean that a metric requires no attention. Similarly, a large % change doesn’t necessarily indicate that a metric requires attention, as is often the case for a metric that normally bounces around a lot from day to day. Given that reliably drawing attention to metrics that need it is one of the main reasons for having visual flags on dashboards in the first place, this is a pretty serious limitation.
They produce “Christmas tree syndrome.” Since current values are almost always at least a little higher or lower than the previous period’s values, every metric gets a “vs. previous period” flag beside it, creating a visually overwhelming wall of red and green indicators, even when everything is actually fine. Sometimes, dashboard creators will try to mitigate this by setting “don’t flag” ranges whereby metrics aren’t flagged if the “vs. previous period” value is, say, between -2% and +2%. While this cuts down on the number of flags on a dashboard, it doesn’t address any of the other limitations in this list, and it introduces new problems of its own. For example, for some metrics, a change of -1.5% might actually be a big problem, and a dashboard-wide “don’t flag” range would hide it.
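In code, this mitigation usually amounts to something like the following sketch (the ±2% dead band and the uptime figures are hypothetical). Because the band is the same for every metric, it hides exactly the kind of change just described:

```python
def flag_with_dead_band(current, previous, band_pct=2.0):
    """Show no flag when the % change falls inside a +/- band_pct dead band."""
    change = (current - previous) / previous * 100
    if abs(change) < band_pct:
        return None  # metric is left unflagged
    return f"{change:+.1f}%"

# A -1.5% drop in website uptime is a serious problem, but it goes unflagged
print(flag_with_dead_band(current=98.5, previous=100.0))  # None
```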
They’re mostly “noise.” Consider the following seven-day sequence of “orders processed”:
On any given day in this sequence, what does the “vs. previous day” value on that day actually tell users? Can they see when this metric requires attention, as opposed to when it’s simply experiencing normal day-to-day fluctuations that require no action? Or even whether the metric is generally trending up or down over time? The technical term for this type of non-information is “noise.”
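To make this concrete with made-up numbers (any metric with normal day-to-day variation behaves this way), here’s what the “vs. previous day” flags for such a sequence might look like:

```python
# Hypothetical "orders processed" counts for seven consecutive days
orders = [412, 389, 441, 397, 430, 402, 418]

for prev, curr in zip(orders, orders[1:]):
    change = (curr - prev) / prev * 100
    print(f"{prev} -> {curr}: {change:+.1f}%")
# The flags alternate between red and green with varying magnitudes,
# telling users nothing about trend or the need for action.
```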
So, to summarize…
A big % change may or may not require attention.
A small % change may or may not require attention.
A green % change may or may not indicate a metric that’s doing well.
A red % change may or may not indicate a metric that’s doing poorly.
A positive % change may or may not indicate a metric that’s generally trending upward.
A negative % change may or may not indicate a metric that’s generally trending downward.
So, are “% change from previous period” flags really worse than useless? After reading this, you can probably guess my answer…
The next three posts in this series will list the drawbacks of the three other types of visual flags that I commonly see on dashboards: Single-threshold flags, % Deviation from Target flags, and Good/Satisfactory/Poor ranges. I’ll then introduce the four-threshold flags that I now recommend since this type of visual flag doesn’t have any of the drawbacks or limitations that I list for the four common types. I'll conclude the series with a post on useful statistics for setting visual flag thresholds automatically.
To be notified of future posts like this, subscribe to the Practical Reporting email list.