Maintaining dashboards and analytics is hard and only gets harder as you further unlock the world of self-serve analytics. This makes sense. More stuff being created means more stuff that can break or slip through the cracks. This post explores the “dashboard maintenance” problem more in-depth and lays out one potential way to address it.
Things Fall Apart
As I mentioned before, metrics and visualizations inevitably break over time. Maybe you released some product changes that affect what things mean. Maybe you made a transformation-layer change to the underlying data. Maybe there’s an issue with your event tracking setup. There are a number of things that can go wrong at any moment. This is a problem, but not the problem.
The real problem isn’t that things break. It’s that we do a poor job of effectively surfacing the things that break, and prioritizing which of them are important enough to fix.
By the time your organization hits 50 people, and often well before that, it isn’t scalable for your analytics team to moderate the entire analytics workspace alone. You need your analytics end-users to help moderate. This means Product Managers, Marketers, Designers, and the rest of the bunch. In practice, some refer to this as “bottoms-up content moderation.” The “bottoms-up” part is the important aspect to understand, and the harder part to get right. So what might it look like? Let’s imagine a better workflow:
- Jess from Design sees a line chart in which the total drops to zero out of nowhere. That’s weird. Right?
- She signals to Analytics that something is going on here. This could be done informally through Slack or another messenger. Even better if it happens within the analytics platform through comments or a flagging feature.
- This continues to happen over time. Sarah from Product does the same on another chart. James from Sales follows suit and flags another chart. And so on. Let the bottoms-up feedback flow and flow frequently.
- There’s a catch: Analytics all of a sudden has a giant backlog of “bugs” or “weirdness” to evaluate and potentially refactor. They are also trying to make progress on big projects X and Y, but this intimidating backlog continues to grow and get in the way. This isn’t fun for any analyst, but there’s an age-old tool to make it a bit nicer: prioritization.
- Saying that Analytics should prioritize isn’t a solution by itself. Sure, it's possible, but it's a hassle and another barrier to fixing things. It would be better if things were prioritized automatically. This is where flagging is quite nice. If your analytics platform knows that something is flagged “This looks wrong” and it also knows which dashboards are used most often, then you could imagine a pretty straightforward path to a stack-ordered list of "things to look at" for analysts. Isn’t that nicer?
- Now Analytics can confidently fix only the things that matter most by leaning on an automatically generated priority ranking, plus some common sense of course. They check out the charts in question and push changes. Things are humming again in Chart A and Chart B. Now it’s time to close the loop. They can manually ping the people who flagged a chart in their messenger of choice, or better yet, they can mark the chart as fixed in the analytics platform, which pings the concerned parties with any additional context included.
- Keeping things within the analytics platform has one big benefit: automated documentation. Being able to see when something broke, why it broke, and what the changes looked like is kind of a superpower for analytics teams. And you don’t have to go out of your way to achieve that here. It just happens organically. Which is great.
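The auto-prioritization step above can be sketched in a few lines. Everything here is hypothetical: the `Flag` shape, the view counts, and the scoring rule (distinct flaggers times dashboard views) are assumptions for illustration, not how any particular analytics platform actually works:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Flag:
    """One 'this looks wrong' report from an end-user (hypothetical shape)."""
    chart_id: str
    flagged_by: str
    reason: str


def prioritize(flags: list[Flag], views_per_chart: dict[str, int]) -> list[tuple[str, int]]:
    """Return flagged charts ranked by (distinct flaggers x chart views), highest first.

    The scoring rule is an assumption: more people complaining about a
    more-viewed chart bubbles to the top of the analysts' queue.
    """
    flaggers_by_chart: dict[str, set[str]] = {}
    for flag in flags:
        flaggers_by_chart.setdefault(flag.chart_id, set()).add(flag.flagged_by)

    scored = [
        (chart, len(people) * views_per_chart.get(chart, 0))
        for chart, people in flaggers_by_chart.items()
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


# Example data mirroring the workflow above (names and numbers made up).
flags = [
    Flag("chart_a", "jess", "total drops to zero"),
    Flag("chart_a", "sarah", "looks wrong"),
    Flag("chart_b", "james", "stale data"),
]
views = {"chart_a": 120, "chart_b": 900}

ranking = prioritize(flags, views)
# chart_b outranks chart_a: one flag on a heavily used dashboard beats
# two flags on a rarely viewed one under this particular weighting.
```

A real platform would likely weight recency, flag severity, and dashboard ownership too; the point is only that once flags and usage live in the same system, producing the stack-ordered list is trivial.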
So that’s the gist of this proposal. Take the scattered and failure-prone workflow that exists today for moderating analytics content and tighten it up. Bring it within the analytics platform itself. Keep all parties in the loop. Gather explicit data and automate documentation. Maybe this would work, maybe it wouldn't. But there's only one way to find out: Build the thing and see what happens.