With most of the dashboards I have built over the last couple of years, my focus has been on empowering decision makers by letting them not just see the results but also explore the data.
Dimension hierarchies are key to this, of course, and as I go through the process of recreating and updating many of these dashboards, it has become clear that I add a time dimension (days, weeks, months, quarters and years) almost habitually.
Other dimensions are chosen that are specific to the application, but the time dimension is always the same: days, weeks, months, quarters, years. So the question is, how does this actually help the user explore? In fact, take a step back from that. Just how much data exploration actually happens? How often do decision makers open a dashboard and start clicking on things, looking for smoking guns without first deciding what the crime is?
It turns out that in our organisation, data exploration happens rarely among the majority of users, but very regularly with top management, usually in scheduled sessions that I am involved in. It was during one of these sessions that I began to wonder whether I had been misusing the time dimension. We were looking at a data validation dashboard, which tells us the quality of the datasets that all the other dashboards use; in other words, how much we can trust our conclusions. The question came up, and it sounded innocent enough at first: how is the data quality looking for this month's entries?
We drilled into the figures for the month. They looked good, much better than the overall results. Everyone, this month, was taking care to record their activity properly. That was not surprising, as there were incentives in place to improve data entry habits, and the incentives were quite clearly working. Then we drilled back to the data quality numbers for the full dataset. They looked exactly the same as they had two minutes before.

It struck me then that, while the monthly result was important for gauging current habits, it had only a minor influence on the result for the full dataset. Since we used the full dataset to drive all reporting, and particularly predictive analysis, it was the result for the full dataset that mattered. The current month's data could be 100% accurate and it would be of little consequence if the overall accuracy of the data being used in reporting was, say, 50%. Focussing on the current month would have distracted us from the true objective of the data validation dashboard.

It made me realise that, when applying a time dimension, it is important to choose a level of granularity that is relevant to the measures the dimension will be paired with, and to ask whether a time dimension is even relevant to the measure at all, or whether it is a distraction.
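The arithmetic behind that moment is worth spelling out: overall accuracy is a record-weighted average, so a small, perfect month barely moves a large, mediocre full dataset. A minimal sketch (all figures here are made up for illustration, not our actual numbers):

```python
# Hypothetical figures: a large historical dataset at 50% accuracy
# and one perfect current month that is small by comparison.
history_records = 24_000   # assumed historical record count
history_accuracy = 0.50    # assumed historical accuracy
month_records = 1_000      # assumed current-month record count
month_accuracy = 1.00      # even a perfect month...

# Overall accuracy is the record-weighted average of the two.
overall = (history_records * history_accuracy
           + month_records * month_accuracy) / (history_records + month_records)

print(f"{overall:.1%}")  # prints "52.0%" - a perfect month lifts 50% only to 52%
```

The weighting is the whole point: until the well-recorded months outnumber the poorly recorded ones, the full-dataset figure, the one the reporting actually runs on, hardly moves.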
So when would I use a time dimension?
1) To show a trend. Why? Because a trend is either anticipated (the sales team are targeted on growth) or a warning (uh oh, sales are dropping).
2) To show the influence of a change. Let’s say that we sign on to a specialised job board because we need to find better quality CVs relevant to a particular discipline. The expected influence of the change would be more CVs being sent to hiring managers for roles in that discipline. However, the time periods that are important here aren’t weeks, months or quarters, but Before and After. We don’t need to see a line that rollercoasters around the 30 mark month on month then climbs up after the contract signing date to start see-sawing around the 50 mark. What we really need to see is Before: 30 per month average; After: 50 per month average. Just two data points that will tell a clearer, more concise and less ambiguous story about the effect of the new board.
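Collapsing the monthly series into those two data points is a one-liner's worth of aggregation. A sketch with hypothetical monthly CV counts (the figures and the signing month are assumptions for illustration):

```python
from statistics import mean

# Hypothetical monthly CV counts; the contract is signed at the start of month 7.
monthly_cvs = [28, 32, 31, 29, 30, 30,   # before the new job board
               48, 52, 49, 51, 50, 50]   # after the new job board
signing_index = 6  # assumed position of the signing date in the series

# Two data points instead of twelve: the average before and the average after.
before_avg = mean(monthly_cvs[:signing_index])
after_avg = mean(monthly_cvs[signing_index:])

print(f"Before: {before_avg:.0f} per month; After: {after_avg:.0f} per month")
# prints "Before: 30 per month; After: 50 per month"
```

The see-sawing within each period is noise as far as this question is concerned; the Before/After pair is the entire answer.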
Apart from those two scenarios, I am now cutting time dimensions out of any dashboard that cannot answer this very specific question: what business-relevant fact do I learn by slicing this measure up into time groups?
This approach can be applied to all measures and dimensions. It's all very well having an all-singing, all-dancing dashboard that can dissect the lunchtime bill down to the last byte and has more flexibility than Play-Doh, but if the report doesn't have a particular focus, it won't be easy to extract answers relevant to the business decisions you need to make. Indeed, it is more likely to distract you from those decisions.
I was, for a year or so, part of an innovation lab: a group of people who were quite good at coming up with new ideas or just generally being inventive. When you put us in a room and simply told us to innovate, we could dream up all sorts of weird and wonderful schemes to achieve a random assortment of objectives. Few of those ideas escaped the meeting room, and I don't think any ever made it beyond the planning stage. On the other hand, when you gathered us in a room and told us you had a particular problem that needed solving, giving us a target to focus our efforts on, then we really could, and did, change things that mattered for the better. BI dashboards are a lot like that. A dashboard that can do anything but has no particular question to answer will rarely provide any useful answers. A dashboard that is purpose-built to answer a specific question will nearly always succeed.