You've just spent days working on your presentation to senior management. You have collected your data from various sources, loaded it into your analysis application, run your statistics, and generated your graphs. Those graphs have been carefully cut and pasted into your report and inserted into your visual presentation. One last thing to check, though: is that report accurate?
Few events can stop a meeting's momentum like being challenged on your reported numbers mid-slide. If a single number is questioned and you do not have an adequate response, every number and conclusion comes into question, effectively derailing your discussion. And because you never know which number is going to be challenged, you are essentially at the mercy of the processes that provided the data used in your report.
To prevent the question of accuracy from scuttling the discussion and to get the conversation back on track, two basic issues must be addressed. The first is establishing a frame of reference for a working definition of “accuracy,” which is critical to establishing a baseline for data usability. The second is providing metrics for accuracy that can be used to measure the data and compare it against agreed-upon levels of expectation for accuracy.
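As a minimal sketch of the second issue, an accuracy metric can be as simple as the fraction of reported values that match an authoritative reference source, compared against an agreed threshold. The field names, sample figures, and the 95% threshold below are illustrative assumptions, not values prescribed by any particular methodology.

```python
def accuracy_ratio(records, reference):
    """Fraction of (key, value) records whose value matches the reference source."""
    if not records:
        return 0.0
    matches = sum(1 for key, value in records if reference.get(key) == value)
    return matches / len(records)

# Hypothetical example: reported quarterly figures versus the system of record.
reported = [("Q1", 120), ("Q2", 135), ("Q3", 128), ("Q4", 140)]
system_of_record = {"Q1": 120, "Q2": 135, "Q3": 130, "Q4": 140}

score = accuracy_ratio(reported, system_of_record)
meets_expectation = score >= 0.95  # assumed agreed-upon accuracy threshold

print(f"accuracy = {score:.0%}, meets expectation: {meets_expectation}")
```

With a metric like this in hand, "is that report accurate?" becomes a measurable question: the report is 75% consistent with the system of record, which falls short of the assumed 95% expectation, and that gap can be discussed on its own terms rather than derailing the meeting.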
The definition and the metrics for accuracy are the starting point for a lucid response to any challenge to the validity of the report. Having both at your fingertips goes a long way toward answering challenges quickly and meaningfully, and in our next set of blog posts we will look at establishing the context for measurable data accuracy.