Over the past several posts we have looked at framing the definition of accuracy as consistency with an agreed-to system of record, qualified by defined business rules. The remaining question is ensuring that no point in any end-to-end information flow introduces data flaws that violate the quality rules (both the rules governing the system of record and the consistency rules for the data used for analysis).
Auditability of data accuracy relies on instituting inspection and monitoring across the critical paths of the information flows, often using defined rules implemented within a data profiling tool. In that case, any time a business process potentially modifies the system of record, a set of rules can be applied and a business rule compliance score generated. Practically, though, you might not apply the scoring process every time the data possibly changes; it may be more feasible to run it at a reasonable interval (such as once every eight hours). If the score meets the levels of acceptability for the business rules, the system of record can be considered trustworthy.
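As a minimal sketch of what such a compliance score might look like, the snippet below evaluates a set of business rules against a batch of records and computes the fraction of rule checks that pass. The rule names, record fields, and the 0.95 acceptability threshold are all hypothetical illustrations, not drawn from any particular profiling tool.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A record is modeled here as a plain dict of field values (an assumption
# for illustration; a real profiling tool works against actual tables).
Record = Dict[str, object]

@dataclass
class BusinessRule:
    """A named validity check applied to each record."""
    name: str
    check: Callable[[Record], bool]

def compliance_score(records: List[Record], rules: List[BusinessRule]) -> float:
    """Fraction of (record, rule) evaluations that pass."""
    if not records or not rules:
        return 1.0  # vacuously compliant
    passed = sum(rule.check(r) for r in records for rule in rules)
    return passed / (len(records) * len(rules))

# Illustrative rules for a hypothetical customer table:
rules = [
    BusinessRule("non_null_customer_id",
                 lambda r: r.get("customer_id") is not None),
    BusinessRule("non_negative_balance",
                 lambda r: isinstance(r.get("balance"), (int, float))
                           and r["balance"] >= 0),
]

records = [
    {"customer_id": 101, "balance": 250.0},   # passes both rules
    {"customer_id": None, "balance": -10.0},  # fails both rules
]

score = compliance_score(records, rules)          # 2 of 4 checks pass -> 0.5
ACCEPTABILITY_THRESHOLD = 0.95                    # agreed-to level of acceptability
trustworthy = score >= ACCEPTABILITY_THRESHOLD
```

Running the scoring on a schedule (say, every eight hours) rather than on every change simply means invoking `compliance_score` against the current state of the system of record at each interval.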
Similarly, prior to using data for analysis, its accuracy can be measured as consistency with the system of record. Again, if the measured score meets the level of acceptability, the analyzed data is deemed trustworthy. Most importantly, each run of the validation process adds to a historical track record of accuracy scores at different points in time. Any challenge to the accuracy of a report can be met (and easily dismissed) by pointing to that track record, which demonstrates that the data has consistently met the agreed-to expectations.
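The historical track record described above can be sketched as a simple timestamped log of scores that can be queried when an accuracy challenge arises. The class and method names here are hypothetical; the point is only that each validation run appends a (timestamp, score) pair, and the audit question becomes a lookup over that history.

```python
from datetime import datetime, timezone
from typing import List, Optional, Tuple

class AccuracyAudit:
    """Keeps a timestamped history of accuracy scores from validation runs."""

    def __init__(self, threshold: float):
        self.threshold = threshold            # agreed-to level of acceptability
        self.history: List[Tuple[datetime, float]] = []

    def record_run(self, score: float, when: Optional[datetime] = None) -> None:
        """Append one validation run's score to the track record."""
        self.history.append((when or datetime.now(timezone.utc), score))

    def track_record(self, since: datetime) -> List[Tuple[datetime, float]]:
        """All scores since a given time: the evidence for an audit."""
        return [(t, s) for (t, s) in self.history if t >= since]

    def always_acceptable(self, since: datetime) -> bool:
        """True if every run since the given time met the threshold."""
        runs = self.track_record(since)
        return bool(runs) and all(s >= self.threshold for (_, s) in runs)

# Usage: two scheduled runs, then answering a challenge about January.
audit = AccuracyAudit(threshold=0.95)
audit.record_run(0.97, datetime(2023, 1, 1, tzinfo=timezone.utc))
audit.record_run(0.99, datetime(2023, 1, 2, tzinfo=timezone.utc))
jan_ok = audit.always_acceptable(datetime(2023, 1, 1, tzinfo=timezone.utc))
```

Because the history is append-only and timestamped, a challenge to any past report reduces to showing the scores recorded around the time the report's data was validated.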