The fallacy of defect prevention

One of my least favorite phrases in the data quality industry is “getting data right the first time, every time.” It’s not that I disagree with the premise of defect prevention. Even though it’s impossible to truly prevent every defect before it happens, defect prevention is highly recommended, because the more control enforced where data originates, the better the overall quality of enterprise information will be.

However, a major problem with defect prevention is assuming that defects are easily detected and can always be known a priori, that is, without any experience using the data in a business context. In fact, many defects can only be detected a posteriori, through experience using the data in a specific business context, such as a data defect that causes a failure in a business process.
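To make that distinction concrete, here is a minimal, hypothetical Python sketch (the record and both rules are invented for illustration): an a priori format rule passes, while the defect only surfaces a posteriori, when a process actually consumes the value.

```python
from datetime import date

# Hypothetical record: the ship_date looks like a valid ISO date,
# but February 30th does not exist.
record = {"customer_id": "C123", "ship_date": "2024-02-30"}

def a_priori_check(rec: dict) -> bool:
    """Rule known before use: ship_date must match YYYY-MM-DD format."""
    parts = rec["ship_date"].split("-")
    return len(parts) == 3 and all(p.isdigit() for p in parts)

def a_posteriori_check(rec: dict) -> bool:
    """Defect detected only when a process actually uses the value:
    constructing the date stands in for the business process."""
    try:
        y, m, d = (int(p) for p in rec["ship_date"].split("-"))
        date(y, m, d)
        return True
    except ValueError:
        return False

print(a_priori_check(record))      # True  -- passes the upfront rule
print(a_posteriori_check(record))  # False -- fails only in use
```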

According to the ISO 8000 standards, when it comes to data quality (more specifically, data accuracy), there are only assertions. You have to use data to test those assertions before you can determine its quality. Furthermore, quality is never achieved as a permanent state. Instead, it’s always in flux because of the universality of change. Even data that is defect-free today could be considered defective tomorrow. Therefore, assertions of data quality must be continually reasserted.
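As an illustration of quality-as-assertion, here is a minimal sketch (all names and rules are hypothetical, not drawn from ISO 8000 itself): each check is an assertion about the data, its result is stamped with when it held, and the whole set is meant to be re-run on a schedule rather than executed once.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AssertionResult:
    rule: str
    passed: bool
    asserted_at: datetime  # quality is a claim at a point in time, not a permanent state

def reassert(data: list[dict], rules: dict[str, Callable[[dict], bool]]) -> list[AssertionResult]:
    """Re-run every assertion over the current data; intended to be
    scheduled repeatedly, because a pass today may be a fail tomorrow."""
    now = datetime.now(timezone.utc)
    return [
        AssertionResult(name, all(rule(row) for row in data), now)
        for name, rule in rules.items()
    ]

# Hypothetical data and rules, purely for illustration.
customers = [{"email": "a@example.com", "country": "US"}]
rules = {
    "email_present": lambda r: bool(r.get("email")),
    "country_is_iso2": lambda r: len(r.get("country", "")) == 2,
}
for result in reassert(customers, rules):
    print(result)
```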

And, as Dylan Jones recently blogged, poor quality data isn’t a pitfall, but an opportunity to learn. He recommended transforming the way you approach bad data by finding the hidden story, tracking problems back to their source, and asking questions of those who created the data. This story about the people, processes, and technologies involved in creating poor quality data can often reveal considerable opportunities to increase profits, morale, and operational efficiency.

Of course, those learning opportunities will often lead to implementing new defect prevention procedures, or strengthening existing ones. Just don’t equate defect prevention with defect elimination; that is the preventable defective reasoning that I refer to as the fallacy of defect prevention.

About the Author

Jim Harris

Blogger-in-Chief at Obsessive-Compulsive Data Quality (OCDQ)

Jim Harris is a recognized data quality thought leader with 25 years of enterprise data management industry experience. Jim is an independent consultant, speaker, and freelance writer. Jim is the Blogger-in-Chief at Obsessive-Compulsive Data Quality, an independent blog offering a vendor-neutral perspective on data quality and its related disciplines, including data governance, master data management, and business intelligence.
