Data quality monitoring: The economics of continuity

Within data quality circles there is often contentious debate surrounding the use of data quality software. Some practitioners believe it is a cost centre the organisation can easily do without, and that it pushes the organisation towards repetitive, reactionary tactics instead of eliminating root causes.

There is another school of thought that identifies data quality software as one of the key enablers in accelerating data quality improvement across the organisation. Yes, data quality tools can be used to repetitively cleanse data, but they can also spot defects at source and help coordinate the resolution of defects across disparate teams.

While I agree that as an industry we need to focus more attention on root-cause prevention, I side wholeheartedly with the second camp: data quality software can have a profound impact on raising the quality of information across an organisation. Nowhere is this more keenly felt than in the data quality monitoring function.

Data quality monitoring typically takes place after any preventative work has been completed. For example, imagine that a data profiling exercise uncovers a series of issues stemming from a poorly designed application form. We then correct the application logic, improve the screen design and provide better instructions to knowledge workers.

Some practitioners believe that at this point there is no need for further data quality monitoring – that it simply adds cost to the organisation. I disagree. It is essential that data quality monitoring be left in place, though with a slightly different approach to the conventional one, as I’ll explain shortly.

Modern information systems are incredibly complex. There are thousands of moving parts and the operating landscape changes daily. Changes to data structures, software revisions, operating system updates and, of course, the raw data itself mean there are numerous opportunities for data to become defective. This is exactly why we need constant monitoring: there are simply too many variables for us to lock down.

So, to manage all these variables and ensure they don’t impact your precious data, we need some kind of monitoring – but not just the standard data profiling kind. Simply knowing whether a table has the right distribution of unique values or a certain threshold of validity is not enough. We need to monitor the quality of data across an entire information chain or business process. Critically, we also need to focus on the impact of data on the business.

For example, one telecoms company implemented a major data quality programme to improve its procurement process, saving millions in the first year. Having made considerable savings with the new system, they could simply have walked away – but instead they left a data quality monitoring system in place. Several months later they discovered that provisioning costs were increasing, and that certain products were not being returned to stores when customers cancelled their orders. A software developer had made an unauthorised change some weeks earlier that prevented cancelled equipment from being brought back for re-use, hence the rising cost. A direct correlation between data and business impact was made, and the situation was remedied.
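To make that concrete, here is a minimal sketch in Python of the kind of rule such a monitoring system might run. Everything in it is illustrative rather than drawn from the telecoms programme itself – the record fields, the 30-day grace period and the unit cost are assumptions. The point is the pairing: one check is a standard profiling measure (is the field populated?), while the other translates the same records into an estimated business cost.

from datetime import date, timedelta

# Illustrative order records; in practice these would be pulled from the provisioning system.
orders = [
    {"order_id": 1, "status": "cancelled", "equipment_returned": False,
     "cancelled_on": date.today() - timedelta(days=40)},
    {"order_id": 2, "status": "cancelled", "equipment_returned": True,
     "cancelled_on": date.today() - timedelta(days=10)},
    {"order_id": 3, "status": "active", "equipment_returned": None,
     "cancelled_on": None},
]

def validity_rate(records, field):
    # Standard profiling check: the share of records in which the field is populated.
    populated = sum(1 for r in records if r.get(field) is not None)
    return populated / len(records)

def unreturned_equipment_cost(records, unit_cost=250.0, grace_days=30):
    # Business-impact check: estimated cost of cancelled orders whose equipment
    # has not come back within the grace period.
    today = date.today()
    overdue = [
        r for r in records
        if r["status"] == "cancelled"
        and not r["equipment_returned"]
        and r["cancelled_on"] is not None
        and (today - r["cancelled_on"]).days > grace_days
    ]
    return len(overdue) * unit_cost

print(f"equipment_returned validity: {validity_rate(orders, 'equipment_returned'):.0%}")
print(f"estimated cost of unreturned equipment: {unreturned_equipment_cost(orders):.2f}")

The first figure is the kind of metric a conventional profiling tool reports; the second is the one an operations manager will actually act on, which is why the business-facing check belongs in the monitoring as well.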

This is the kind of continuous data quality monitoring companies need to invest in. They need to think holistically, not just about the data but about the business environment and the functions the information supports.

It’s relatively straightforward to achieve and, as my example demonstrates, the economics certainly stack up in the long run. Whether it will satisfy quality purists I’m not convinced – but if it keeps the business optimised and cost-efficient, it gets my vote.

What do you think? Is data quality monitoring a cost centre that breaks with traditional quality principles, or an essential data quality management activity in the modern enterprise? I welcome your views in the comments below.

About Author

Dylan Jones

Founder, Data Quality Pro and Data Migration Pro

Dylan Jones is the founder of Data Quality Pro and Data Migration Pro, popular online communities that provide a range of practical resources and support to their respective professions. Dylan has an extensive information management background and is a prolific publisher of expert articles and tutorials on all manner of data-related initiatives.
