There's no shortage of talk today about newfangled tools, technologies and concepts. The Internet of Things, big data, cloud computing, Hadoop, and countless other new terms, apps and trends have inundated many business folks over the last few years.
Against this often confusing backdrop, it's easy to forget the importance of basic blocking and tackling. Yes, I'm talking about good old-fashioned data quality, something that still vexes many departments, groups, organizations and industries. Without further ado, here are the five biggest data-quality mistakes that organizations routinely make.
1. Assuming that the IT department is responsible for data quality.
In the decade that I spent as an enterprise systems consultant, this was one of my biggest pet peeves – and rants over beers with fellow consultants. Line-of-business employees would carelessly enter errant, duplicate or incomplete records with nary a regard for the implications of their actions. Yet, mysteriously, IT was supposed to know about and cleanse this information. It never made sense to me, but I understood the "rationale." This is an extension of the IT-business divide, a topic I addressed in a three-part series not too long ago.
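Many of those errant, duplicate and incomplete records could be caught at the point of entry, long before IT ever sees them. As a rough sketch only – the field names and rules here are hypothetical, not taken from any particular system – a basic record check in Python might look like:

```python
# Minimal sketch of a record-quality check at the point of entry.
# REQUIRED_FIELDS and the record layout are hypothetical examples.

REQUIRED_FIELDS = {"employee_id", "last_name", "hire_date"}

def find_problems(record, existing_ids):
    """Return a list of data-quality problems with a single record."""
    problems = []
    # Incomplete: any required field that is absent or empty.
    filled = {k for k, v in record.items() if v}
    missing = REQUIRED_FIELDS - filled
    if missing:
        problems.append(f"incomplete: missing {sorted(missing)}")
    # Duplicate: an ID that already exists in the system.
    if record.get("employee_id") in existing_ids:
        problems.append("duplicate: employee_id already exists")
    return problems

existing = {"E100", "E101"}
new_record = {"employee_id": "E100", "last_name": "", "hire_date": "2015-03-01"}
print(find_problems(new_record, existing))
# flags both the empty last_name and the duplicate ID
```

The point isn't the code itself; it's that these checks belong where the data is created, not in a downstream cleanup queue owned by IT.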
2. Ignoring it altogether.
On a project late in the year, I helped an organization implement a new HR and payroll system. Its data was, to put it mildly, a mess. Given that (as well as a cauldron of other issues), its desired activation date of January 1 was beyond optimistic. It was downright laughable.
Yet the organization's CIO didn't care. (It turns out that executive bonuses were pegged to that date.) We were going live on the first of the year come hell or high water, and issues would be addressed later – if at all.
Yeah, the CIO and I didn't get along too well. And this brings us to number three.
3. Promising to clean it up after a new system implementation.
There are two problems with this mind-set. First, it's always easier to clean up data before it's loaded, free of the limitations of the application's business rules. Many ERP and CRM apps enforce those rules by design – and then there are audit considerations.
Second, sometimes tomorrow never comes. New projects, priorities and crises mean that those core data quality problems remain unchecked.
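That first point is worth making concrete: outside the application, you can fix records in bulk with a few lines of code, whereas inside it every change must pass the app's validation and may leave an audit trail. A hypothetical pre-load cleanup pass (the row layout is invented for illustration) might be as simple as:

```python
# Hypothetical sketch: bulk cleanup before load, unconstrained by
# any application business rules. Row layout is an example only.

def cleanse(rows):
    """Trim stray whitespace and drop duplicate IDs, keeping the first."""
    seen, clean = set(), []
    for row in rows:
        row = {k: v.strip() for k, v in row.items()}  # normalize whitespace
        if row["id"] not in seen:                      # drop later duplicates
            seen.add(row["id"])
            clean.append(row)
    return clean

raw_rows = [
    {"id": "1", "dept": "HR "},
    {"id": "1", "dept": "HR"},      # duplicate of the row above
    {"id": "2", "dept": " Payroll"},
]
print(cleanse(raw_rows))
# two rows remain: whitespace trimmed, duplicate id "1" dropped
```

Try making those same two fixes after go-live, one validated transaction at a time, and "we'll clean it up later" starts to look like what it usually is: never.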
4. Thinking that "the cloud" will solve all data quality problems.
I laugh when I hear people say that "the cloud" will magically fix things. Nothing could be further from the truth. Make no mistake: Cloud computing can do many things, but cleansing bad data isn't one of them.
5. Refusing to act on data because the data quality may not be perfect.
Data quality is rarely perfect even in relatively small datasets, never mind really large ones. In my travels, I've seen people afraid or unwilling to make relatively obvious business decisions because of the mere possibility of a data quality issue.
I'm reminded here of the quote by the great polymath Charles Babbage: "Errors using inadequate data are much less than those using no data at all."
What say you? Got any others?