Many bloggers, perhaps most notably Henrik Liliendahl Sørensen, have used cooking as an analogy for the data quality challenges associated with the preparation and delivery of the information an organization relies on for its daily operations and decision making.
The analogy assumes that using better-quality ingredients will always produce a better meal. This mirrors the assumption that quality business decisions can only be made with quality data. If that were true in all cases, every business would be bankrupt, since no organization's data is ever perfect.
Without question, data quality is important, but so are the skills of the person using the data; just as food quality is important, so are the skills of the person preparing the meal. You could give me the highest-quality ingredients, for example, and I would still cook the absolute worst meal you would ever eat, because I am a truly terrible cook.
It’s also important to remember that even great food made with the best ingredients can still give you indigestion. This is analogous to high-quality data supporting a high-quality decision that still produces a bad business result.
And fast food, which relies on neither quality ingredients nor cooking skill, can still satisfy your hunger. This is analogous to when data of questionable quality supports a data-driven decision, or when an intuition-driven decision is made instead, and the outcome is still a good business result.
Sometimes I think too many of us are Data Foodies (i.e., snobbish lovers of high-quality data), adoring the Iron Chefs of data governance while conveniently ignoring the times when less-than-perfect data created perfectly good business results.
And in the new digital age, big data may be the new fast food. As Viktor Mayer-Schönberger and Kenneth Cukier explained in their book Big Data: A Revolution That Will Transform How We Live, Work, and Think, big data requires a willingness to embrace the usefulness of data’s real-world messiness.