"One size doesn’t fit all" is a well-known refrain in the data governance community. Typically, this well-worn but evergreen adage is applied when discussing organizational structures. Two companies in the same industry, of like size and means, with similar objectives can take drastically different approaches for instantiating data governance within their organizations. Culture, organizational maturity and incumbent practices all influence the shape of the program to come.
But the adage applies to more than just the organizational structure and dynamics of data governance. Successful data governance programs right-size not only how data decisions are made, but associated data policies, practices and procedures as well. Which is, of course, what makes data governance so difficult – and fun.
When assessing the fit of your data governance practices, consider the following fallacies:
All for one and one for all.
When determining decision rights, the first step is often cataloguing all data creators and consumers. But in the case of customer or product data, this will include every function and process in the organization. There isn’t a conference table big enough, or timeline long enough, to bring everyone to the table every time. Much less to agree on anything. Instituting data governance requires some hard decisions about who gets to decide and who doesn’t.
Same data, same policy.
Traditional data policies often take a blanket approach to security, access and privacy. For example, customer data is segmented into discrete categories: confidential, private, public and so on. Each category has discrete data protection and access rules that apply to all systems and processes equally.
Today, however, we recognize that it’s just not that simple. Data privacy, security and access policies must address not just the content of data but the context of use. A multi-dimensional approach ensures that data is available for multiple purposes while balancing the access versus risk equation. In this way, organizations can enable unfettered discovery (the hallmark of forward-thinking analytic projects) within tightly controlled environments without opening the floodgates and sacrificing security and privacy in a broader operational context.
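To make the multi-dimensional idea concrete, here is a minimal sketch of what a context-aware access check might look like. The classifications, contexts, and rules below are illustrative assumptions for this post, not a prescribed policy model:

```python
# Illustrative sketch of a context-aware access policy: the same data
# classification yields different decisions depending on the usage context.
# Classifications, contexts, and rules are hypothetical examples.

POLICY = {
    # (classification, context) -> allowed?
    ("confidential", "analytics_sandbox"): True,   # discovery in a controlled zone
    ("confidential", "operational"): False,        # tightly restricted in production
    ("private", "analytics_sandbox"): True,
    ("private", "operational"): False,
    ("public", "analytics_sandbox"): True,
    ("public", "operational"): True,
}

def access_allowed(classification: str, context: str) -> bool:
    """Deny by default; allow only when an explicit rule grants access."""
    return POLICY.get((classification, context), False)

print(access_allowed("confidential", "analytics_sandbox"))  # True
print(access_allowed("confidential", "operational"))        # False
```

The point of the two-part key is that neither the data category nor the environment alone determines the answer; only the combination does.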
Once a rule, always a rule.
Once and done? Not so fast. As business practices change, so must data governance policies and rules. As an example, several clients – particularly in the public sector – point to legacy policies that prohibit access to and dissemination of data at the same time open data initiatives are being championed. Developing clear pathways for communicating, evaluating, updating and even sunsetting established data policies and rules is critical.
All data is created equal.
I have not met an organization yet that has a dearth of data issues. But where to start? With unlimited time and budget, all data would be pristine and managed impeccably. To state the obvious: this is just not the case.
As a result, data governance must be responsible for creating a balanced data budget: ensuring that all data is managed in accordance with its strategic importance and value. Done right, data governance creates a corporate agenda for data that establishes data priorities and ensures that associated investments (technology and skills) are optimized.
An A is an A is an A.
In grammar school, grades were based on clearly defined and inviolate thresholds: A = 90-100, B = 80-89 and so on. When it comes to grading our data, the equation is not so clear. In the case of data quality, what constitutes “fit for use” can fluctuate wildly. There are circumstances where 50 percent data completeness is good enough. And others where 100 percent accuracy is the name of the game. The criteria for a green light (an A) on the associated data dashboard will be different.
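A short sketch of that idea, with hypothetical use cases and thresholds: the same completeness score earns a green light for one purpose and a red light for another, because the passing grade depends on the intended use rather than a fixed scale:

```python
# Hypothetical "fit for use" grading: the threshold for a green light
# varies by use case. Use-case names and cutoffs are illustrative.

THRESHOLDS = {
    "exploratory_analytics": 0.50,   # 50 percent completeness may be good enough
    "regulatory_reporting": 1.00,    # nothing less than 100 percent will do
}

def grade(completeness: float, use_case: str) -> str:
    """Return the dashboard color for a data set given its intended use."""
    return "green" if completeness >= THRESHOLDS[use_case] else "red"

print(grade(0.60, "exploratory_analytics"))  # green
print(grade(0.60, "regulatory_reporting"))   # red
```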
The investment given to the care and feeding of these different data elements should be apportioned accordingly. Can’t make the case for how improving the data will increase operational efficiency, enable strategic objectives or reduce risk? See the point above.
One method to rule them all.
Not only can we not apply the same grading scale to all data, the same data management methods and mechanisms may not apply either. Prior to big data, companies often applied (or intended to apply) uniform methods for data quality, metadata management and so on. But as organizations dive into different data pools and usage models, different methods can be required.
For example, the mechanisms for assessing and addressing data quality may differ for data sourced from internal operational systems versus social media data or other content acquired from third-party sources. For the former, established “small data” quality practices focusing on data correction apply. For the latter, data augmentation may be more appropriate to address identified deficiencies or gaps. In both cases measurement is required to establish a level of confidence in the data.
Everyone shall comply.
Or shall they? Consider the data governance chicken and egg: we don’t have a sanctioned data policy because our systems aren’t compliant. Our systems aren’t compliant because we don’t have a sanctioned policy. The issue? An expectation of blanket compliance. Overnight.
When creating policies and rules, an execution plan must exist to address when, how and even if (for special cases) compliance will be achieved. Incrementally, as updates are made to systems (creeping compliance)? As a discrete program or project? Other?
Nota bene: A waiver is a sanctioned exception to the rule. It is most often applied to legacy systems or processes soon to be sunset, or where the cost and time to correct outweighs the perceived risk or overhead noncompliance creates. Processes and applications that do not meet established criteria should not be granted a waiver as a substitute for a plan to become compliant.
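The waiver rule above can be sketched as a simple gate: a waiver is granted only alongside a remediation plan or a documented sunset date, never in place of one. The field names and examples are hypothetical:

```python
# Sketch of the waiver rule: no plan and no sunset date means no waiver.
# Field names and example systems are illustrative, not a real process.

from dataclasses import dataclass
from typing import Optional

@dataclass
class WaiverRequest:
    system: str
    reason: str
    remediation_plan: Optional[str] = None   # how and when compliance is reached
    sunset_date: Optional[str] = None        # or when the system retires

def grant_waiver(req: WaiverRequest) -> bool:
    """A waiver without a plan or a sunset date is just sanctioned drift."""
    return bool(req.remediation_plan or req.sunset_date)

legacy = WaiverRequest("ERP-v1", "retiring soon", sunset_date="2025-06-30")
drifting = WaiverRequest("CRM", "too expensive to fix")
print(grant_waiver(legacy))    # True
print(grant_waiver(drifting))  # False
```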
Interested in learning more about creating a right-sized, sustainable data governance program? See our SAS whitepaper Sustainable Data Governance.