David Loshin examines various aspects of data governance that are essential for regulatory compliance.
Reconsider conventional assumptions about data governance – three suggestions for chief data officers.
How should a data trust process work? David Loshin elaborates.
David Loshin provides an alternate take on streaming data in the context of legacy systems.
David Loshin says entity resolution isn't a bandage to fix errors – it should be part of your data strategy.
In the extended enterprise, data integration challenges abound. David Loshin explains.
David Loshin says the hardest part of compliance is knowing if a data asset contains personal data, and ensuring you can protect it.
David Loshin describes some steps you can take to ensure that self-service data preparation improves collaboration.
David Loshin explains how to set up a data catalog that will help you get more value from a data lake.
David Loshin says simple approaches to identity resolution may not scale on a big data platform as data volumes increase.
David Loshin explains four struggles of syndicating master data across the enterprise.
David Loshin explains why MDM is such a valuable tool in helping to detect fraud.
David Loshin extends his exploration of ethical issues surrounding automated systems and event stream processing to encompass data quality and risk considerations.
David Loshin describes three sets of policies required for ensuring compliance with data protection directives for health care.
Health care fraud prevention is a sticky topic. David Loshin discusses what's needed to balance prompt claims payments with fraud prevention efforts.
I've been working on a pilot project recently with a client to test out some new NoSQL database frameworks (graph databases in particular). Our goal is to see how a different storage model, representation and presentation can enhance the usability and ease of integration for master data indexes and entity
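The graph idea behind that pilot can be sketched in a few lines. This is a minimal, hypothetical illustration using plain Python dicts as a stand-in for a real graph database (such as a Neo4j-style property graph); the node labels, edge names, and identifiers are assumptions for illustration, not details from the project.

```python
# Hypothetical sketch: a master data index modeled as a tiny property
# graph. Source-system records link to the master entity they resolve
# to; usability comes from simple edge traversal instead of joins.

nodes = {
    "master:1001": {"type": "party", "name": "Acme Corp"},
    "crm:57":      {"type": "source_record", "system": "CRM"},
    "erp:A-9":     {"type": "source_record", "system": "ERP"},
}

# Edges: (source node, relationship, destination node).
edges = [
    ("crm:57",  "RESOLVES_TO", "master:1001"),
    ("erp:A-9", "RESOLVES_TO", "master:1001"),
]

def source_records_for(master_id):
    """Traverse incoming RESOLVES_TO edges to list linked records."""
    return [src for src, rel, dst in edges
            if rel == "RESOLVES_TO" and dst == master_id]

print(source_records_for("master:1001"))  # ['crm:57', 'erp:A-9']
```

In a real graph store the traversal would be a one-hop pattern match, which is why this representation tends to simplify integration of entity indexes.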
As the application stack supporting big data has matured, it has demonstrated the feasibility of ingesting, persisting and analyzing potentially massive data sets that originate both within and outside of conventional enterprise boundaries. But what does this mean from a data governance perspective?
In my last post we started looking at the issue of identifier proliferation, in which different business applications assigned their own unique identifiers to data representing the same entities. Even master data management (MDM) applications are not immune to this issue, particularly because of the inherent semantics associated with the
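Identifier proliferation is often handled with a cross-reference registry that maps each application's local identifier to a shared master identifier. The sketch below is a simplified assumption of how such a registry might work; the matching key (normalized name plus birth date), function names, and ID format are all illustrative, not taken from the post.

```python
# Hypothetical sketch: a cross-reference (xref) registry. Each
# (system, local_id) pair is mapped to one master identifier, so two
# applications that assigned different IDs to the same real-world
# entity resolve to a single master record.

import itertools

_next_id = itertools.count(1)
xref = {}      # (system, local_id) -> master_id
by_key = {}    # matching key -> master_id

def register(system, local_id, name, dob):
    """Assign or reuse a master ID based on a simple matching key."""
    key = (name.strip().lower(), dob)
    master = by_key.get(key)
    if master is None:
        master = f"M{next(_next_id):06d}"   # mint a new master ID
        by_key[key] = master
    xref[(system, local_id)] = master
    return master

# Two systems, two local IDs, one underlying entity:
a = register("billing", "B-42", "Jane Doe",  "1980-05-01")
b = register("claims",  "C-77", "jane doe ", "1980-05-01")
print(a == b)  # True: both local IDs resolve to one master ID
```

Real MDM hubs use far richer matching (probabilistic scoring, survivorship rules), but the core pattern is the same: local identifiers are preserved, while a single master identifier ties them together.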