Managing information policy compliance to prevent fraud


Our company has done some work associated with understanding and preventing health care fraud. We've been consulting with one client on a master data management program focused on provider data and on how the characteristics of providers, and the relationships among them, can be leveraged to look for fraud patterns. In another engagement, we worked with a client to understand the types of real-time fraud analytics algorithms and applications that can be integrated into an investigation workbench, helping analysts quickly assess potential fraud scenarios.

Estimates of the cost of health care fraud range from a conservative $68 billion to as high as $230 billion (and possibly more). Health care fraud prevention is a sticky topic, though, as it must strike a balance between two diametrically opposed goals.

The first is the desire to streamline claims submission and payment processing so providers will be paid promptly, which encourages their ongoing participation in the health care network. The second is the need to continuously screen and filter claims to find potentially suspicious submissions and prevent payment of fraudulent claims. If you are too ambitious in ensuring prompt payment, you may be limited in how much screening can be done, since time spent screening delays payment. Conversely, the more you screen, the better your chance of finding fraudulent claims and preventing payments to those committing fraud.

So it sparked my interest recently when I learned about a relatively new public-private initiative called the Healthcare Fraud Prevention Partnership (HFPP). The HFPP links the federal government, state agencies, private health insurance payers, law enforcement and anti-fraud associations in a partnership that will devise approaches to detect and prevent health care fraud through a combination of information sharing and data analytics. One of the HFPP's goals is to develop a means for data exchange between government agencies (such as the Centers for Medicare & Medicaid Services, or CMS) and private companies. The approach calls for using a trusted third party to create a data repository suitable for both applied analytics and ad hoc querying, as a way of identifying potential fraud patterns and networks.

I'm inferring that last bit from my interpretation of what I read in a flyer on the HFPP website. It describes the benefits of the HFPP:

  • Enhanced analytics using CMS data. The HFPP is the only organization through which partners can combine their data with CMS data to gain heightened anti-fraud insights.
  • Expanded research. Partners inform study criteria and design for maximum impact.
  • Confidentiality. A trusted third party enforces the security and de-identification of partner data. No partner – public or private – has access to the data of other partners.
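The confidentiality point is worth making concrete. One common way a trusted third party de-identifies partner data while still allowing records about the same entity to be linked is keyed (salted) hashing of direct identifiers. This is a hypothetical sketch of that idea, not a description of how the HFPP actually implements it; the key name and the NPI-style identifiers are illustrative:

```python
import hashlib
import hmac

# Illustrative only: in practice this key would be held solely by the
# trusted third party, never by any partner.
SECRET_KEY = b"held-only-by-the-trusted-third-party"

def de_identify(identifier: str) -> str:
    """Replace a direct identifier (e.g., a provider NPI) with a stable,
    non-reversible token. The same identifier always yields the same
    token, so records still link across partners, but no partner can
    recover the original value without the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Two partners submit claims for the same provider; the tokens match,
# enabling pooled analytics without exposing the raw identifier.
token_a = de_identify("NPI-1234567890")
token_b = de_identify("NPI-1234567890")
assert token_a == token_b
assert "1234567890" not in token_a
```

The design choice here is determinism: unlike random pseudonyms, keyed hashes let the third party combine each partner's submissions about one provider without ever holding a lookup table that maps tokens back to identities for the partners to see.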

These benefits imply some fundamental facets of what this partnership is intended to do:

  • The partnership intends to combine data from the different members with data from CMS and apply fraud analysis algorithms.
  • There should be an independent organization to handle data ingestion and management.
  • There need to be methods to link data about unique entities within this third party’s data repository.
  • There must be clearly defined policies for data protection to ensure against inadvertent data leakage.
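The third facet, linking data about unique entities, is the classic master data matching problem: the same provider will appear under slightly different names and formats in different partners' submissions. As a toy sketch of the idea (real MDM matching uses probabilistic scoring and many more attributes; the records and field names below are invented for illustration):

```python
import re

def normalize(value: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that
    trivially different renderings of a name compare equal."""
    value = re.sub(r"[^a-z0-9 ]", "", value.lower())
    return re.sub(r"\s+", " ", value).strip()

def match_key(record: dict) -> tuple:
    """Build a candidate-match key from normalized name plus ZIP code."""
    return (normalize(record["name"]), record["zip"])

def link_records(records: list) -> dict:
    """Group records sharing a match key into candidate unique entities."""
    entities = {}
    for rec in records:
        entities.setdefault(match_key(rec), []).append(rec)
    return entities

# Hypothetical submissions from three different partners:
claims = [
    {"source": "payer_a", "name": "Jane  Smith.", "zip": "21044"},
    {"source": "payer_b", "name": "JANE SMITH",   "zip": "21044"},
    {"source": "payer_c", "name": "John Doe",     "zip": "10001"},
]
linked = link_records(claims)
# Two distinct entities; the first has records from two partners.
assert len(linked) == 2
assert len(linked[("jane smith", "21044")]) == 2
```

Once records from many contributors resolve to the same entity, the fraud analytics described above can examine that entity's aggregate behavior across payers, which no single partner could see on its own.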

This suggests a comprehensive shared master data environment populated with data from a large number of contributors, used for common purposes and governed by a set of compliance rules designed to prevent any participant from seeing information that does not belong to it. In fact, to ensure there is no risk of inappropriate data exposure, those data protection policies must be defined and approved, along with a plan for operationalizing them, before any part of the system is designed or built.
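One of those compliance rules can be operationalized directly in the repository's access layer: tag every record with its contributing partner, filter query results so each partner sees only its own submissions, and reserve pooled access for the trusted third party's analytics role. This is a minimal sketch under those assumptions; the class and role names are invented:

```python
class Repository:
    """Toy shared repository enforcing 'no partner sees another
    partner's data' as a query-time compliance filter."""

    TRUSTED_THIRD_PARTY = "ttp"  # hypothetical analytics role

    def __init__(self):
        self._records = []

    def ingest(self, partner: str, record: dict) -> None:
        # Every record is tagged with its contributor at ingestion time.
        self._records.append({"partner": partner, **record})

    def query(self, requester: str) -> list:
        if requester == self.TRUSTED_THIRD_PARTY:
            # Only the analytics role sees the pooled data.
            return list(self._records)
        # Compliance filter: partners see only their own contributions.
        return [r for r in self._records if r["partner"] == requester]

repo = Repository()
repo.ingest("payer_a", {"claim_id": "A1"})
repo.ingest("payer_b", {"claim_id": "B1"})
assert [r["claim_id"] for r in repo.query("payer_a")] == ["A1"]
assert len(repo.query(Repository.TRUSTED_THIRD_PARTY)) == 2
```

The point of defining the policy before building the system is visible even in this sketch: the partner tag and the filter have to exist in the schema and the query path from day one, because retrofitting them onto an already-populated repository is far harder.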

I'll share some more ideas in my next post.

Learn how SAS can help you predict fraud and prevent loss

About Author

David Loshin

President, Knowledge Integrity, Inc.

David Loshin, president of Knowledge Integrity, Inc., is a recognized thought leader and expert consultant in the areas of data quality, master data management and business intelligence. David is a prolific author on data management best practices, with numerous books, white papers and web seminars to his credit. His book, Business Intelligence: The Savvy Manager's Guide (June 2003), has been hailed as a resource allowing readers to "gain an understanding of business intelligence, business management disciplines, data warehousing and how all of the pieces work together." His book Master Data Management has been endorsed by data management industry leaders, and he is also the author of The Practitioner's Guide to Data Quality Improvement.
