Generally speaking, fraud analysis embraces two different processes: discovery and detection. Discovery typically analyzes a blend of historical transaction and demographic data to look for outlier patterns that might indicate suspicious activity. Further investigation of these outlier patterns helps analysts determine if the activity is benign or if it indeed represents fraudulent behavior (and the extent of that behavior's economic impacts). The result of this process is one or more fraud patterns, which are then integrated into the detection process. Detection generally scans sequences of events looking for matches of known suspicious patterns and flags one or more transactions when a pattern is matched.
One of the main challenges of fraud analytics is rooted in a fundamental aspect of the “fraud business.” Namely, fraud perpetrators are constantly looking for ways to bypass the methods organizations use to detect fraud. One example is the $10,000 rule of the Bank Secrecy Act, under which a financial institution must report any cash transaction over $10,000. A simple detection rule would monitor all transactions, and if a person deposits $10,000 or more in cash, the deposit would automatically be reported to the IRS, as it might be indicative of a money laundering scheme.
But anyone who knows that detection rule would consider dividing the cash into two bundles of $4,000 and $6,000. Then they could make two deposits and (supposedly) elude detection. Of course, banks are smart enough to know that someone might try this trick, so they put controls in place to look for multiple cash transactions from the same individual over a defined time period. In fact, the rule applies whether the amount arrives as one lump-sum transaction or as a collection of smaller transactions over a 12-month period.
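The aggregation control described above can be sketched in a few lines. This is a minimal illustration, not a real compliance implementation: the function name, the data shape and the exact threshold logic are assumptions, and the rule follows the article's simplified "$10,000 or more" phrasing.

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10_000          # simplified reporting threshold from the article
WINDOW = timedelta(days=365)  # 12-month aggregation period

def flag_structuring(transactions):
    """transactions: iterable of (person_id, date, amount), dates ascending.

    Returns the set of person_ids whose cash deposits within any rolling
    12-month window total $10,000 or more, even if each deposit is small.
    """
    history = defaultdict(list)  # person_id -> [(date, amount), ...]
    flagged = set()
    for person, date, amount in transactions:
        history[person].append((date, amount))
        # Keep only deposits inside the rolling window ending at this date.
        history[person] = [(d, a) for d, a in history[person]
                           if date - d <= WINDOW]
        if sum(a for _, a in history[person]) >= THRESHOLD:
            flagged.add(person)
    return flagged

deposits = [
    ("alice", datetime(2023, 1, 5), 4_000),
    ("alice", datetime(2023, 1, 6), 6_000),  # split deposits still add up
    ("bob",   datetime(2023, 1, 5), 9_000),
]
print(flag_structuring(deposits))  # {'alice'}
```

Splitting $10,000 into $4,000 and $6,000 evades a per-transaction check, but the rolling aggregate catches it; a single $9,000 deposit stays under the threshold.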
That being said, most fraud schemes are much more complex than this. They often involve many transactions executed by multiple “named individuals” colluding to commit their crimes.
The role of master data management in fraud
It's easy to suggest that master data management (MDM) can prevent fraud by linking records together to reveal that two transactions purportedly made by two different people are actually tied to a single individual. But fraudsters are not stupid. And the good ones are not going to expose themselves to that kind of risk.
Yet even when a collective of criminals acts together to obfuscate its illegal activity, multi-domain MDM can support the fraud analysis and detection processes. The issue lies less in linking two records to the same belly button and more in exploiting transitive relationships to identify non-obvious links.
Think about those crime shows and movies where the police, as part of their investigations, develop a visual map that links individuals, locations, etc. They're establishing connectivity among uniquely identifiable entities. That's what MDM can do: resolving entity records to unique identities, assigning unique identifiers and – most importantly – documenting the relationships among different entities.
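That connectivity map is, in effect, a graph over resolved identities, and "non-obvious links" are just paths through it. A minimal sketch (the entity IDs and relationship edges are hypothetical) using breadth-first search over documented relationships:

```python
from collections import defaultdict, deque

# Hypothetical master data: each edge is a documented relationship
# between two uniquely identified entities.
edges = [
    ("person:P1", "address:A1"),  # P1 lives at address A1
    ("person:P2", "address:A1"),  # P2 shares that address
    ("person:P2", "phone:T9"),
    ("person:P3", "phone:T9"),    # P3 shares P2's phone number
]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def connected(start, goal):
    """Breadth-first search: is there any chain of relationships
    linking the two entities, however indirect?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

print(connected("person:P1", "person:P3"))  # True, via A1 -> P2 -> T9
```

P1 and P3 share no direct attribute, yet the transitive chain through a common address and a common phone number links them – exactly the kind of connection that is invisible when records are examined one at a time.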
As an example, consider a common health insurance fraud scheme in which a health care provider submits claims for services supposedly rendered to a collection of individuals using their (stolen) health insurance member IDs. Each fraudulent claim links a uniquely identifiable provider to a collection of uniquely identifiable members at a uniquely identifiable location.
If each claim is processed individually, the insurance company might not think twice about paying it. However, the MDM system can be queried to determine whether multiple claims have been submitted for the same member at distinctly disparate locations (yes, that really happens) and, if so, flag those claims as suspicious. A similar check would flag as suspicious a uniquely identified provider who submits claims for services rendered at the same time but at different locations. And yes, that also really happens.
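Both checks reduce to grouping claims by resolved identity and looking for location conflicts. A hedged sketch, assuming a simplified claim record (the field names and function are illustrative, not any product's API):

```python
from collections import defaultdict

claims = [
    {"claim": 1, "provider": "PR1", "member": "M1",
     "location": "L1", "time": "2023-01-05T09:00"},
    {"claim": 2, "provider": "PR1", "member": "M2",
     "location": "L2", "time": "2023-01-05T09:00"},  # PR1 in two places at once
    {"claim": 3, "provider": "PR2", "member": "M1",
     "location": "L3", "time": "2023-01-05T10:00"},  # M1 at a second location
]

def suspicious_claims(claims):
    """Flag claims where one member appears at multiple locations,
    or one provider bills the same time slot at multiple locations."""
    member_locs = defaultdict(set)
    provider_slots = defaultdict(set)
    for c in claims:
        member_locs[c["member"]].add(c["location"])
        provider_slots[(c["provider"], c["time"])].add(c["location"])
    flagged = set()
    for c in claims:
        if len(member_locs[c["member"]]) > 1:
            flagged.add(c["claim"])
        if len(provider_slots[(c["provider"], c["time"])]) > 1:
            flagged.add(c["claim"])
    return flagged

print(sorted(suspicious_claims(claims)))  # [1, 2, 3]
```

The crucial precondition is that "M1" and "PR1" are resolved master identities, not raw source-record values; without that entity resolution, a fraudster's slightly varied names and IDs would scatter across groups and nothing would be flagged.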
Fraud detection is a fertile area for exploiting MDM’s ability to resolve duplicate entity data and establish the relationships among resolved identities. The trick is not just capturing that data – it's also enabling analysts to easily search, retrieve and browse that data. My next post will look at some of the challenges in syndicating master data for effective use.