Data Governance by a Standard Data Model for Insurance

Data Governance

Using a standardized data model is an essential precondition for achieving data governance in an enterprise. A standard data model supports data governance processes by implementing industry standards wherever possible:

  • standards for contract and claims representation,
  • mapping of data content with standard definitions (glossary function),
  • use of code attributes instead of free text,
  • mapping of standard and customized codes, and definition of arbitrary code hierarchies (see the sketch after this list).
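
As a minimal sketch of the last two points (all codes, definitions and structures below are invented for illustration; they are not part of any SAS data model), customized codes can be mapped to standard codes that carry glossary definitions and roll up through an arbitrary hierarchy:

```python
# Minimal sketch of governed code mapping (all codes and definitions are
# invented for illustration). Free text is replaced by code attributes;
# customized codes map to standard codes with glossary definitions and
# an arbitrary hierarchy.

# Company-specific claim-cause codes mapped to standard codes.
custom_to_standard = {
    "FIRE_KITCHEN": "FIRE",
    "FIRE_ELECTRICAL": "FIRE",
    "WATER_PIPE": "WATER",
}

# Arbitrary hierarchy over the standard codes (child -> parent, None = root).
standard_hierarchy = {"FIRE": "PROPERTY_DAMAGE", "WATER": "PROPERTY_DAMAGE",
                      "PROPERTY_DAMAGE": None}

# Glossary function: every standard code carries a standard definition.
glossary = {
    "FIRE": "Loss caused by fire or smoke",
    "WATER": "Loss caused by escape of water",
    "PROPERTY_DAMAGE": "Physical damage to insured property",
}

def rollup(custom_code):
    """Resolve a customized code to its standard code and hierarchy path."""
    path = [custom_to_standard[custom_code]]
    while standard_hierarchy[path[-1]] is not None:
        path.append(standard_hierarchy[path[-1]])
    return path

print(rollup("FIRE_KITCHEN"))             # ['FIRE', 'PROPERTY_DAMAGE']
print(glossary[rollup("WATER_PIPE")[0]])  # Loss caused by escape of water
```

The same idea scales to mapping tables in a DWH: every analytical attribute resolves to a governed, standard definition instead of free text.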


How social brokers are changing insurance

“All for one and one for all” is best known as the motto from “The Three Musketeers”, but this phrase could easily sum up the growing trend of social brokers.

With advanced analytical techniques like generalized linear modeling (GLM), insurance companies have created more granular pricing structures. But despite assertions of a “customer segment of one” or individualized pricing, most insurance companies continue to aggregate risk. If an individual falls outside the defined policy limits or has pre-existing conditions, the insurance company often declines coverage. For example, an insurer may not offer property coverage if your home has a thatched roof, or travel insurance if you have an existing heart condition.
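
For context on the GLM point, here is a minimal sketch of a claim-frequency model; the data and rating factors are invented for illustration and do not represent any insurer’s actual pricing model.

```python
# Minimal sketch of a Poisson claim-frequency GLM (hypothetical data and
# factors; not any insurer's actual rating model). Requires numpy,
# pandas and statsmodels.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

policies = pd.DataFrame({
    "driver_age":  [22, 35, 47, 58, 63, 29, 41, 52],
    "urban":       [1, 1, 0, 0, 1, 0, 1, 0],
    "exposure":    [1.0, 1.0, 0.5, 1.0, 1.0, 0.8, 1.0, 1.0],  # policy-years
    "claim_count": [2, 1, 0, 0, 1, 1, 0, 0],
})

# Poisson GLM with log link; exposure enters as an offset, so the model
# estimates claims per policy-year for each combination of rating factors.
freq_model = smf.glm(
    "claim_count ~ driver_age + urban",
    data=policies,
    family=sm.families.Poisson(),
    offset=np.log(policies["exposure"]),
).fit()
print(freq_model.summary())
```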

Social brokers are a new type of online intermediary that helps individuals overcome these challenges. Using internet search and social media, these brokers identify segments with poorly served insurance requirements, or groups of customers with similar needs, and then negotiate insurance on behalf of the group.

Social brokers are seen as a win-win for both customers and insurance companies.

Customers that use social brokers can experience substantial savings. The market-leading social broker Bought by Many (BBM) claims that its customers save, on average, 19 percent by using its collective buying power.

Social brokers also allow insurers to diversify their product portfolios by enabling them to write, in sufficient quantities, business they would previously have declined. The more diversified an insurer’s book of business, the less capital it needs to hold to comply with regulations like Solvency II, which is a considerable benefit in today’s financial environment. In addition, research shows that retention rates for policies written through social brokers are higher than for traditional distribution channels, yet another benefit for insurance companies.

For an industry that is often seen as laggard and risk-averse, social broking is a great example of how insurance companies of all shapes and sizes are finding innovative ways to use data to increase revenue. To learn more about how analytics can help insurance marketing departments focus on customers, download the white paper “Customer Intelligence in the era of Data-driven marketing”.

I’m Stuart Rose, Director, Global Insurance Practice at SAS. For further discussions, connect with me on LinkedIn and Twitter.


10 simple steps to detect more insurance fraud

Over the years I have written many blog posts about insurance fraud, including ones on anti-money laundering, data quality in fraud, anti-fraud technology, life insurance fraud and even ghost broking. It’s clear that insurance fraud comes in many shapes and sizes, and as losses continue to grow, detecting and preventing fraud is consistently ranked among the top three priorities for insurance executives.

Because it is impossible to predict trends in fraudulent activity, there is no single bulletproof fraud detection technique. However, insurers that adhere to the following 10 steps have the best chance of detecting both opportunistic and organized fraud (Step 5 is sketched after the list):

  • Step 1: Gearing up for data management
  • Step 2: Visualizing data
  • Step 3: Harnessing business rules
  • Step 4: Searching databases
  • Step 5: Detecting anomalies
  • Step 6: Delving deeper with predictive modelling
  • Step 7: Realizing the value of text analytics
  • Step 8: Identifying organized fraud through social network link analysis
  • Step 9: Managing and triaging alerts
  • Step 10: Knowing your deployment options
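
To give a flavor of one of these steps, here is a minimal sketch of Step 5 on invented claims data; a real deployment would use the scoring built into the SAS Fraud Framework rather than a hand-rolled score.

```python
# Toy sketch of Step 5 (anomaly detection) on hypothetical claims data.
# A z-score flags claims whose amount is unusual for their peer group;
# production systems combine many such indicators before raising alerts.
import pandas as pd

claims = pd.DataFrame({
    "claim_id":   [1, 2, 3, 4, 5, 6],
    "claim_type": ["auto", "auto", "auto", "home", "home", "home"],
    "amount":     [1200, 1500, 9800, 2100, 2300, 2200],
})

grouped = claims.groupby("claim_type")["amount"]
claims["zscore"] = (claims["amount"] - grouped.transform("mean")) / grouped.transform("std")

# Flag claims more than one standard deviation from their peer group
# (threshold deliberately low for this tiny example).
print(claims[claims["zscore"].abs() > 1.0])
```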

One organization that followed these ten steps is the Czech insurer Česká pojišťovna. Using the SAS Fraud Framework for Insurance, it was able to uncover fraud to the tune of more than 20 million Czech crowns (about $850,000) annually. Read more about the Česká pojišťovna case study.

Data and analytics can help with the detection of fraudulent activity and patterns, but claims adjusters and special investigation units (SIUs) will always be required to turn analytical insight into useful results.

To learn more about the ten steps to detecting and preventing insurance fraud, download the white paper “Simplifying insurance fraud analytics”.

I’m Stuart Rose, Director, Global Insurance Practice at SAS. For further discussions, connect with me on LinkedIn and Twitter.


Innovation in reinsurance – no longer an oxymoron

Insurance is a tough marketplace, but in many respects reinsurance is tougher! Today the reinsurance industry faces an unprecedented number of challenges, especially what appears to be an increasing frequency and severity of man-made and natural catastrophes. To combat these challenges, reinsurers are turning to technologies such as catastrophe modelling, data analytics and geographic information systems (GIS) to better understand their data and risk exposure.

It is a maxim that “data is the lifeblood of insurers”, but more often than not it’s not about how much data you have, it’s about how smart you are with it. While it is true that a reinsurer has access to much less data about a particular reinsurance transaction than the ceding company does, reinsurers still hold tremendous amounts of data in their own right. Yet many are not utilizing this data to the fullest extent. One exception is Munich Re.

One of the world’s largest reinsurers, Munich Re is taking advantage of the large amount of data now available to it to innovate through a big data analytical environment from SAS. In-memory technology makes it possible for Munich Re to interactively analyze large quantities of data and discover previously unrecognized correlations and patterns. According to Munich Re’s CEO, Torsten Jeworrek, “With these new analytical technologies, we are considerably enhancing our global ability to combine our customers’ data with our own findings and expert knowledge”.

Reinsurance is very complicated, but continuing with the status quo and not fully utilizing the new data sources available hampers a reinsurer’s ability to understand its business. Reinsurers that excel at analytics will have an advantage over their less advanced competitors.

For more information about how analytics can help reinsurers, download the white paper “Art and Science of Reinsurance”.

I’m Stuart Rose, Director, Global Insurance Practice at SAS. For further discussions, connect with me on LinkedIn and Twitter.


Advantages of a standard insurance data model

"BEST PRACTICE" Tag Cloud Globe (business process improvement)

In my first blog article I explained that many insurance companies have implemented a standard data model as the base for their business analytics data warehouse (DWH) solutions.

But why should a standard data model be more appropriate than an individual one designed especially for a certain insurance company?


Back to basics

One of my colleagues often asks me “What’s new in insurance?” For an industry that is risk-averse, change does not come easily. In the past we have discussed innovations concerning telematics, drones, wearable devices and even weather data. However, when he asked me last week and I responded that the newest topic is “Lemonade”, he looked at me with a very puzzled face. I then went on to explain the growth in peer-to-peer (P2P) insurance, especially insurance technology start-ups like Lemonade and Friendsurance.

P2P insurance works much like a mutual society. A group of members or customers, often friends or neighbors, forms a small network that shares risk. The members pay a portion of their premium into a common shared pool and another portion to an insurance company. Any claims submitted by these members are paid out of the shared pool. The insurance company acts as a reinsurer, paying catastrophe claims or stepping in when the shared pool is exhausted. At the end of the year, any funds left in the pool are either carried forward to the next year or shared among the members.
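
As a back-of-the-envelope illustration of these mechanics (all premiums, shares and claim amounts below are invented), the annual settlement can be sketched in a few lines:

```python
# Toy settlement of the P2P pool mechanics described above (all numbers
# hypothetical). A premium is split between a shared pool and an insurer;
# claims draw on the pool first, the insurer acts as backstop, and any
# year-end surplus is returned to (or carried forward for) the members.

def settle_year(members, premium, pool_share, claims):
    pool = members * premium * pool_share          # kept by the group
    ceded = members * premium * (1 - pool_share)   # paid to the insurer
    total_claims = sum(claims)
    paid_by_pool = min(total_claims, pool)
    paid_by_insurer = total_claims - paid_by_pool  # insurer as backstop
    surplus = pool - paid_by_pool                  # returned or carried forward
    return {"pool_paid": paid_by_pool, "insurer_paid": paid_by_insurer,
            "surplus": surplus, "ceded_premium": ceded}

# A quiet year: 20 members, 600 premium each, 60% into the shared pool.
print(settle_year(20, 600.0, 0.6, [1500.0, 2200.0]))
# A bad year: claims exhaust the pool and the insurer steps in.
print(settle_year(20, 600.0, 0.6, [5000.0, 4800.0]))
```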

With the advent of social media, and using analytics tools like link analysis, it has become easier for customers to form groups based on common behaviors or traits.

The simplicity of this type of insurance means there are many advantages to the members:

  • Members are unlikely to make fraudulent claims, since they run the risk of being ostracized by their fellow members.
  • Members know more about their own risks, and can therefore self-select which risks to insure and which to exclude or reinsure.
  • Reduced premiums since administration and claims costs are estimated to be lower. In fact, Guevara, a UK P2P insurer, estimates that its members can save up to 80% on their premiums.

P2P insurance can lead to adverse selection, with poor risks being excluded and transferred to the traditional insurance companies. Whether P2P insurance will revolutionize the industry, only time will tell. But at the moment Lemonade is not just a cool drink; it’s the hottest thing in the industry.

I’m Stuart Rose, Director, Global Insurance Practice at SAS. For further discussions, connect with me on LinkedIn and Twitter.


Faster, easier, safer: a standard data model for insurance

Nothing works today without efficient data management, and the insurance business is no exception. A standard data model can be an important component of it. This article explains why.

“Make or buy?” This question is raised very often by insurance companies planning to introduce a consistent data structure, a data warehouse (DWH), for different business analytics applications [1]. This data structure should integrate all lines of insurance business and provide a 360-degree view of all information about related business parties. The first task is to decide between developing an individual data model and buying a standard data model from a software vendor.

But what does ‘standard’ mean for a DWH data model for the insurance business?

A standard data model for business analytics must meet at least the following requirements:

  • It has to support the data requirements of many different business analytics applications.
  • It must comply with available standards and support all important and common insurance business processes.
  • It has to support the data governance processes and data quality management.
  • It must comply with regulatory rules - especially complete auditability.
  • It has to support project-specific extensions and must remain release-capable (the auditability and extension requirements are sketched below).
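
To make the auditability and extension requirements more concrete, here is a minimal sketch; the table and column names are hypothetical and are not the actual DDS structures.

```python
# Minimal sketch of two DWH model requirements (hypothetical schema, not
# the actual DDS structures): validity columns preserve every historical
# state of a record for auditability, and customer-specific attributes
# live in a separate extension table so the standard core stays
# release-capable.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- Standard core table with audit/validity columns.
CREATE TABLE policy (
    policy_id  TEXT NOT NULL,
    status_cd  TEXT NOT NULL,                      -- code attribute, not free text
    valid_from TEXT NOT NULL,                      -- load timestamp
    valid_to   TEXT NOT NULL DEFAULT '9999-12-31',
    PRIMARY KEY (policy_id, valid_from)
);

-- Project-specific extension: added columns go here, not into the core.
CREATE TABLE policy_x (
    policy_id    TEXT NOT NULL,
    valid_from   TEXT NOT NULL,
    custom_score REAL,                             -- customer-specific attribute
    FOREIGN KEY (policy_id, valid_from) REFERENCES policy
);
""")

# A status change closes the old record instead of overwriting it,
# so every historical state remains auditable.
con.execute("INSERT INTO policy (policy_id, status_cd, valid_from) VALUES ('P1', 'ACTIVE', '2016-01-01')")
con.execute("UPDATE policy SET valid_to = '2016-06-30' WHERE policy_id = 'P1' AND valid_to = '9999-12-31'")
con.execute("INSERT INTO policy (policy_id, status_cd, valid_from) VALUES ('P1', 'LAPSED', '2016-07-01')")
print(con.execute("SELECT * FROM policy ORDER BY valid_from").fetchall())
```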

The ‘Detail Datastore for Insurance’ (DDS), the core element of the SAS Insurance Analytics Architecture, was released for the first time in 2004 in cooperation with a major insurance company. Since then it has been continuously enhanced and refined. Today it is licensed by more than 60 insurance companies all over the world.

SAS has been working on business analytics data warehouse implementations together with insurance companies around the globe for the past 30 years. In the beginning the goal was always to implement an individual data structure for an insurance company within the given project requirements. Over time, SAS identified many common structures across the different DWH projects. This motivated the development of a standard business analytics data warehouse focused on providing data for different insurance business analytics applications. The application scope covered classical business intelligence applications like performance reporting, as well as analytical scoring solutions to calculate ‘customer value’ components, customer intelligence solutions for campaign management, and rate-making applications for the actuarial departments of insurance companies. In recent years, risk management for Solvency II and fraud detection have completed the application focus.

It is important to mention that the DDS was not developed on the drawing board but grew out of field projects, and it has been validated in many DWH projects for insurance. The main focus was to accelerate the implementation of business analytics applications and increase their value. Available insurance industry standards with relevance for business analytics are supported by the structures of the DDS; many concepts of ACORD and GDV have been implemented in the data model. Beyond that, one of the key aspects of development was to support data governance processes, data quality and auditability.

Requirements of insurers in the Central European region have been used by SAS to enhance the data model and supply an extended Central European (CE) version of the DDS in addition to the ‘core’ global data model. This could be done very easily thanks to the powerful extension management concept and customization guidelines of the DDS, which ensure release capability.

Important features of the SAS standard data model and related DWH concepts will be presented in upcoming blog articles, e.g. the specific advantages of the SAS standard data model for insurance, data governance and data quality aspects, auditability, release capability, and the extensions for Central Europe (‘DDS Central Europe’).

Read more on SAS data model for insurance business: http://www.sas.com/en_us/whitepapers/data-is-king-105398.html.

Hartmut Schroth, Business Advisor data strategies for insurance at SAS Germany. For further discussions, connect with me on LinkedIn.

[1] We are talking about a data integration layer for feeding different business analytics applications, not about an ‘enterprise data warehouse’ or an ‘operational data store’.


No expertise required…

How many of us have used the phrases…

  • It’s a piece of cake
  • Anyone can do it
  • It’s as easy as ABC
  • I could do it with my eyes shut

When it comes to business intelligence, it should be “easy peasy”, but for many organizations creating a new report that senior management has requested can still be a chore that takes days or even weeks.

In today’s dynamic environment, business users need access to all types of data, almost instantly. And they need more than standard, stale reports and executive dashboards. They need to be able to explore the data so they can discover new information and answer questions – not just about what happened, but also about why it happened.

Effective self-service business intelligence (BI) gives users access to all types of data in a way they can easily understand and use, making analytics approachable. It makes business users feel empowered to work independently when it comes to accessing data, generating insights and sharing information. With true self-service BI, business users can pull in data from different sources and combine it as they want for analysis and reporting. They can explore the data analytically, and use an intuitive interface to quickly build highly interactive and visual reports and dashboards. With SAS Visual Analytics, users can select variables and drag and drop them to create a what-you-see-is-what-you-get report. Users can even combine things like pie charts and bar and line charts, along with crosstabs, tree maps and geographic data.

Belgian insurer Belfius Insurance understood the value of an agile BI platform. Using SAS Visual Analytics, business users no longer have to wait for an IT expert to create reports for them. According to Steven De Wever, IT Project Manager at Belfius Insurance, “The results have been stunning. Business users can now filter and explore their data, drill down to every detail, surface and correct annoying abnormalities, all while discovering relationships they hadn’t even thought of before”. Read more about the Belfius Insurance case study.

To use another adage, “It’s not rocket science”: today, no expertise is required to gain value from data and analytics.

I’m Stuart Rose, Director, Global Insurance Practice at SAS. For further discussions, connect with me on LinkedIn and Twitter.


What is big data for insurance?

In a recent blog I wrote about how big data is a game changer for the insurance industry. But the question that is often asked is: “What is big data?”

Many people associate big data with the 4 V’s:

  • Volume – The sheer size of data that is produced.
  • Velocity – The speed with which carriers generate data.
  • Variety – The different forms of data that are created.
  • Veracity – The quality of the data and information.

However, when I think of how big data is changing insurance, I break it down into three categories:

  • Unstructured data – The insurance industry has always been heavily paper-based, whether adjuster notes, police reports, medical records or underwriting information. Until recently it has been difficult for insurance companies to analyze that data. With new technologies like text mining and sentiment analysis, insurers are beginning to gain useful information from this previously inaccessible data (see the sketch after this list).
  • External data – In the past, insurance companies focused on their own internal data: the data created and stored in transactional systems like policy administration, claims management, billing and agency portals. Today insurance companies are supplementing that data with external third-party data, whether credit scores, government demographic data, or even geospatial data like weather information and Google Maps.
  • New data – This is data that was not even imagined 10 years ago: social media data, telematics data (information from in-car data recorders that monitor driving behavior) and now the potential of the Internet of Things.
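
As a toy illustration of the text-mining point above (the notes and indicators are invented; production text analytics is far more sophisticated), even simple keyword extraction can turn free-text adjuster notes into structured flags:

```python
# Toy sketch of mining unstructured adjuster notes (invented examples;
# production text analytics is far more sophisticated). Simple keyword
# indicators turn free text into structured flags an insurer can analyze.
import re

notes = [
    "Claimant reports rear-end collision; whiplash claimed, no witnesses.",
    "Kitchen fire, neighbor corroborates; receipts provided for contents.",
    "Third loss this year, prior claim still pending, story inconsistent.",
]

indicators = {
    "no_witness":   r"\bno witnesses?\b",
    "prior_claims": r"\b(prior claim|third loss)\b",
    "inconsistent": r"\binconsistent\b",
}

for note in notes:
    flags = [name for name, pattern in indicators.items()
             if re.search(pattern, note, re.IGNORECASE)]
    print(flags, "->", note)
```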

Data has always played a critical role in insurance; the insurance business is built on analyzing data to understand and evaluate risk. There is no doubt that the big data era will impact the insurance industry. But data on its own will not transform the industry; it’s the insights from analytics that will give insurers new strategies and operational insights into their business.

To learn more about how to turn big data into insight, download the white paper “Return on Information: The new ROI.”

I’m Stuart Rose, Director, Global Insurance Practice at SAS. For further discussions, connect with me on LinkedIn and Twitter.


Big data – game changer for insurers

A recent survey by Capgemini found that 78% of insurance executives interviewed cited big data analytics as the disruptive force that will have the biggest impact on the insurance industry.

That’s the good news. The bad news is that, unfortunately, traditional data management strategies do not scale to effectively govern large data volumes for high-performance analytics.

To overcome this obstacle, leading insurers of all types and sizes are harnessing the power of big data by deploying a Hadoop data environment and combining it with high-performance analytics.

Hadoop has caught the attention of many organizations searching for better ways to store, process and analyze large volumes and varieties of data from multiple internal and external sources.

The advantages of Hadoop are:

  • It’s cost-effective – Hadoop uses lower-cost commodity hardware to reliably store large quantities of data
  • It’s scalable – Hadoop provides flexibility to expand out by simply adding more nodes
  • It’s adaptable – Hadoop can store any type of data in its original format (see the sketch after this list)
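
As a small, hypothetical sketch of the ‘adaptable’ point (the HDFS path and field names are invented, a Spark-on-Hadoop environment is assumed, and this is a generic example rather than any insurer’s actual setup), raw files keep their original format and structure is applied only at read time:

```python
# Minimal sketch of schema-on-read on Hadoop storage (hypothetical HDFS
# path and field names; assumes PySpark on a Hadoop cluster). Raw files
# are stored in their original format and structured only when read.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-demo").getOrCreate()

# CSV files landed in HDFS as-is; header and types are applied at read time.
claims = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///landing/claims/*.csv"))   # hypothetical path

# Scale out by adding nodes: the same aggregation runs across the cluster.
claims.groupBy("line_of_business").agg(F.sum("amount").alias("total_amount")).show()
```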

One insurance company that recognized the power of big data, but realized that its existing infrastructure wasn’t adequate to make use of the growing and diverse data volume, was Markerstudy, a rapidly expanding UK insurer. In 2014, Markerstudy embarked on a big data project and decided to implement a Cloudera Enterprise Data Hub environment with SAS Analytics. The initial focus of this project was to enhance the performance and capabilities of its rating engine; however, the project quickly expanded to include new data sources benefiting other areas of operations, including claims fraud and customer insight (gaining a 360-degree view). The project’s success was more or less instantaneous, with an ROI within 5 months. Read more about the Markerstudy case study.

In addition, to learn more about how other insurance companies are using big data for competitive advantage, download the SAS and Cloudera white paper “Driving Growth in Insurance with a Big Data Architecture” or view the on-demand webinar with Markerstudy CIO Dan Fiehn, talking about how Markerstudy is transforming its business with SAS Analytics and Cloudera’s Enterprise Data Hub.

I’m Stuart Rose, Director, Global Insurance Practice at SAS. For further discussions, connect with me on LinkedIn and Twitter.
