Life insurance is hard to sell. People don’t like talking about it because it draws attention to a topic no one wants to consider for themselves.

Death.

Life insurers have earned a reputation for laborious and intrusive processes to secure the coverage that everyone needs. So, over the years, they have innovated: trimming those processes, making them less intrusive and offering appeals like “guaranteed acceptance” or “no medical exam.”

In one training session early in my career, a consultant pitched a sales team a new policy that asked just one question, with guaranteed issuance and no medical exam. Though a stripped-down version of a complete life insurance product, it had immediate appeal: something that could be sold quickly to make quota.

It’s no surprise insurers are looking to AI to streamline the life insurance sales cycle further. AI technologies like facial recognition (a form of computer vision) can speed up the acquisition process and double down on the old standbys of “no forms to fill out” and “no medical exam.”

Facial analytics in underwriting: A risky proposition

What happens when the introduction of artificial intelligence carries forward the bias life insurers have created?

In one article, “Using Facial Analytics in Underwriting,” Dr. Karl Ricanek suggests using facial analytics to capture health signals and identify “characteristics associated with risk factors and lifespan.” He further suggests that taking a selfie can, and will, precisely capture metrics like BMI in the underwriting process.

While intriguing, this idea presents serious risk for insurers looking to deploy AI responsibly. Historically, facial recognition has performed poorly with non-white populations, compounding the potential for discrimination.
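What would catching that look like? Below is a minimal sketch of a per-group error audit, on synthetic numbers rather than any real model or dataset (the group labels and error levels are entirely hypothetical): if a selfie-to-BMI estimator is systematically worse for one group, the gap surfaces immediately.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical evaluation set: true BMI and selfie-model predictions for
    # two demographic groups. Group B's wider, shifted errors stand in for a
    # model trained mostly on group A's faces.
    true_a = rng.normal(26, 4, 1000)
    true_b = rng.normal(26, 4, 1000)
    pred_a = true_a + rng.normal(0.0, 1.5, 1000)  # well calibrated
    pred_b = true_b + rng.normal(2.0, 4.0, 1000)  # biased and noisy

    mae_a = np.mean(np.abs(pred_a - true_a))
    mae_b = np.mean(np.abs(pred_b - true_b))
    print(f"MAE, group A: {mae_a:.2f}   MAE, group B: {mae_b:.2f}")
    # A per-group error gap like this is what an underwriting audit should
    # surface before any selfie-based score reaches production.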

The troubling legacy of BMI in life insurance

The Metropolitan Life Insurance Company introduced the predecessor of today’s Body Mass Index (BMI) during World War II. In 1943, the insurer sought a way to classify individuals by height and weight, and its resulting tables of “ideal” weights “became the national standards for ‘ideal’ body weight.” In the 80 years since, health professionals and health organizations, including the American Medical Association, have recognized the inherent bias in using BMI: its historic harm, its use for racist exclusion and its loss of predictive power when applied to an individual.

Author Kells McPhillips highlights the many issues associated with body mass index: it favors a traditional European body type and misses factors such as muscle mass and bone density. Classifying an individual’s health and fitness, and therefore their mortality risk, on the basis of BMI ignores their national origin, race and body diversity.
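For context, the index itself is blunt arithmetic: weight divided by the square of height. A small illustration (the figures are made up):

    def bmi(weight_kg: float, height_m: float) -> float:
        """Body Mass Index: kilograms divided by meters squared."""
        return weight_kg / height_m ** 2

    # A 100 kg, 1.85 m bodybuilder and a 100 kg, 1.85 m sedentary applicant
    # receive the identical score, about 29.2 ("overweight"), because the
    # formula sees neither muscle mass nor bone density.
    print(round(bmi(100, 1.85), 1))  # 29.2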

In short, BMI is racist.

The processes that life insurers seek to replace with artificial intelligence will carry the BMI bias forward if BMI remains an input. Issuing a life insurance policy means weighing a person’s insurability, and therefore their desirability as a risk, across multiple data points. Boiling that judgment down to a simple photo will perpetuate bias.

Insurers have already tested facial recognition for use cases such as identifying fraud. Infamously, Lemonade deployed an AI chatbot in 2021, claiming an ability to detect fraud through non-verbal cues. The resulting calamity unfolded on Twitter, leaving the organization accused of using pseudoscience (phrenology/physiognomy) and peddling “AI snake oil.”

Lemonade quickly deleted the Twitter exchange.

Navigating regulation and responsibility in AI

Pursuing the selfie-to-BMI use case invites the same result. Fortunately, in some jurisdictions, including the EU, the AI Act classifies AI systems used for social scoring or facial recognition as “unacceptable” and therefore bans them. However, facial analytics could still be argued as acceptable for determining pricing, and it’s reasonable to conclude that insurers are thinking through how this form of AI can generate efficiencies and enhance the customer acquisition process. That makes it critical to ensure any pricing models are “open box,” allowing those who set pricing to identify biases before premium models built on facial analytics go into production.
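As a sketch of what an open-box review could catch, consider a deliberately simple linear pricing model fit on made-up data; the features, coefficients and the proxy variable below are all hypothetical:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000

    # Hypothetical applicant data: age, a selfie-estimated BMI and a
    # neighborhood variable that, in practice, can proxy for race.
    age = rng.uniform(25, 65, n)
    bmi_est = rng.normal(27, 4, n)
    proxy = rng.integers(0, 2, n).astype(float)

    # Historical premiums that (problematically) priced the proxy in.
    premium = 20 + 0.4 * age + 0.8 * bmi_est + 6.0 * proxy + rng.normal(0, 2, n)

    # An open-box model exposes its weights for review before deployment.
    X = np.column_stack([np.ones(n), age, bmi_est, proxy])
    coef, *_ = np.linalg.lstsq(X, premium, rcond=None)
    for name, c in zip(("intercept", "age", "bmi_est", "proxy"), coef):
        print(f"{name:>9}: {c:+.2f}")
    # A large weight on the proxy variable is visible here, before the
    # model ever prices a policy. A black-box model hides exactly this.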

In another article from Insurance Thought Leadership, Pendella Technologies CEO Bob Gaydos suggests that “AI just doubles down on what it thinks it knows.” AI knows what it’s taught. If developers teach AI something without asking whether it should be taught, the AI will use that knowledge.

Charging AI, especially facial recognition software, with identifying and classifying risk acceptability and desirability for life insurance using BMI (or other factors known to generate bias) will only perpetuate biases that have persisted in the industry for decades.
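A toy demonstration makes the point, again on entirely synthetic data: teach a model historically biased decline decisions and it reproduces the disparity on its own.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4000

    # Hypothetical history: both groups carry identical true risk, but past
    # underwriters declined group B applicants far more often.
    group = rng.integers(0, 2, n).astype(float)   # 0 = group A, 1 = group B
    risk = rng.normal(0, 1, n)                    # same distribution for both
    declined = (risk + 1.0 * group + rng.normal(0, 0.5, n)) > 1.2

    # "Teach" a simple linear model on those decisions, group included.
    X = np.column_stack([np.ones(n), risk, group])
    w, *_ = np.linalg.lstsq(X, declined.astype(float), rcond=None)
    predicted = X @ w > 0.5

    for g, label in ((0, "A"), (1, "B")):
        mask = group == g
        print(f"group {label}: taught {declined[mask].mean():.0%}, "
              f"model {predicted[mask].mean():.0%}")
    # The model dutifully reproduces the decline-rate gap it was taught.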

Download this eBook to learn more about the business ethics of AI and why organizations should embrace AI, among other approaches to establishing an AI strategy.
