Underwriters offer a critical skill to the insurance industry – making decisions. The journey they take to arrive at a decision to accept, reject or modify a risk truly represents the unique nature of the underwriting discipline.

The power of today’s artificial intelligence holds the potential to usher in a new age of underwriting. But as the saying goes, you can’t know where you’re going if you don’t know where you’ve been. Let’s take a step back to the era before artificial intelligence and agentic AI.

Reflections from an underwriting career

In my early career, I enjoyed the distinct privilege of underwriting the then-largest territory (by PIF, or policies in force) at my Fortune 100 company. An army of insurance professionals supported the operation and worked tirelessly to gather information, build a policy into our administration system, screen the business and secure documentation from the customer.

There was a time when at least one person touched every single homeowners policy written – sometimes as many as five. These were the steps:

  • An agent collected the customer information.
  • A processor “processed” the policy on our green screens (a DOS-based system circa the 1980s).
  • A screener reviewed to ensure requirements and standards were met.
  • A field inspector examined the property.
  • Eventually, it landed on the underwriter’s desk for final review.

This wildly inefficient process was the industry standard. You may laugh, but this is how business was done.

AI and underwriting rules

One day, underwriting rules were introduced – these algorithms only referred business to a person if certain conditions were met. Conceptually, executives began talking about “bypass” rates and setting targets to reduce policy acquisition costs (sound familiar to anyone else?). As results improved or declined, rule parameters were altered to boost profitability or decrease expenses.
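Conceptually, those early rule sets worked like a checklist: if no condition tripped, the policy "bypassed" human review entirely. Here is a minimal sketch of that idea in Python – the field names and thresholds are invented for illustration, not any carrier's actual rules:

```python
# Hypothetical illustration of an early straight-through-processing rule set.
# All thresholds and field names are invented for the sketch.

def needs_human_review(policy: dict) -> bool:
    """Refer the policy to an underwriter if any rule condition is met."""
    rules = [
        policy["coverage_amount"] > 750_000,   # high-value dwelling
        policy["prior_claims"] >= 2,           # claims history
        policy["roof_age_years"] > 20,         # property condition
        policy["protection_class"] >= 9,       # limited fire protection
    ]
    return any(rules)

def bypass_rate(policies: list[dict]) -> float:
    """Share of policies issued with no human touch -- the number executives watched."""
    bypassed = sum(1 for p in policies if not needs_human_review(p))
    return bypassed / len(policies)
```

Tightening or loosening any one threshold moves the bypass rate – which is exactly how rule parameters were altered to boost profitability or decrease expenses.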

This unsophisticated and somewhat crude approach represented the foundation of what we expect technology to do today – for good or bad. The AI models that were eventually developed based on these rules fall comfortably into the expectations we see for all models: As George Box famously stated, “All models are wrong, but some are useful.”

What did he mean by that statement? Essentially, as Rick Wicklin says, Box wanted to emphasize the notion that a model of reality is different from reality.

Countless times in the early days of underwriting rules, I found myself in arguments with colleagues about “what the machine did” and “what the machine bypassed.” Well, of course it did! Any model can make a mistake, just like a human. The difference is that we never expect a human to be 100% accurate, but we don’t give such consideration to the machine counterpart.

This is the hubris that leads to madness.


How will underwriting processes change with agentic AI?

A basic programming exercise involves outlining “How to make a peanut butter and jelly sandwich.” For a person, making such a meal doesn’t seem terribly complicated (unless you have kids). But for a machine, this pedestrian task reflects dozens of individual steps.

By understanding how machines think (and the role of machine learning), we can begin to evaluate how AI agents can help us with terrifyingly complex workflows, like underwriting. Even the most “vanilla risk” may require a thousand or more steps in a process (can I get an amen from anyone who’s worked on a process map?). And it can involve multiple people or teams.

If organizations commit to fully understanding their processes first, then determining where it’s best for an AI underwriter versus a human underwriter to own a task, they could see drastic increases in throughput. Without adding a single headcount.

According to a survey by Microsoft, 45% of leaders prioritize expanding team capacity with digital labor in the next 12 to 18 months, while 47% prioritize upskilling their existing workforce. Even though a third of leaders are considering headcount reductions, new roles are also emerging.

Which tasks should AI agents (or humans) handle?

So, how do you decide what an AI agent handles versus what a human underwriter handles?

Consider a common practice of insurance companies – that is, developing “underwriting authority grants” (i.e., documents that clearly indicate to what extent a certain level of underwriter can approve a level of coverage or type of policy). These controlling documents allow each company to maintain its risk appetite.

A similar approach could be codified into a company’s code of conduct. For example, The Hartford publicly refers to its responsible AI principles in a Code of Ethics and Business Conduct. This document outlines the expectation for all content produced by generative AI (GenAI) to be reviewed. Each employee is responsible for using the technology ethically.

If a chief underwriting officer had a team of AI agents and human underwriters at their disposal, any decisions rendered by the staff (human or AI) should fall within the boundaries of the organization’s governing documents.
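An authority grant lends itself naturally to being codified as data, so that every decision – human or AI – can be checked against the same governing document. The roles, limits and policy types below are invented for the sketch:

```python
# Hypothetical sketch of an underwriting authority grant codified as data.
# Roles, coverage limits and policy types are invented for illustration.

AUTHORITY_GRANTS = {
    "ai_agent":            {"max_coverage": 500_000,   "policy_types": {"homeowners"}},
    "underwriter_level_1": {"max_coverage": 1_000_000, "policy_types": {"homeowners", "dwelling_fire"}},
    "underwriter_level_2": {"max_coverage": 5_000_000, "policy_types": {"homeowners", "dwelling_fire", "umbrella"}},
}

def within_authority(role: str, policy_type: str, coverage: int) -> bool:
    """Check whether a decision falls inside the grant for the given role."""
    grant = AUTHORITY_GRANTS[role]
    return policy_type in grant["policy_types"] and coverage <= grant["max_coverage"]
```

The point of the sketch: the same guardrail applies regardless of who (or what) rendered the decision, which is how a company maintains its risk appetite across a mixed team.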


Pick the right tasks for agentic AI underwriting processes

Gartner predicts that “over 40% of agentic AI projects will be canceled by the end of 2027.”

Why is this?

For starters, organizations could be picking the wrong things to “agentify” (that’s not a real word, but all words are made up anyway). Ask yourself:

  • What is AI good at?
  • What are people good at?

As Thorsten Hein described it, AI agents should own the routine, the repetitive, the mundane. But step 1 should be to truly understand your processes.
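Once you understand your processes, the triage itself can be simple. Here is a toy sketch of routing a task to an AI agent or a human based on traits you might capture during process discovery – the task traits are invented for illustration:

```python
# Hypothetical task triage: route each mapped process task to an AI agent or a
# human underwriter. The traits (repetitive, rules_based, requires_judgment)
# are invented for the sketch -- yours would come from your own process maps.

def route_task(task: dict) -> str:
    """Return who should own the task: 'ai_agent' or 'human_underwriter'."""
    if task["repetitive"] and task["rules_based"] and not task["requires_judgment"]:
        return "ai_agent"
    return "human_underwriter"

tasks = [
    {"name": "order MVR report", "repetitive": True,
     "rules_based": True, "requires_judgment": False},
    {"name": "negotiate terms on a borderline risk", "repetitive": False,
     "rules_based": False, "requires_judgment": True},
]
assignments = {t["name"]: route_task(t) for t in tasks}
```

The resulting mix – whatever the ratio – falls out of the tasks themselves rather than an arbitrary automation target.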

Your goal should not be 100% automation – AI agents should not handle everything. Want some great advice? Put together the right mix, whether it’s 60% AI and 40% human or some other ratio. The bottom line is that AI should handle only what it does best, so people can focus on the rest.

Finally, and maybe most importantly, resist the urge to launch a “moonshot” AI use case. Reducing clicks can save millions of dollars (not sexy, sure). Here’s a great article on a process to quantify the ROI.

The dawn of agentic AI for insurance

It would be a mistake to expect perfection at the beginning of agentic AI. F(AI)LURE can be a great thing. Some of the worst decisions I ever made on the underwriting desk offered the greatest opportunities for growth.

It may seem counterintuitive, but agentic AI adoption begins with a deliberate focus on the humans in your organization. Make no mistake, failed technological adoption is inherently a people problem.

Just as I grew, coached others, and applied my learning, I’d challenge you to consider the following wisdom:

  • Hire for character, not for skill. Managing AI will take people willing to work alongside the technology in a way that invites trust in that technology, just as you’d want a person to trust the team around them to work toward a common goal. I can teach someone prompts and coding. Coaching character is fundamentally harder.
  • Continuously develop and learn. What’s true today will change tomorrow. Eight years ago, I wrote an article on development in which I noted, “Development represents our most noble countermeasure for change.” Create space for your staff to learn so they can assimilate change faster.
  • Focus on quality. You have programs to check people’s work. Make sure you’re cultivating robust AI governance as well.

Now, challenge yourself to apply the same mindset to working with AI agents. If you treat them as a part of your team, the outcomes can meet or exceed those you’d expect from any high-performing team.

Looking ahead: The adventures of bionic underwriter and AI underwriter agent

The costs associated with rogue AI have been widely shared (e.g., fines under the EU AI Act of up to €35,000,000 or 7% of worldwide annual turnover for the preceding financial year, whichever is higher). Every carrier will be subject to market conduct evaluations, litigation and brand damage if their AI agents make bad decisions.

Should this stop us from developing these capabilities? Of course not!

Plenty of evidence supports the concept that having access to AI systems and tools and working alongside AI agents boosts employee retention, creates a draw for early career professionals and can start filling the insurance talent gap.

Insurers already shoulder the risk of fines and damages during the normal course of business. The era of AI agents simply represents a new type of risk assessment for insurance companies to undertake. The decision to manage the decisions of AI agents reflects a natural evolution of the business.

It doesn’t matter what your business is – we are all in the business of AI. It’s no longer a question of if underwriting will adopt AI (agentic or otherwise). It’s just a question of when and how fast.

Learn how the business decisions you make every day can be the foundation for success with agentic AI. 

Download the e-book


About Author

Franklin Manchester

Prior to joining SAS, Franklin held a variety of individual contributor and people leader roles in Property and Casualty Insurance. He began his career as an Associate Agent for Allstate in Boone, NC. In 2005, he joined Nationwide Insurance as a personal lines underwriter. For 17 years at Nationwide, he managed personal lines and commercial lines underwriters, portfolio analysts, sales support teams and sales managers. Additionally, he supported staff operations providing thought leadership, strategy and content for sales executive offices.
