Much of the discussion around how to manage the advanced forms of artificial intelligence—machine learning, generative AI, large language models—deals with them only as technologies. This is a mistake.

Advanced forms of AI have characteristics that set them apart from other technologies, including the ability to make recommendations and decisions on their own that may not reflect corporate values. To maintain adequate governance and keep risk exposure at an acceptable level, insurers should apply some of their standard human resources processes to advanced AI.

Advanced AI is different

The fundamental problem in treating advanced AI as only another technology is that these tools can:

  • Learn on their own.
  • Generate output on their own.
  • Make recommendations or decisions on their own, which may or may not reflect corporate values, and which can build or erode trust with customers and employees.

Unlike traditional technologies, AI can perform these activities without the direct involvement of a human. There is no programmer or manager acting as a backstop to ensure that corporate guidelines are followed, that bias and discrimination are absent, and that reputational damage is avoided.

Insurers can address this by applying some standard human resource processes to advanced AI. Three examples are technical training, cultural norms, and performance reviews.

The importance of technical training

Like any employee, AI must be onboarded to learn “how we do things around here.”

There is a great deal of discussion around using vast quantities of data to train large language models, tapping unstructured corporate information sources, and bringing in third-party data to establish a base understanding of the business. For AI, however, training goes beyond standard system testing, where test scripts are written and expected outputs are defined. AI technical training should also test for understanding, identify the inferences the model makes, and highlight answers that are correct but not desired.
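
To make this concrete, here is a minimal sketch of what such a test might look like. The generate_answer() function is a hypothetical stand-in for the insurer's own model interface, and the required phrases are illustrative; this is a sketch of the idea, not a prescribed implementation.

```python
# A minimal sketch of an AI "technical training" check. It tests not only for
# correct answers, but also flags answers that are correct yet not desired
# (e.g., quoting a premium without the required disclosure language).

from dataclasses import dataclass

@dataclass
class TestCase:
    question: str
    must_contain: list       # facts the answer must include to be "correct"
    must_also_contain: list  # phrases required by policy, even when the facts are right

def generate_answer(question: str) -> str:
    # Hypothetical stand-in for the real model call; replace with the insurer's own interface.
    return "Your estimated premium is $1,200 per year."

def review(case: TestCase) -> str:
    answer = generate_answer(case.question).lower()
    correct = all(fact.lower() in answer for fact in case.must_contain)
    desired = all(phrase.lower() in answer for phrase in case.must_also_contain)
    if correct and desired:
        return "pass"
    if correct:
        return "correct but not desired"  # right answer, wrong presentation
    return "fail"

cases = [
    TestCase(
        question="What would my annual premium be for this policy?",
        must_contain=["$1,200"],
        must_also_contain=["this is an estimate", "subject to underwriting review"],
    ),
]

for case in cases:
    print(case.question, "->", review(case))
```

In this example the model gets the premium right but skips the required disclosure wording, so the case is flagged as "correct but not desired" rather than passed.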

Once initial training is complete, an AI training plan must also address the need for continuous learning.

All of these objectives are addressed in a robust HR training approach; it simply needs to be extended beyond human employees to AI applications.

Being aware of and understanding cultural norms

AI must be aware of, understand, and reflect the values of an organization in its outputs.

Just as employees receive regular reminders of “what our company is all about,” AI must also have this understanding to guide its actions. As humans know, giving an answer is only half the challenge; how it is communicated is the other. HR programs such as values training, nondiscrimination in the workplace, and company history can help here.
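
As a rough illustration of how that cultural grounding could be operationalized, the sketch below prepends a values reminder to every prompt and screens each response for wording the company would not want a customer to see. The build_prompt and screen_response names, the values text, and the prohibited phrases are all assumptions made for illustration.

```python
# A hypothetical sketch of weaving corporate values into every model interaction:
# a values preamble is added to each prompt, and each response is screened for
# wording the company does not want customers to see.

COMPANY_VALUES = (
    "Always be transparent about what is an estimate versus a final decision. "
    "Use plain, respectful language. Never speculate about a customer's health, "
    "finances, or personal circumstances."
)

PROHIBITED_WORDING = ["guaranteed approval", "risk-free", "you will definitely"]

def build_prompt(user_question: str) -> str:
    # Remind the model of "what our company is all about" on every call.
    return f"{COMPANY_VALUES}\n\nCustomer question: {user_question}"

def screen_response(response: str):
    # Giving an answer is half the job; how it is worded is the other half.
    violations = [w for w in PROHIBITED_WORDING if w in response.lower()]
    return (len(violations) == 0, violations)

ok, issues = screen_response("Good news: guaranteed approval for your policy!")
print(ok, issues)  # False ['guaranteed approval']
```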

Checking in with performance reviews

Once AI has been trained and imbued with the appropriate corporate “way,” it’s important to recognize that conditions change. AI activities must be monitored both for the quality of their results and to ensure that the output still meets current business needs.

AI output must also be reviewed continuously to make sure that unwanted bias and discrimination have not crept in.
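
A simple, hypothetical version of such a review might periodically compare outcomes across customer segments and flag any segment whose results drift from the norm for human follow-up. The decision log, segment labels, and tolerance below are illustrative assumptions, not a recommended methodology.

```python
# A minimal sketch of a recurring "performance review" for an AI underwriting
# assistant, assuming the insurer logs each decision with a customer segment
# and an outcome. Segments whose approval rate drifts far from the overall rate
# are flagged for human review. The log below is illustrative, not real data.

from collections import defaultdict

decision_log = [
    {"segment": "urban", "approved": True},
    {"segment": "urban", "approved": True},
    {"segment": "urban", "approved": False},
    {"segment": "rural", "approved": False},
    {"segment": "rural", "approved": False},
    {"segment": "rural", "approved": True},
]

def approval_rates(log):
    counts = defaultdict(lambda: [0, 0])  # segment -> [approved, total]
    for record in log:
        counts[record["segment"]][1] += 1
        if record["approved"]:
            counts[record["segment"]][0] += 1
    return {seg: approved / total for seg, (approved, total) in counts.items()}

def flag_disparities(log, tolerance=0.15):
    rates = approval_rates(log)
    overall = sum(r["approved"] for r in log) / len(log)
    return [seg for seg, rate in rates.items() if abs(rate - overall) > tolerance]

print(approval_rates(decision_log))   # per-segment approval rates
print(flag_disparities(decision_log)) # segments needing a closer look
```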

Just as humans receive feedback on how they are doing, the best applications of AI will include some type of performance review approach.

Start on your AI journey

Insurers can bring together their IT technical team members and HR experts to share knowledge and design ways to apply company-specific HR programs to advanced AI. Objectives of these sessions include:

  1. IT outlines how advanced AI learns and applies that learning.
  2. HR identifies the programs that they use to guide employees in their jobs.
  3. The group identifies where these two realms intersect and proposes initiatives to include in the advanced AI development plan.


About Author

Fitz Fitzgerald

Advisory Industry Consultant (Insurance)

Mike (Fitz) Fitzgerald brings extensive industry experience to his consulting role. Prior to joining SAS, he was vice president of enterprise underwriting solutions at Zurich North America, where he led the evaluation of technologies to support a new product development process. He held a number of leadership positions at Royal & Sun Alliance, including as field operations executive in the loss sensitive / global accounts division. His technology implementation experience includes the installation and maintenance of agency management, automobile policy administration, and workers compensation systems. In the 1990s, Mike led an initiative which delivered one of the first global online underwriting and claims learning platforms in the industry.
