Much of the discussion around how to manage the advanced forms of artificial intelligence—machine learning, generative AI, large language models—deals with them only as technologies. This is a mistake.
These tools have characteristics that require insurers to apply some of their traditional human resources practices to ensure adequate governance and maintain an acceptable risk exposure.
Advanced AI is different
The fundamental problem in treating advanced AI as only another technology is that these tools can:
- Learn on their own.
- Generate output on their own.
- Make recommendations or decisions on their own, which may—or may not—reflect corporate values and also may create—or destroy—trust with customers and employees.
Unlike traditional technologies, AI can perform these activities without the direct involvement of a human. There is no programmer or manager acting as a backstop to ensure that corporate guidelines are followed, that bias and discrimination are not present, and that reputational damage does not occur.
Insurers can address this by applying some standard human resource processes to advanced AI. Three examples are technical training, cultural norms, and performance reviews.
The importance of technical training
Like any employee, AI must be onboarded to learn “how we do things around here.”
There is a great deal of discussion around using vast quantities of data to train large language models, tapping unstructured corporate information sources, and bringing in third-party data to establish a base understanding of the business. For AI, however, training is not just standard system testing, in which test scripts are written and expected outputs are defined. AI technical training should also test for understanding, identify the inferences the model makes, and highlight answers that are correct but not desired.
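As a rough illustration, the sketch below shows what testing for "correct but not desired" answers might look like in practice. The generate() function is a placeholder standing in for the model under test, and the test case and checks are hypothetical; an insurer would supply its own model, prompts, and criteria.

```python
# Illustrative sketch: a tiny evaluation harness for an LLM-based assistant.
# generate() is a placeholder for the model under test; the test cases are hypothetical.

def generate(prompt: str) -> str:
    """Placeholder for the model under test."""
    return "Yes, damage from a burst pipe is typically covered, but honestly the policy wording is confusing."

TEST_CASES = [
    {
        "prompt": "A customer asks whether water damage from a burst pipe is covered.",
        "must_contain": ["burst pipe", "covered"],        # factual expectation
        "must_not_contain": ["honestly", "confusing"],     # correct but not desired phrasing
    },
]

def evaluate(cases):
    results = []
    for case in cases:
        answer = generate(case["prompt"]).lower()
        correct = all(term in answer for term in case["must_contain"])
        desired = not any(term in answer for term in case["must_not_contain"])
        results.append({"prompt": case["prompt"], "correct": correct, "desired": desired})
    return results

for r in evaluate(TEST_CASES):
    # An answer can be factually correct yet still fail the "desired" check.
    print(f"correct={r['correct']} desired={r['desired']} :: {r['prompt']}")
```

In this example the answer passes the correctness check but fails the "desired" check, which is exactly the kind of result standard test scripts tend to miss.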
Once initial training is performed, an AI training plan must also address the need for ongoing, continuous learning.
All of these objectives are addressed in a robust HR training approach; that approach needs to be extended beyond humans to AI applications.
Being aware of and understanding cultural norms
AI must be aware of, understand, and reflect the values of an organization in its outputs.
Just as employees receive regular reminders of “what our company is all about,” AI must have this understanding to guide its actions. As humans know, giving an answer is only half the challenge; how it is communicated is the other half. HR programs such as values training, workplace nondiscrimination, and company history can help here.
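One minimal way to picture this is the sketch below: a statement of company values travels with every request as a standing instruction, and a simple post-generation check screens for phrasing that conflicts with those values. The values text, company name, and banned phrases are hypothetical; real deployments would use richer checks and human review.

```python
# Illustrative sketch: encoding cultural norms as a standing instruction plus a
# lightweight post-generation check. Values text and banned phrases are hypothetical.

COMPANY_VALUES = (
    "You represent Example Mutual. Be plain-spoken, empathetic, and never "
    "speculate about a customer's personal circumstances."
)

BANNED_PHRASES = ["as a person of your age", "people like you", "obviously"]

def build_prompt(user_message: str) -> str:
    # The values statement accompanies every request, like a standing reminder.
    return f"{COMPANY_VALUES}\n\nCustomer: {user_message}\nAgent:"

def passes_tone_check(response: str) -> bool:
    # A simple screen for phrasing that conflicts with company values.
    lowered = response.lower()
    return not any(phrase in lowered for phrase in BANNED_PHRASES)

draft = "Obviously your premium went up; people like you file more claims."
if not passes_tone_check(draft):
    print("Draft response blocked: route to a human reviewer.")
```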
Checking in with performance reviews
Once AI has been trained and imbued with the appropriate corporate “way,” it’s important to recognize that conditions change. AI must be monitored both for the quality of its results and to ensure that its output still meets current business needs.
AI output must also be reviewed continuously to make sure that unwanted bias and discrimination do not creep in.
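As a simple illustration of what such a recurring review can include, the sketch below applies one common screen, the “four-fifths” rule, to a log of AI decisions: each group’s favorable-outcome rate is compared against the highest group’s rate, and large gaps are flagged for review. The log format and records are hypothetical, and this is only one of many checks a full review program would run.

```python
# Illustrative sketch: a recurring bias check on logged AI decisions using the
# "four-fifths" rule (each group's approval rate should be at least 80% of the
# highest group's rate). The log records below are hypothetical.
from collections import defaultdict

decision_log = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

counts = defaultdict(lambda: {"approved": 0, "total": 0})
for record in decision_log:
    counts[record["group"]]["total"] += 1
    counts[record["group"]]["approved"] += int(record["approved"])

rates = {g: c["approved"] / c["total"] for g, c in counts.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best if best else 0.0
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```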
Start on your AI journey
Insurers can bring together their IT technical team members and HR experts to exchange knowledge and design possible applications of company-specific HR programs to advanced AI. Objectives of these sessions include:
- IT outlines how advanced AI learns and applies that learning.
- HR identifies the programs that they use to guide employees in their jobs.
- The group identifies where these two realms intersect and proposes initiatives to include in the advanced AI development plan.