Editor's note: this post was co-authored by Ali Dixon Ricke, Mary Osborne and Franklin Manchester
A recent study conducted by SAS and Coleman Parks shockingly revealed that 92% of insurers have set aside budget for generative AI in 2025. We’ve seen the awesome power of generative AI thanks to ChatGPT’s explosive growth in the past year. Decision makers believe generative AI can drive innovation, improve the customer experience, and deliver measurable improvements in predictive analytics.
Considerations when using generative AI technology
The industry’s initial reaction of avoidance and banning due to data security, privacy, and reputational risks has shifted. Leaders think the benefits outweigh the risks and most are running initial tests and professing that they have a good enough, if not complete understanding of the technology.
However, half of the respondents have 10% or less of their 2025 budget dedicated to governance and monitoring, while 9% have no budget allocated to it at all. Additionally, 58% describe their training as minimal, and 38% lack a GenAI policy that dictates how employees can and cannot use it. While these results are similar to findings in other sectors, with 7 out of 10 of these same decision makers using these tools at least once a week, the perfect scenario of downside risk is being created.
The 10 things insurance leaders need to know about generative AI
- 6% of insurers considering using large language models have privacy risk measures in place
- 11% have a “non-existent” governance framework for generative AI
- 8% don’t use Generative AI in professional life
- 4% have no plans to pursue using generative AI
- 19% are not considering synthetic data use cases
- 75% are concerned about privacy (see previous item)
- 3% are not prepared for regulation
- Only 8% are rethinking their enterprise data strategy to scale GenAI
- 34% see cost as an obstacle (yet 92% have set aside budget and 86% see the benefit of operational costs and time savings)
- 32% are only in the Pilot phase
Best practices for a generative AI strategy
Without a well-defined Generative AI strategy and governance framework, generative AI can be both a major privacy and operational risk. Constructing a security strategy for generative AI is still a topic that needs to be fully understood, with 75% of insurers in the study concerned over privacy risks.
It’s easy for people to get lulled into a false sense of security with LLMs. Public models are pervasive and easy to access. There’s a lot of value to be gained by experimenting with LLM bots to explore ideas and search for hidden insights. In thinking about the data to introduce to the model, it’s important to think about data quality. More data isn’t always better. Data quality for LLMs includes reducing the amount of duplication, ambiguity, and noise in the domain data.
Problems begin when people unwittingly share private or sensitive data by including it in prompts. The best-case scenario is that it’s a blip and there’s not a negative impact. The worst-case scenario happens when the public model has verbiage in their terms of service outlining the ways they use user-entered prompts as further training or fine-tuning inputs. At that point that public model becomes “contaminated” with the organization’s private or sensitive data. Once that data is in the model, it’s nearly impossible to remove it, so there’s a chance, through creative prompting, that private or sensitive data could be revealed. People who make mistakes like this aren’t bad actors. They’re your employees or your colleagues.
Moving models into production
There’s a lot of interest in generative AI, but there’s plenty of work to do before you can move to using a model in production. Generative AI use cases should be well thought out and have a narrow scope to start. It’s important for the people in your organization to be able to cut through the hype and identify a use case that makes sense. Starting small lets organizations better think through options for models as well as the curation of domain data to give the model the best chance of generating outputs that are relevant.
Deploying a Generative AI model involves more than just asking a bot a few questions. You have to think about adversarial testing—what happens if a bad actor decides to try to manipulate your model into behaving badly? You have to spend time to evaluate the results, just as you would any other type of model in your environment. Finally, ensure that your workforce is properly educated about generative AI and its acceptable uses within your organization, as educated employees reduce the risk of any AI malpractice or misuse.
Overcoming the obstacles of generative AI technology
Provide your employees the tools they need to be successful and the guidance and training on how they can safely and effectively use this technology. Our people are our greatest asset and providing them the latest and greatest technology keeps them engaged and solves problems. A recent report from Microsoft and LinkedIn found that 68% of people struggle with the pace and volume of work and 46% are burned out. Another study revealed 54% of early career employees’ decision to work for one employer versus another would be influenced by access to AI. Get your people the AI they need, they will thank you for it.
As you’re thinking through overcoming the obstacles of generative AI and the future of this technology, remember we’re all at an AI inflection point. Just like the commercialization of the world wide web in the mid 90’s - and just as ubiquitous as the internet has become in business- so too will AI. Now is the time to put in place measures to protect data, rethink your data strategy and stand-up governance.
Explore additional AI resources
LEARN MORE | Read the paper covering the study discussed in the articleLEARN MORE | The future of insurance
LEARN MORE | Read other SAS articles about AI