Failure accompanies all technological breakthroughs – the wheel, manned flight, the internet. AI is no exception.
I know insurance companies are using AI – SAS research confirms it: 90% of insurers have budgeted for GenAI in 2025. I also know that only about 1 in 10 insurers (11%) are fully “AI-prepared,” and other research finds that just 26% of organizations are generating value from AI.
There is great value in using AI the right way. But like any other technological innovation, AI can be used for good or bad.
Insurers will uncover more and more uses for AI that help drive innovation and efficiency – ultimately boosting that 26%. But the need to ensure that AI is effectively governed, managed, democratized and free from bias grows along with its use.
Failure is not a bad thing
Failure brings bad outcomes, for sure. But in and of itself, the act of failing carries no intention, benevolent or malicious. The learning that comes with failure, though, is always a choice.
As an example, let’s examine what we (the entire planet) learned from the CrowdStrike event.
Tripping over the cord
Munich Re places the cyber insurance market at ~$15bn, so July 19th’s CrowdStrike event, with an estimated cost of $1.5bn to cyber insurers, represents a potential 10-point hit to the combined ratio (a quick sanity check of that math follows the list below). It is the largest single insured-loss event in the cyber insurance industry’s 20-year history, resulting in:
- 8.5 million Microsoft Windows devices offline,
- Stranded travelers from canceled flights, and
- Delayed medical procedures and hand-documenting medical information.
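The combined-ratio math is simple division. A minimal sketch in Python, using only the two estimates above (a back-of-the-envelope check, not an actuarial analysis):

```python
# Back-of-the-envelope: the CrowdStrike loss expressed as combined-ratio
# points, using the estimates cited above.
market_premium = 15e9  # ~$15bn global cyber insurance premium (Munich Re)
insured_loss = 1.5e9   # ~$1.5bn estimated insured loss from the outage

# Combined ratio = (losses + expenses) / earned premium. Holding expenses
# constant, this one event adds losses worth 10% of premium:
added_points = insured_loss / market_premium * 100
print(f"Added combined-ratio points: ~{added_points:.0f}")  # ~10
```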
Ironically, this was an inside job – an organization charged with our global cyber protection pushed an update that crashed the very systems it was meant to protect.
This “accident,” we’ll call it, reminds us that vigilance must be maintained even in the mundane. We learned that small errors could (and did) send the world into chaos. And with AI, failures can and will be amplified to full computational scale.

Only when we consider the infinitely scalable capabilities and raw power of artificial intelligence can AI’s significant cost be justified. That same consideration demands care in choosing what AI should do, and what it shouldn’t.
The power of today’s AI
We have reached “AI exascale,” meaning the most powerful computers on planet Earth can perform calculations at 1 exaFLOPS (one quintillion floating-point operations per second). Said differently, the machine performs a 1 followed by 18 zeros (10^18) calculations in a single second. At a rate of one calculation per second, it would take a human roughly 31.7 billion years to do the same work.
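That 31.7-billion-year figure is straightforward arithmetic. A minimal sketch, assuming a human manages one calculation per second, nonstop:

```python
# One exaFLOPS is 10**18 floating-point operations per second.
exa_ops_per_second = 10**18

# Assumption: a human performs one calculation per second, without rest.
seconds_per_year = 365.25 * 24 * 3600  # ~31.6 million seconds

human_years = exa_ops_per_second / seconds_per_year
print(f"{human_years:.3e} years")  # ~3.169e+10, i.e. ~31.7 billion years
```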
With this power, could AI failures take billions of years to erase?
Well, maybe. One AI researcher, Eliezer Yudkowsky, has gained notoriety for the “squiggle maximizer” thought experiment: a powerful AI relentlessly chasing a pedestrian goal, like producing paperclips.
In this scenario, AI consumes so many resources that it results in humanity’s extinction. Considering AI’s power requirements, such a scenario feels more likely than a malevolent, Skynet-type AI attacking us. Even seemingly innocuous mistakes could themselves be catastrophic.
Imagine, for example, a million policyholders receiving an endless stream of spam emails urging them to bundle auto and home policies. This could crash email servers around the globe.
Any one of the estimated 30,000 fintech organizations could make the above scenario a reality with the click of a mouse. The sheer number of firms peddling AI solutions truly embodies what some have referred to as the “AI Wild West” (see this fun Matthew McConaughey ad campaign).
No single sheriff will police tens of thousands of individual enterprises clamoring for your attention (and your money). This AI amplification will further distort weaknesses in underlying systems.

Consider the FTC study on the relationship between credit-based insurance scores and predicted risk for African Americans and Hispanics. Amplify that kind of embedded bias with AI, and the implications open insurers to a host of scrutiny.
So, prudence is required in selecting your partner and solution – no regulator will police it for you. Alarmingly, when you consider any one of these AI providers (or pretenders), the power required to support AI boggles the mind.
It’s electric
Recently, it became apparent that OpenAI believes the US needs generational policy change and infrastructure investment to maintain global AI leadership. Yes, Altman spent millions on the URL chat.com (a brilliant marketing move, BT dubs), but the fact remains that AI costs a lot.
A single ChatGPT query requires 10 times the power of a Google search, as reported by NPR. Those vast resource requirements are built into the cost. And the highway to the intelligence age will be paved with bad decisions: cooling data centers in the desert with clean drinking water, millions of gallons of it annually.
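For a rough sense of scale, here’s a hedged sketch. The ~0.3 Wh per Google search baseline is a commonly cited estimate (not from the NPR report), and the daily query volume is purely illustrative:

```python
# Rough scale check on the "10x a Google search" claim reported by NPR.
# Assumptions: ~0.3 Wh per Google search (a commonly cited estimate) and
# an illustrative query volume -- neither figure is from the article.
google_wh_per_query = 0.3
chatgpt_wh_per_query = 10 * google_wh_per_query  # ~3 Wh, per the 10x claim

daily_queries = 100_000_000  # purely illustrative
daily_mwh = chatgpt_wh_per_query * daily_queries / 1e6  # Wh -> MWh
print(f"~{daily_mwh:,.0f} MWh per day at that volume")  # ~300 MWh/day
```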
The narrative becomes bleaker still when we consider projections like “data centers will use 4.5% of global energy generation by 2030.” As a result, continued exploitation of fossil fuels will work against progress. And the most impacted will be already marginalized populations.
Climate risk is already a calamity for insurers. But the point here is simple: AI demand will continue to drive up costs for years to come.
So, we cannot throw AI at everything. In the long run, that could do more harm than good.
A tempered approach to an enterprise-wide data and AI strategy should acknowledge that certain non-AI tools still add value, and that smashing a walnut with an AI sledgehammer is wasted effort.
The bottom line: Accept failure, embrace learning
For insurers, AI use is as inevitable as the failures that accompany it. We’ve seen instances in, and adjacent to, the sector in which AI and non-AI technology have failed – spectacularly so. Consider:
- Cigna’s PxDx claims decisioning – the algorithm denied hundreds of thousands of preapproved claims in seconds.
- Lemonade’s “AI Jim” – a tool that was touted as using “non-verbal cues” to fight fraud.
- A self-driving vehicle – this car, unfortunately, dragged a pedestrian who had been struck by another vehicle and thrown into its path.
Do not be afraid to fail. In fact, be afraid of those who believe they won’t.
Each of the examples above came at a great cost to the respective organization (hard dollars, brand damage, etc.). But the learning that came with the experiences: priceless.
In the PxDx example, the process and technology worked. Technically, the weak link was the human reviewer at the end of the process, together with the organization’s lack of (real or perceived) support for that “last line of defense.”
And with that knowledge, AI success feels, well, inevitable.