When AI projects stall or fail, most leaders assume the culprit is flawed models or immature technology.
But research shows the real barrier is trust.
The new Data and AI Impact Report uncovered what we call the trust dilemma: the gap between how AI is perceived and how responsibly it’s implemented. The larger the gap, the lower the ROI.
Today, fewer than one in four organizations have a centralized team overseeing AI governance, ethics, fairness, data quality, monitoring and bias detection to ensure AI is implemented responsibly. In other words, most companies are deploying AI without the safeguards to make it truly trustworthy.
And this isn’t just an ethical problem; it’s a financial one. Without explainability, governance and ethics, confidence in AI is misplaced, and risks multiply. With them, trust becomes a strategic advantage, accelerating adoption, enabling innovation and turning AI into a reliable driver of impact.
Here are three important ways leaders can work to solve the trust dilemma in AI.
1. Align AI with clear, strategic goals
AI delivers real value when it’s tied to clear, strategic goals.
Many companies start by using AI to boost employee productivity while keeping budgets tight. It’s a reasonable first step, but it’s also a sign the technology is still in its early stages. In fact, 57% of organizations using AI for less than two years report personal productivity as their top priority, with cost savings ranking third.
The companies that generate real ROI aren’t chasing quick wins; they’re thinking bigger. Mature AI organizations, those using AI for eight or more years, prioritize process efficiency and decision-making, with cost savings dropping to seventh on the list.
They use AI to enhance customer experiences, expand market share and strengthen business resilience. Organizations focused on these strategic goals consistently report higher returns, whereas those focused primarily on cutting costs see the lowest ROI.
Maturity shows up in impact, not convenience. Leaders who align AI with clear, enterprise-wide objectives turn experimentation into transformation, unlocking measurable value across the business rather than just incremental gains.
2. Focus on integrity over interaction
Generative AI (GenAI) can feel inherently trustworthy, but perception doesn’t equal reliability. Users report trusting GenAI twice as much as traditional machine learning, even though significant explainability gaps remain. That trust often stems from human-like interactivity, perceived usefulness and the sense of control users feel.
The reality is more complicated. Even among those who trust GenAI, 62% worry about data privacy, 57% about transparency and explainability, and 56% about ethical use.
To earn true trust, organizations must build a robust data foundation that supports AI innovation. Clean, centralized data, combined with governance and ethical oversight, ensures AI outputs are accurate, auditable and aligned with organizational standards.
Human-like interactivity may impress, but only AI grounded in trustworthy foundations delivers lasting business impact. In the end, integrity, not interaction, is what turns AI from a novelty into a dependable asset.
3. Prepare for the future
Too many leaders treat AI like it’s just another tool to improve productivity here or streamline a workflow there. That’s like trying to earn a Michelin star by swapping out a single dish.
A Michelin star, one of the most prestigious honors in dining, isn’t about one standout plate; it’s about the entire experience, including the quality of the ingredients, mastery of techniques, consistency, creativity and the service that brings it all together. AI works the same way. To win with AI, leaders need to implement it end to end, rethinking how data flows, how teams operate and how customers are served.
With emerging technologies like agentic AI and quantum computing on the horizon, using AI in isolated ways will leave ROI on the table. Enterprise agentic AI is expected to transform operations by enabling autonomous decisions, from enhancing customer experiences to flagging fraud. And quantum AI promises computational power to tackle complex problems faster.
But without a solid data foundation, even the smartest AI will fall short.
Get the data right, and AI stops being a side dish. It becomes the recipe for transformation.
More than a differentiator – a multiplier
The future of AI will not be defined by those who adopt the flashiest tools, but by those who earn the deepest trust.
Executives who prioritize responsible AI practices, align innovation with strategic outcomes, and invest in resilient data foundations will lead organizations that scale AI and sustain its impact.
In this era, trust is not just a differentiator; it is the multiplier of return on investment.