Few words stir both excitement and fear quite like AI.
Similar to a carnival rollercoaster, some people lean into the thrill, while others clutch their safety bar. The promise of AI seems limitless, but so does the risk of going off the rails. This dynamic is playing out across government, where enthusiasm for AI is rising even as real concerns persist.
A recent study, Data and AI Impact Report: The Trust Imperative, shows government organizations worldwide are adopting AI but are in the middle of a trust dilemma. Nearly half of public sector organizations (46%) are placing strong confidence in AI systems that may not be fully trustworthy.
Some governments, especially across Europe and Latin America, are moving ahead with responsible innovation practices. Many others still struggle with gaps in data, governance and talent, making it difficult to unlock AI’s full potential.
AI now plays a bigger role in decisions that affect real people and tight budgets, such as determining who receives benefits, spotting fraud, allocating limited resources and shaping how services are delivered. These decisions need to be efficient, but they also need to be fair, understandable and trustworthy.
What's more, government organizations don't have the luxury of moving fast and fixing things later. Every decision is subject to oversight, public scrutiny and long-term consequences. When AI enters the picture, the stakes rise even higher.
Why trust is the central issue – not technology
In my conversations with CIOs, CTOs and agency leaders, top concerns they share include:
- Can we clearly explain a system's outcomes?
- How do we know it isn’t introducing bias or unintended harm?
- Who is accountable when an AI-supported decision is challenged?
- How do we modernize responsibly without shaking public confidence?
Public trust is one of government's most valuable and most fragile assets. Any use of AI that undermines that trust, even unintentionally, can set agencies back years.
Responsible innovation is a mandate for leaders
For all of AI's potential, some leaders have understandably slow-walked their AI journeys. Sometimes that has meant small, targeted projects. Other times, with no better options in sight, it has meant holding onto familiar but siloed systems.
Government leaders are right to be cautious. According to the same report, only 15% of agencies operate at the highest level of trustworthy AI maturity, even as more than 60% expect AI investments to grow. That gap between ambition and readiness is exactly where leadership is needed most.
It doesn’t have to stay that way. There are time-tested responsible innovation strategies to carry organizations forward with confidence.
It comes down to a set of core commitments:
- Trustworthy AI starts from day one. Have clear, measurable commitments that hold your organization accountable.
- Transparency by design. Leaders and staff must be able to understand and explain how decisions are informed – not rely on black boxes.
- Human accountability. AI can support decisions, but responsibility must always rest with people.
- Continuous oversight. Models and assumptions must be monitored, tested, and adjusted as conditions change.
- Alignment with public values. Fairness, equity and consistency are foundational requirements.
When these principles guide implementation, AI becomes a tool for strengthening governance rather than weakening it.
Where AI can earn trust, one use case at a time
Public sector agencies are building confidence in AI by applying it thoughtfully and building incrementally. Many start with focused use cases where outcomes can be measured and explained, such as:
- Program integrity and fraud detection, where analytics help prioritize reviews while maintaining clear audit trails.
- Operational planning, using data to forecast demand and allocate limited resources more effectively.
- Service delivery improvements, identifying bottlenecks and backlogs without automating final eligibility decisions.
- Risk management, supporting earlier identification of issues while preserving human judgment.
In each case, success isn't defined by automation but by better decisions with stronger oversight.
Building confidence inside the organization
Trust in AI matters externally, but it matters just as much to the public servants expected to use these tools. Leaders who succeed invest in:
- Clear communication about how AI is used and how it isn’t.
- Training that builds data literacy, not dependency.
- Governance structures that give staff confidence in the systems supporting their work.
When employees understand and trust the tools they’re given, adoption improves and so do outcomes.
Moving forward
AI will continue to shape how government operates – that part isn’t up for debate. What is uncertain is whether it will be adopted in ways that strengthen public trust or strain it.
The issues are complex, but the answers are clear: Progress doesn’t require choosing between innovation and responsibility. It calls for leaders who champion both. After all, you can’t scale something you don’t trust.