A few years ago, writing emails was a breeze. As someone who loves to write, I didn’t need much effort to craft the right response. It was part of my daily rhythm.

Fast-forward to today, and even a simple email gives me pause. I often fight the urge to drop whatever I wrote into Microsoft Copilot or ChatGPT – just to check.

And I’m not the only one.

A study conducted by Microsoft and Carnegie Mellon University found that frequent use of generative AI in the workplace may lead to a troubling side effect: the erosion of our cognitive abilities. Among 319 knowledge workers surveyed, those who relied most on AI reported lower levels of critical thinking engagement. As the researchers put it, higher trust in AI’s capabilities correlated with “less critical thinking effort.”

As any marathon runner will tell you, you don’t just show up on race day. You train. You build endurance, one mile at a time. Our muscles weaken without exercise, and the same applies to the muscles of our mind. When we offload cognitive engagement to AI, our ability to think critically and solve problems independently can decline.

And this is likely not the intended outcome. When we aim for corporate efficiency and productivity, we don’t plan to create mental entropy or cognitive regression. These are the unintended impacts.

But they are real.

And they might catch us off guard if we’re not actively thinking about the downstream effects of our use of AI systems. Intellectually, we understand this idea: that you have to train to stay sharp. Yet somehow, we’re still surprised when skipping the “mental reps” results in predictable degradation.

This brings us to the larger question: What does it take – for me, for you and for society at large – to truly trust AI without harming our trust in ourselves?

Governments and institutions, like the READDI Institute, are using AI for life-saving work: developing antiviral drugs before the next pandemic hits. READDI, for instance, uses SAS® Viya® to power its drug discovery pipeline – rapidly modeling and testing compounds for global deployment.

But what makes their approach notable isn’t just the speed or scale – it’s the intentionality. READDI knows the risk of building solutions that fail entire populations. That’s why they use trustworthy AI practices to account for fairness, ensure inclusivity and maintain model accountability. They're not just optimizing outcomes – they're optimizing responsibly.

That mindset is rare, but it can be infectious.

Trust gap vs. usage boom

Since the release of ChatGPT, the uptake of generative AI has been staggering. Across industries, AI is being used to write code, draft documents, analyze policies, simulate disasters, monitor public safety threats and even design infrastructure.

But while AI’s impact is increasing, public trust is not.

According to the 2024 Edelman Trust Barometer, more people reject AI (35%) than accept it (30%). The remaining 35% hover between uncertainty and cautious optimism.

The reasons for distrust are many and varied. Still, the split between people who over-rely on AI and people who reject it outright is likely present in most organizations. That discrepancy poses a critical question: How do we bridge the trust gap while acknowledging the real risks?

Principles over productivity

SAS has grounded its AI approach not just in innovation, but in intentional governance.

Our goal is simple, but not easy: to be the most trustworthy AI and analytics partner on the planet.

That means asking hard questions, like:

  • “Could we do this?” becomes “Should we?”
  • “Can we automate this interaction?” becomes “What values are we communicating if we do?”

Take a birthday card for your mother. You could use ChatGPT to write it. But should you? Are you sacrificing something irreplaceably human in the process? What if it’s a card for a coworker you don’t really know?

That’s why we advocate for a principle-driven approach to AI – one that balances business goals with human-centered values. Our framework includes:

  • Human-centricity: Technology should serve people, not the other way around.
  • Inclusivity: AI should reflect diverse populations and work for everyone, not just the majority.
  • Accountability: Recognize potential harms before they happen and act to prevent them.
  • Transparency: Clear documentation of how AI systems are built, used, and monitored.
  • Robustness and privacy: Strong systems that protect both performance and personal data.

To embed these principles, we operationalize what we call the QUAD governance model:

  • Oversight by a cross-functional executive committee.
  • Compliance with evolving global regulations.
  • Culture driven by training and education in ethical AI.
  • Operations that treat trust as a market value – not just a moral one.

Trust in practice: Model cards and more

It’s not just talk. We’ve built features in SAS Viya like model cards – a kind of AI “nutrition label” that provides transparency into model performance, fairness, drift and intended use. It even identifies responsible parties and privacy risks. This makes it easier for both technical and non-technical users to understand what the AI is doing – and what it’s not doing.
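
To make that idea concrete, here is a minimal, hypothetical sketch in Python of the kind of information a model card captures. The field names and example values are illustrative assumptions for this post, not the SAS Viya model card schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    """A minimal 'nutrition label' for a model (illustrative sketch only)."""
    model_name: str
    intended_use: str
    responsible_party: str
    performance: Dict[str, float] = field(default_factory=dict)  # e.g., accuracy, AUC
    fairness: Dict[str, float] = field(default_factory=dict)     # e.g., parity gaps by group
    drift_status: str = "not evaluated"                          # e.g., "stable" or "drifting"
    privacy_risks: List[str] = field(default_factory=list)
    out_of_scope_uses: List[str] = field(default_factory=list)   # what the model is NOT for

# Hypothetical example values, for illustration only
card = ModelCard(
    model_name="loan_review_classifier_v3",
    intended_use="Rank retail loan applications for human review",
    responsible_party="credit-risk analytics team",
    performance={"auc": 0.87, "accuracy": 0.81},
    fairness={"demographic_parity_gap": 0.04},
    drift_status="stable as of last quarterly check",
    privacy_risks=["trained on applicant income and age"],
    out_of_scope_uses=["fully automated final lending decisions"],
)
```

Even in this toy version, the value for a non-technical reviewer is clear: the intended use, the out-of-scope uses and the responsible party sit right next to the accuracy numbers.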

Capabilities like synthetic data generation, causal inference and fairness assessment aren’t just technical bells and whistles. They’re trust enablers.
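
To give a flavor of what a fairness assessment can check, here is a short Python sketch of one common metric, the demographic parity gap. The function name and toy data are assumptions for illustration, not SAS Viya functionality.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1); group: group membership labels (0/1).
    A gap near 0 suggests the model selects both groups at similar rates.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy usage: predictions for 8 applicants split across two groups
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")  # prints 0.50
```

A single number like this is never the whole story, but surfacing it alongside accuracy is what turns fairness from a slogan into something a team can monitor.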

The maturity mindset

I used to be an obsessive runner – half marathons and marathons. I would run before work and on weekends. Then life happened: I stopped being intentional about running and stopped running consistently. It just wasn’t a priority anymore, and I didn’t make time for it.

A few months ago, I decided to get back into running, and I realized that what used to be my warm-up is now the whole workout. My running muscles have weakened, and I have to spend time training them again.

That’s what happens when you stop being intentional, whether it’s running or using AI responsibly. If you don’t put in the effort, you lose your edge.

If we want to create a future where AI is both powerful and principled, we need to train differently. We need governance that reflects our values, policies that protect people and systems that promote confidence, not just compliance.

Most of all, we need to remember that trust isn’t an input – it’s an outcome. It’s not what you install – it’s what you earn.

If you found this insightful, watch this webinar: Fostering Trustworthy AI Using a Model Card


About Author

Kristi Boyd

Trustworthy AI Specialist

Kristi Boyd is the Trustworthy AI Specialist with SAS' Data Ethics Practice (DEP) and supports the Trustworthy AI strategy with a focus on the pre-sales, sales & consulting teams. She is passionate about responsible innovation and has an R&D background as a QA engineer and product manager. She is also a proud Duke alumna (go Blue Devils!).
