Model governance has moved from a “nice to have” to a non-negotiable. As organizations deploy AI across industries like health care, banking and government, the demand for transparency, trust and accountability is louder than ever.

SAS experts Briana Ullman, Product Marketing Manager, and Vrushali Sawant, Data Scientist, discussed what that looks like in practice at SAS Innovate 2025. Their talk made the case that trustworthy AI means both doing the right thing and building the systems that show your work and back it up.

Here’s what else I took away:

1. Poor AI governance is a real-world safety issue, especially in health care

Ullman kicked things off with a sobering stat: The Emergency Care Research Institute (ECRI) ranked “insufficient governance of AI” as the #2 patient safety concern in its annual report. That’s ahead of many long-standing issues in health care.

She also cited the American Medical Association, which found that 60% of physicians are concerned about AI in health plans, particularly denials of care, unnecessary waste and avoidable harm. The stakes? Lives, trust and equity in care.

“Only 16% of hospital executives have system-wide governance policies for data and AI,” Ullman noted. “We’ve got serious concerns, but a slow approach to preparedness.”

2. We need a “nutrition label” for AI

Ever try to pick something healthy at the grocery store? That nutrition label helps you quickly understand what you’re about to consume. Ullman argued that AI needs something similar – standard, interpretable and accessible.

“We don’t have a nutrition label for AI. And that’s a big part of the problem.”

Right now, technical and business stakeholders often speak different languages. Executives want to make informed, data-driven decisions, while technical teams are left to do the work of translating AI into business value and proving it’s trustworthy.

That’s a heavy lift without better governance frameworks.

To bridge this gap, SAS has been developing model cards – clear, standardized documentation that acts like a nutrition label for AI models. These model cards help everyone involved understand the model’s purpose, performance, risks and governance, making responsible innovation easier and more transparent.

A model card delivers on the nutrition label analogy by giving a high-level overview of the critical information about a registered model, its intended use and the data used to train it.

3. Model governance bridges the communication gap

Governance isn’t just a checklist; it’s how we build trust between teams.

Ullman laid it out clearly: Model governance empowers technical teams to show that their models are reliable, explainable and aligned with business goals. It also gives business leaders a way to validate AI systems without becoming coders overnight.

This is about transparency, accountability and clarity, not just compliance.

4. Data scientists need governance tools that meet them where they work

Sawant picked up the session with a live demo to show how data scientists can embed governance directly into their workflows using SAS® Viya® and Python.

Using SAS Viya Workbench, she launched a Jupyter notebook to walk through a practical, reproducible example: training a decision tree model on U.S. Census data from the UCI Machine Learning Repository to predict whether an individual earns more than $50K annually. While the use case was simplified for the session, the process mirrored the kinds of challenges attendees often face when operationalizing models in regulated or high-stakes environments.
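The session’s notebook itself wasn’t published, but the modeling step she described can be sketched in a few lines. This is a minimal illustration, assuming scikit-learn and a tiny synthetic stand-in for the UCI Adult/Census income data (the real dataset has far more rows and columns):

```python
# Minimal sketch of the demo's modeling step: a decision tree predicting
# whether income exceeds $50K. The rows below are synthetic stand-ins
# shaped like the UCI Adult data, not the real dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

data = pd.DataFrame({
    "age": [25, 38, 52, 46, 29, 61, 33, 44],
    "education_num": [10, 13, 9, 14, 12, 16, 11, 13],
    "hours_per_week": [40, 50, 38, 60, 35, 45, 42, 55],
    "income_gt_50k": [0, 1, 0, 1, 0, 1, 0, 1],  # target: earns > $50K
})

X = data.drop(columns="income_gt_50k")
y = data["income_gt_50k"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

# A shallow tree keeps the model interpretable -- a property governance
# reviewers care about as much as raw accuracy.
model = DecisionTreeClassifier(max_depth=3, random_state=42)
model.fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2f}")
```

On the real Adult data the same pattern applies; only the loading and cleaning steps grow.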

She began with standard steps familiar to most practitioners – data cleaning, exploring variable distributions and engineering features to reduce noise and improve model efficiency. But the focus quickly shifted to what happens after a model is built: embedding governance into the workflow.

Sawant showed how to register the model into SAS Model Manager, document its variables and assumptions and package it for deployment using common tools.

She emphasized that governance starts early, even during data preparation and feature engineering. Whether you’re cleaning categorical variables, reducing potential bias, or standardizing workflows, those foundational steps matter. From there, the process continues with model registration, version control and the creation of model cards – all managed through SAS Model Manager, which supports both SAS and open-source models like those built in Python.
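SAS Model Manager generates model cards from the metadata captured at registration. As a rough illustration of the kind of information a model card carries, here is a hypothetical structure; the field names and values are my own, not SAS Model Manager’s actual schema:

```python
# Hypothetical model-card contents for the demo's income model.
# Field names and the metric value are illustrative placeholders.
import json

model_card = {
    "model_name": "income_decision_tree",
    "version": "1.0.0",
    "intended_use": "Predict whether an individual earns more than $50K/year",
    "training_data": "UCI Machine Learning Repository, Adult (Census Income)",
    "algorithm": "Decision tree (max_depth=3)",
    "metrics": {"holdout_accuracy": 0.84},  # placeholder, not a measured result
    "limitations": "Trained on 1994 U.S. census data; may not generalize",
    "owner": "data-science-team@example.com",  # hypothetical contact
}
print(json.dumps(model_card, indent=2))
```

Like a nutrition label, the value is less in any single field than in having the same fields, in the same place, for every registered model.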

“Responsible innovation only happens when every person involved in your AI lifecycle acts responsibly,” Sawant said.

5. Governance isn't just for regulators; it's for resilience

Too often, governance is viewed as something you do to check a box for legal or compliance reasons. But Ullman and Sawant made the case that model governance is how you scale AI safely and sustainably.

It’s what enables repeatability. It ensures that models don’t drift without detection. It keeps you ready for audits – but more importantly, for real-world consequences.

“We believe that responsible innovation begins with responsible innovators,” Ullman said. “But how do we get there? With the right frameworks, the right tools and the right mindset.”

Missed SAS Innovate or want to re-watch sessions? Watch now on demand


About Author

Caslee Sims

I'm Caslee Sims, writer and editor for SAS Blogs. I gravitate toward spaces of creativity, collaboration and community. Whether it be in front of the camera, producing stories, writing them, sharing or retweeting them, I enjoy the art of storytelling. I share interests in sports, tech, music, pop culture among others.
