There is a moment in every organization when you realize culture is not just "how we do things around here." It’s why we do them.

Culture manifests in how we talk to each other, how we make decisions when things aren’t obvious, and the tradeoffs we’re willing to live with.

There is another force that has always been present but is more pertinent than ever: systems. Systems take what we believe and value and encode it into repeatable, automated forms.

What we don’t spend enough time thinking about is the interplay of these two things. Culture shapes systems. Those systems then reinforce norms and, over time, can reshape culture itself.

In the age of AI, therein lies the danger. When does a system stop reflecting culture and start producing it? Who is building the systems that ultimately produce culture?

Do we risk sacrificing human judgment at the altar of automation?

When judgment wanes

We are living through a time when AI makes automated decisions every day.

Sometimes it looks harmless, like a recommendation, a summary, a draft or a score. But when you zoom out and see those decisions happening across teams, workflows and vendors, it starts to look different. What’s becoming clear is that AI systems designed to help people handle more are starting to take on more of the decision making itself.

We can debate AI capabilities all day, but at some point, the important questions are no longer technical. What happens to culture when human judgment fades as part of the process?

My view is simple: We have to preserve human judgment if we want to preserve human culture.

Why judgment is different and irreplaceable

AI is great at optimizing. It can enhance creativity or be a thought partner.

Judgment shows up differently.

It’s essential in situations with no clear rights and wrongs. It helps us arrive at sensible solutions in complex circumstances. When consequential decisions must be made amid ambiguity and competing values, our judgment is irreplaceable.

That’s usually where culture actually shows itself. It’s formed through tension. It is shaped by how we weigh competing priorities, how we treat people when the answer is not obvious, and how we respond when efficiency collides with accountability.

The potential impact of AI, at scale, has thrust these concerns into the spotlight. Judgment helps us weigh the inconsistencies we are willing to tolerate to improve productivity and efficiency. Generative AI, in particular, doesn’t give the same answer every time. That variability can be useful in small, individual use cases as you experiment, adjust and try again.

It lands differently across a business or government agency. What feels acceptable to an individual can become a problem when the consequences of decisions spread throughout an organization and beyond. At that level of risk, human judgment becomes essential. Governance is judgment at scale.

As decisions occur within systems, we need a way to scale our judgment while preserving culture. AI governance is different from data governance and model risk management. It’s what organizations do to accelerate innovation, manage risk and ensure AI is worthy of trust.

I like to say it is how we scale our judgment when using AI. Leaders are coming to understand that it’s not just a technical issue. They must consider the moral, operational and financial implications:

  • They need AI aligned to what they believe and value most.
  • They need the efficiency and productivity of AI without losing quality and consistency.
  • They need to capture value without creating unmanageable risk.

Those things don’t always line up neatly. That’s why I resist the framing that governance slows you down. When done well, governance is useful and convenient. It feels like part of the work rather than something layered on top. If it isn’t seen as part of the workflow, people will go around it. And at that point, it’s no longer doing its job.


Decide what you are scaling

If judgment is what we’re trying to hold onto, the next question is practical. How do we operationalize judgment without turning it into a slogan? Leaders need to see where AI is showing up, where it runs, how much it costs, whether it is drifting outside its intended purpose and who is accountable.

Accountability matters here more than people think. You can usually tell the difference between teams that own outcomes and teams that default to the tool when something goes wrong. When the tool becomes the scapegoat, you’re seeing the erosion of judgment.

AI-driven use cases are the point of impact for AI, and they are where judgment and accountability must remain strong. If those fade, culture starts to shift over time.

Let me land this in a more human place. We do not live inside technologies and systems. We live inside the stories of culture. Stories shape how we understand the world and what we believe is possible.

We have to preserve human judgment if we want to preserve human culture.

We have a chance right now to tell a different story about AI. It’s a story of innovation and good judgment where governance is a standard practice rather than a vague aspiration.

We can build systems that help leaders make sound decisions quickly and move on with the day. We can do it while navigating the moral, operational and financial tensions that create competing priorities, such as reputation, efficiency and cost.

Trustworthy AI will reshape our workplaces and society. We must determine how to shape that transformation intentionally and thoughtfully, in a way that protects what we value.

About Author

Reggie Townsend

Vice President of AI Ethics, Governance and Social Impact

Reggie Townsend leads SAS’ global AI Ethics, Governance and Social Impact organization, overseeing the company’s Data & AI Ethics Practice; AI & Society initiatives; AI Governance Advisory; Standards, Regulations & Risk Intelligence programs; and SAS’ Accessible & Adaptive AI efforts. He drives SAS’ commitment to trustworthy, human‑centric innovation across products, policies and partnerships. A nationally recognized voice in AI governance, Reggie serves on the board of EqualAI and has advised the White House as a member of the National AI Advisory Committee. His career includes leadership roles in cloud, consulting and managed services for health care and life sciences. A frequent global speaker and advisor, Reggie has counseled government leaders worldwide, appeared on major stages and been published in JAMA. A proud Chicagoan, Reggie is an engineering, business and leadership alum of Southern Illinois University, Illinois Institute of Technology and the University of Chicago.