If you scroll through job postings right now, you’ll see a pattern.

Plenty of roles ask people to train models, fine-tune outputs, build agents and automate workflows. Fewer ask for the kind of judgment that used to sit at the center of how decisions get made.

At the same time, companies are moving faster to implement AI, sometimes ahead of a real plan for how it fits. Teams get leaner. Decisions get pushed into systems. Work speeds up.

What’s less clear is what happens around all that speed.

And that tension came into focus during the opening session at SAS Innovate 2026.

Jenn Chase speaking at SAS Innovate 2026

Setting the stage

SAS CMO Jenn Chase opened SAS Innovate 2026 by connecting the company’s 50th anniversary to something larger than longevity: trust. From fraud alerts to personalized retail experiences to analytics used in health care and drug development, Chase framed SAS as part of everyday decisions people may never see, but still rely on. “That’s not a technology story. That’s a human one,” she said.

The real question is about people

SAS CTO Bryan Harris gets to this early in his keynote, asking: “Will people matter?”

And then he pushes it a step further.

“We are in a crisis…a crisis of confidence in human ingenuity.” And that’s something people are already feeling, even if they’re not saying it directly.

What hasn’t settled is what happens as AI becomes part of everything else.

Work is changing. Decisions that used to take time now happen instantly. Or they happen inside systems that don’t really pause. There’s more data than people and organizations can realistically handle, and that gap keeps growing.

The crisis Harris alludes to sounds big, but it doesn’t feel abstract. And it isn’t about choosing between people and profit. It never really has been. You can invest in people and still build something that lasts.

Examples like scaling digital twins in manufacturing with Georgia-Pacific or simulating environments in health care sound AI-first – but they’re not replacing people. They’re bringing people into testing, simulation and decision-making earlier, before anything happens for real.

Bryan Harris


So, what is the future that we’re using AI to build toward? A future where AI replaces parts of the process without much thought? Or one where it extends what people can do?

And if AI keeps scaling, what exactly are we scaling with it?

AI shows up because the problem is bigger than people can handle alone

The volume of data isn’t slowing down. It’s accelerating, producing more signals and complexity, and forcing decisions to be made faster.

But once AI starts doing more than just helping us – once agentic AI begins to act inside workflows – our expectations should change.

That’s where Harris explains why it’s so essential to make sure your AI is worthy of trust. Because at scale, mistakes don’t always show up right away; they compound. It’s not enough for something to work once. It must hold up again and again.

And if the process behind the decision isn’t solid, you get a bad outcome and a system that keeps producing them.

Culture, systems and judgment are now tied together

Reggie Townsend, SAS VP of AI Ethics, Governance and Social Impact, then shifts the conversation to more grounded territory.

He starts with two things that don’t usually get discussed together: culture and systems.

Culture is what people recognize. It’s how decisions get made when things aren’t obvious – when there’s no clean answer and someone has to weigh competing priorities and decide what matters more.

Systems take what an organization believes – its values, assumptions and way of making decisions – and turn it into something repeatable.

His point is that the two aren’t separate.

Reggie Townsend

Townsend explains that once decisions are made within the system, they both reflect and shape culture.

“AI systems designed to scale human capacity are gradually displacing human judgment.”

You can see how that happens. A recommendation gets accepted without much pushback. A generated answer replaces a deeper look. A score carries more weight than a conversation.

It all adds up. And once it does, it’s worth asking whether the judgment that used to sit behind those decisions is still there.

“We have to preserve human judgment if we want to preserve culture.”

If judgment isn’t carried into the system, something else takes its place, and it gets repeated at scale. The system will still run and produce outcomes. But over time, those outcomes start to reflect whatever was built into it, whether or not that aligns with what the organization believes.

If culture lives in decisions and systems are making more of those decisions, then systems are shaping culture too.

Townsend introduces SAS AI Navigator with Senior Trustworthy AI Specialist Kristi Boyd

That’s why governance starts to look different in this context. Townsend points to SAS® AI Navigator as part of that shift – a governance tool designed to give organizations visibility into how AI is being used, where it’s showing up and how decisions are being made across use cases.

This tool represents a move toward making responsibility something that’s built into how systems operate, not something added after the fact. The goal is to make the path of least resistance also the responsible path.

“We should apply good judgment for the culture.”

None of this works without people who understand the work

By the time Jared Peterson, SAS VP of Global Engineering, steps in, the conversation has already covered a lot of ground – human ingenuity, trust, judgment, governance – and it could’ve easily stayed at that level.

But he doesn’t let it.

Instead, Peterson pulls everything back to the people responsible for making this work in practice – the developers, data scientists, analysts and engineers who have to take all of this and turn it into something that holds up outside of a controlled environment.

“You are the engines behind human ingenuity,” Peterson said.

That line lands differently in the context of what came before it. If Harris is asking whether people will matter, and Townsend is asking what happens to judgment inside systems, Peterson is showing what that looks like when someone sits down to build something.

He walks through a fraud example that makes the point clear.

In the example, the model itself wasn’t broken. It was doing exactly what the data told it to do. The model became very good at recognizing what was common, rather than identifying what mattered. Left alone, it would have continued producing results that looked accurate on paper but missed the real objective.

The fix wasn’t to overhaul the model or chase something more complex. It was to change the inputs – using synthetic data to rebalance the problem and give the model something meaningful to learn from.

Jared Peterson


That detail does more work than it seems to. The real fix wasn’t in the model itself, but in how well someone understood the problem they were trying to solve, the data they were working with and the decisions that sat on the other side of it.

Models on their own aren’t enough. A fraud score doesn’t mean much unless it’s tied to policy, risk tolerance and action. That’s where bringing models, rules and context together starts to matter.

Even when he talks about copilots and agentic workflows, the same idea holds. The tools are getting more capable, but they still depend on someone knowing what to ask, what to trust and when to step in.

That’s the connection back to what Harris and Townsend were getting at.

If AI is scaling decision-making, and systems are shaping how those decisions are made, then the people building and maintaining those systems become even more important – not less.

Because they decide what gets encoded, repeated and trusted at scale.

Peterson sums it up in a way that cuts through everything else:

“The technology… It’s just the instrument. It only comes to life if someone knows how to play.”

AI can accelerate the work. It can expand what’s possible. But it still depends on people who understand the problem well enough to shape its application.

Without that, the system still runs. It just doesn’t necessarily get you where you think it will.

The real takeaway is what organizations choose to scale with AI

Taken together, the opening session of SAS Innovate 2026 points to one underlying question: if AI is scaling, what’s scaling with it?

Across the session, that question shows up in different ways – people, judgment, culture and the work it takes to make any of it real.

Speed and efficiency will scale. But so can judgment, context and accountability – if organizations plan and build with that in mind.

Because whatever gets built in doesn’t stay contained. It runs, it repeats and over time shapes how decisions get made. And eventually, that starts to look a lot like the organization’s culture.

Which brings it back to the question that opened the session: will people matter?

That answer won’t come from technology. It will come from what organizations choose to scale with AI – and how they do it.



About Author

Caslee Sims

I'm Caslee Sims, writer and editor for SAS Blogs. I gravitate toward spaces of creativity, collaboration and community. Whether it be in front of the camera, producing stories, writing them, sharing or retweeting them, I enjoy the art of storytelling. I share interests in sports, tech, music, pop culture among others.
