If you’ve been experimenting with tools inside ChatGPT or Claude, you’ve probably seen how quickly an AI agent can surface useful information. It can pull data, summarize what’s happening and point you in the right direction. That part works.
Where things still break down is what happens next.
You find the issue, but you must leave the conversation to act on it. You open another system, recreate the context, decide what to do and carry it forward manually. The agent helped, but it didn’t move the work along.
That gap showed up clearly in a session at SAS Innovate 2026 led by Rik de Ruiter, a senior systems architect at SAS. The setup itself wasn’t unfamiliar: a large language model (LLM) calling tools, retrieving data and returning structured responses. What made it useful was the way the interaction continued after the first answer.
What changes when the system doesn’t reset after each question
De Ruiter’s demo focused on a predictive maintenance scenario – a network of battery-powered flood sensors that continuously report voltage levels and expected battery life. When de Ruiter asked for sensor data and predictions, the system returned exactly what you’d expect: real-time readings paired with model output.
The difference came after the issue was identified.
Instead of stopping at “this sensor will fail soon,” the next step happened in the same flow. A follow-up request created a service ticket using the sensor ID that had already been surfaced. There was no need to restate the problem or switch to a different system to address it. The context carried forward.
Then the interaction shifted again. A simple “what-if” question – raising the ambient temperature to 15°C – triggered a re-run of the model against current data. The prediction changed and the failure timeline tightened. That adjustment didn’t require de Ruiter to export data or run a separate simulation. It happened in place, using the same context.
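From the orchestration side, that chain can be sketched in a few lines. Everything here is a hypothetical stand-in – the tool names (get_sensor_status, create_service_ticket, predict_battery_failure) and the simple tools.call interface are assumptions for illustration, not the demo’s actual implementation:

```python
# Hypothetical sketch of a single conversation carrying context forward.
# "tools" is any object that dispatches a named tool call with a dict of
# arguments; the tool names and payload fields are illustrative only.

def maintenance_flow(tools):
    # Step 1: surface the issue (latest readings plus model prediction).
    status = tools.call("get_sensor_status", {"region": "river-delta-east"})
    failing = [s for s in status["sensors"] if s["predicted_days_left"] < 7]

    for sensor in failing:
        # Step 2: act in the same flow. The sensor ID is already in
        # context, so no system switch or restated problem is needed.
        ticket = tools.call("create_service_ticket", {
            "sensor_id": sensor["id"],
            "summary": f"Battery predicted to fail in {sensor['predicted_days_left']} days",
        })

        # Step 3: what-if. Re-run the prediction against current data
        # with one changed input (ambient temperature raised to 15°C).
        revised = tools.call("predict_battery_failure", {
            "sensor_id": sensor["id"],
            "ambient_temp_c": 15,
        })
        print(ticket["id"], "revised days left:", revised["predicted_days_left"])
```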
For a user, that changes the experience from checking a system to working inside it.
Why this still feels familiar and where it usually falls short
None of this requires a completely new set of capabilities. Tool use, API calls and structured outputs are already widely available. The reason most setups don’t feel like this in practice comes down to a few recurring gaps.
- Many agents are still connected to data that updates on a schedule rather than in real time. By the time the model responds, the answer is already slightly out of date, making it harder to trust for operational purposes.
- Tool definitions are often thin or inconsistent. When a model doesn’t have clear descriptions of what a tool does or when it should be used, it either guesses or fails to use it at all. That’s where interactions start to feel unreliable.
- Even when an AI agent surfaces the right insight, the workflow often ends there. The user is left to translate that insight into action somewhere else, which breaks the flow and adds friction back into the process.
What made this work
The setup shown addressed those gaps in ways that are easy to overlook but make a real difference for users.
First, the data feeding the system was continuously updated through event stream processing. That meant every question was answered against the current state of the system, not a static snapshot. When conditions changed, the model’s output changed with it.
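A minimal sketch of that pattern, assuming a stream of JSON readings (the message format and field names are assumptions): the consumer keeps only the latest state per sensor, so whatever a tool call reads is current rather than a scheduled snapshot.

```python
import json

# Latest known state per sensor, updated as events arrive.
latest: dict[str, dict] = {}

def on_event(raw_message: str) -> None:
    # Assumed message shape: {"sensor_id": "...", "voltage": 3.1, "ts": 1700000000}
    reading = json.loads(raw_message)
    prev = latest.get(reading["sensor_id"])
    if prev is None or reading["ts"] > prev["ts"]:
        latest[reading["sensor_id"]] = reading  # keep only the newest reading

def current_state(sensor_id: str) -> dict:
    # What a tool call reads: the most recent reading, not a stale batch.
    return latest[sensor_id]
```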
Second, the tools exposed to the model were described in a way that made them usable. Each one had a clear purpose, defined inputs and meaningful variable descriptions. That gave the model enough context to choose the right tool and use it correctly without relying on rigid scripting.
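The difference is easiest to see side by side. MCP advertises each tool to the model with a name, a description and a JSON Schema for its inputs; the specific tool below is hypothetical, but it shows what “described in a way that made them usable” looks like in practice:

```python
# A thin definition leaves the model guessing when and how to call the tool.
thin = {
    "name": "predict",
    "description": "Runs the model.",
    "inputSchema": {"type": "object", "properties": {"id": {"type": "string"}}},
}

# A descriptive definition states the tool's purpose, its inputs and their
# units, and when to reach for it. (Hypothetical tool; the name/description/
# inputSchema shape is how MCP lists tools to the model.)
descriptive = {
    "name": "predict_battery_failure",
    "description": (
        "Predict when a flood sensor's battery will drop below operating "
        "voltage, using the latest streamed readings. Call after identifying "
        "a sensor of interest, or to run a what-if scenario."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "sensor_id": {
                "type": "string",
                "description": "Sensor ID, as returned by get_sensor_status",
            },
            "ambient_temp_c": {
                "type": "number",
                "description": "Optional what-if override for ambient temperature, in degrees Celsius",
            },
        },
        "required": ["sensor_id"],
    },
}
```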
Third, the system was connected to something that could take action. Creating a ticket wasn’t treated as a separate workflow. It was simply the next step in the same interaction, using the context that was already established.
Underneath all of this, the Model Context Protocol (MCP) provided a consistent way to expose those tools, models and data streams. You don’t see MCP directly as a user, but you feel the difference when the system doesn’t break as you move from one step to the next.
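Wiring tools up this way can be quite small. Here is a minimal sketch using FastMCP from the official MCP Python SDK; the tool names and stub return values are placeholders for the stream, model and ticketing integrations described above, not SAS’ implementation:

```python
from mcp.server.fastmcp import FastMCP

# One MCP server exposing data, model and action tools behind a single,
# consistent interface. Bodies are stubs standing in for real integrations.
mcp = FastMCP("flood-sensors")

@mcp.tool()
def get_sensor_status(sensor_id: str) -> dict:
    """Latest streamed reading and predicted battery life for a sensor."""
    return {"sensor_id": sensor_id, "voltage": 3.1, "predicted_days_left": 5}  # stub

@mcp.tool()
def predict_battery_failure(sensor_id: str, ambient_temp_c: float | None = None) -> dict:
    """Re-run the failure prediction, optionally with a what-if temperature."""
    return {"sensor_id": sensor_id, "predicted_days_left": 3}  # stub

@mcp.tool()
def create_service_ticket(sensor_id: str, summary: str) -> dict:
    """Open a maintenance ticket for the given sensor."""
    return {"id": "TICKET-001", "sensor_id": sensor_id, "summary": summary}  # stub

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```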
What this means if you’re trying to use or build this
If you’re evaluating or putting together agent-based workflows, the takeaway isn’t to focus on whether the model can call a tool. That problem has already been solved.
What matters more is how the system behaves after the first useful answer.
If the data isn’t up to date, users will second-guess what they’re seeing. If the tools aren’t clearly defined, the interaction becomes inconsistent. If there’s no path to action, the AI agent becomes a helper rather than something you rely on.
The setups that hold up are those where data, models and actions are tightly connected so the user doesn’t have to think about their boundaries. They can ask a question, follow up on what they see and act on it without starting over each time.
The takeaway for everyone
AI agents are getting very good at finding issues. That’s no longer the hard part.
The whole process moves ahead when the same system can carry that context forward, stay aligned with what’s happening right now and move the work to the next step without forcing the user to leave the interaction.
That’s what turns a useful answer into something you can actually use.