AI has been largely reactive for years: following human commands, assisting with tasks and providing insights based on predefined rules. But 2025 is shaping up to be the year of agentic AI, where AI agents don't just respond to human input but act independently.
In the movie I, Robot, there's a pivotal moment when Will Smith's character, Detective Del Spooner, recalls that a robot once made a grave decision on its own: calculating that Spooner's odds of survival were higher, it saved him from drowning and let a young girl die – a chilling collision with the First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The robot's choice to let one human die to save another challenges the idea of robot autonomy.
This unsettling movie scene, which depicts a somewhat dystopian view of AI, raises key questions that are no longer confined to science fiction movies and literature:
- How much autonomy should these AI agents have?
- Who is accountable when an AI agent makes the wrong call?
- Where do we draw the line between helpful automation and risky decision-making?
Another point for the AI revolution
With the rise of AI agents, machines capable of making independent decisions are a reality. Powered by agentic AI, these agents can plan, make decisions and execute complex tasks with minimal oversight.
“AI agents are autonomous systems designed to perform tasks on your behalf, with minimal to no human interventions,” said Marinela Profi, Global AI and GenAI Marketing Strategy Lead, in an interview with Wired. “Their goal is to make AI systems more useful. So, AI agents are a flex on our behalf, not just to give us ideas on how to plan a vacation. They can range from simple programs like chatbots that handle customer queries to autonomous financial trading systems that make decisions in real-time.”
And these aren’t just general-purpose bots responding to commands – they’re specialists. As Profi put it, “I would characterize them as role-based agents, so agents that are specialized in a specific industry. For example, an agent that books the venue, creates the invite list, sends out the emails and can make decisions based on different scenarios. These are the agents where we deliberately give them autonomy in decision-making.”
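To make that concrete, here's a minimal sketch of the plan-act-observe loop behind such a role-based agent: the agent picks a step, executes a tool, records the result and repeats until the goal is met. Every name here – the tools, the `plan_next_step` stub – is a hypothetical stand-in, not any vendor's actual API.

```python
# Minimal sketch of an agentic loop: plan a step, act via a tool,
# observe the result, repeat. All names are hypothetical.

def plan_next_step(goal: str, history: list) -> dict:
    """Stand-in for an LLM or planner that chooses the next action."""
    if not history:
        return {"tool": "book_venue", "args": {"city": "Austin"}}
    if len(history) == 1:
        return {"tool": "send_invites", "args": {"count": 50}}
    return {"tool": "done", "args": {}}

TOOLS = {
    "book_venue": lambda args: f"Booked a venue in {args['city']}",
    "send_invites": lambda args: f"Sent {args['count']} invites",
}

def run_agent(goal: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):                     # hard step cap: a simple safety rail
        step = plan_next_step(goal, history)       # plan
        if step["tool"] == "done":
            break
        result = TOOLS[step["tool"]](step["args"]) # act
        history.append((step["tool"], result))     # observe
    return history

print(run_agent("organize the quarterly offsite"))
```

The step cap is the design point worth noticing: even in a toy loop, autonomy comes bounded by an explicit limit on how far the agent may go before a human checks in.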
We’re already seeing agentic AI in action across industries. You may have already encountered an AI agent at work in everyday situations like these:
- Banking: Have you ever had your debit card blocked while on vacation because of "suspicious activity"? An AI agent made that decision. It flagged the unusual transaction, assessed the risk and automatically froze your account, sometimes before you even noticed the issue (a toy version of this logic appears after this list).
- Health care: Hospitals can use AI agents to decide which patients need immediate attention. These agents analyze symptoms, medical history and hospital capacity to direct patients to the correct department, reducing wait times and improving efficiency.
- Public sector: Have you ever received a traffic ticket in the mail from a red-light camera? That’s an AI agent at work. It detected your car, identified your license plate and issued a citation without a human reviewing every case.
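To ground the banking example, here's a toy version of how such an agent might score and act on a transaction. The features, weights and thresholds are invented for illustration; real fraud models are far more sophisticated.

```python
# Toy version of the card-blocking example: score a transaction's risk
# and freeze the card automatically past a threshold.
# Features, weights and thresholds are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    home_country: str
    merchant_category: str

def risk_score(txn: Transaction, typical_amount: float) -> float:
    score = 0.0
    if txn.country != txn.home_country:
        score += 0.4                        # unusual location
    if txn.amount > 3 * typical_amount:
        score += 0.4                        # unusually large purchase
    if txn.merchant_category in {"wire_transfer", "gift_cards"}:
        score += 0.2                        # higher-risk merchant type
    return score

def decide(txn: Transaction, typical_amount: float) -> str:
    score = risk_score(txn, typical_amount)
    if score >= 0.7:
        return "freeze_card"                # the agent acts on its own
    if score >= 0.4:
        return "text_customer"              # ask the human to confirm
    return "approve"

txn = Transaction(amount=950.0, country="PT", home_country="US",
                  merchant_category="gift_cards")
print(decide(txn, typical_amount=60.0))     # -> freeze_card
```

Notice the middle band: between "approve" and "freeze" sits a zone where the agent defers to the customer rather than acting unilaterally, which previews the oversight questions below.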
These are just a few examples of how AI agents already make decisions that affect people’s lives. But as these systems become more advanced, their role in high-stakes decision-making is growing – along with concerns about fairness, accountability and bias.
Dishing on the ethics of agentic AI
Whenever we discuss emerging technology, it’s important to start with ethics. Reggie Townsend, Vice President of the Data Ethics Practice, says that “responsible innovation begins with responsible innovators,” and I agree. Where do we draw the line between enabling these systems to function effectively and ensuring they don’t go too far without human oversight?
As agentic AI moves forward, it’s more crucial than ever that AI agents are built on a strong ethical foundation. It’s not enough for AI agents to work; they must be developed ethically, and someone must be accountable when they fail.
The ethical questions surrounding these AI agents are profound. We’re not just talking about automating tasks or improving efficiency. We’re talking about AI systems that could make decisions that impact people’s lives. Still, many in the space are excited about the potential, so long as ethics are part of the conversation.
“I'm really excited about the growing emphasis on agentic AI and ethical AI. I'm seeing these two topics together on purpose,” Profi said. “On one side, AI agents, I believe, will take center stage in 2025. This may be the most powerful intersection of AI and humans."
Accountability: Who’s responsible when AI gets it wrong?
The question of accountability looms large with agentic AI. When an AI agent makes a decision that leads to harm, who’s responsible? Is it the developer who created the AI, the organization that deployed it, or the human who relied on it? As agentic AI becomes more widespread, this issue needs clear guidelines.
A concrete example comes from banking. AI agents are increasingly used to approve or deny loans, and these systems often rely on historical data to make decisions. However, if the training data includes biased patterns from past decisions, the AI might unfairly deny loans to certain groups. In this case, who’s to blame? The AI agent? The financial institution using the AI? The developers who trained the model?
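One common way practitioners surface this kind of bias is a disparate-impact check: compare approval rates across groups and flag large gaps for human review. Here's a minimal sketch with fabricated data; the 0.8 threshold echoes the "four-fifths" rule of thumb and is not legal guidance.

```python
# Minimal disparate-impact check for the loan example: compare approval
# rates across groups. Data is fabricated; the 0.8 threshold is the
# "four-fifths" rule of thumb, shown for illustration only.

from collections import defaultdict

decisions = [  # (group, approved) - fabricated sample data
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

counts = defaultdict(lambda: [0, 0])        # group -> [approved, total]
for group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {g: a / t for g, (a, t) in counts.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)                                # {'A': 0.75, 'B': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:                             # four-fifths rule of thumb
    print("flag for human review: possible bias in approvals")
```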
“The more autonomous AI agents become, the greater the stakes of poor decision-making," Profi said.
The reality is that when AI starts making decisions on its own, it becomes harder to pinpoint responsibility when things go wrong. The more autonomous AI becomes, the more we need to establish frameworks for accountability, especially in high-stakes areas like health care and law enforcement.
To that point, Profi sees “robust orchestration frameworks and decisioning solutions” being created to “ensure fairness, transparency and accountability.”
Navigating bias and fairness in autonomous systems
Another ethical dilemma regarding agentic AI is ensuring these systems are fair. AI agents, by design, are only as good as the data they’re trained on. If the data is biased, whether intentionally or unintentionally, the AI system will likely reinforce those biases.
In practice, an AI lending system might deny loans to minority groups based on historical lending patterns. In health care, an AI that’s trained on data primarily from one demographic might overlook conditions that disproportionately affect other groups.
This raises a deeper question: Can AI be truly neutral, or will it always reflect the biases inherent in the data on which it is trained? It’s critical that these systems be transparent and their decisions explainable, so that those affected understand how those decisions were made.
Transparency also helps identify and correct biases, which is key to making sure that AI agents don’t unintentionally harm marginalized communities.
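Transparency can start with something as simple as requiring an agent to return reason codes alongside its decision, so the outcome can be audited and contested. Here's a sketch; the rules, thresholds and codes are hypothetical.

```python
# Sketch of a decision that carries its own explanation: the agent returns
# reason codes alongside the outcome so an affected person (or auditor)
# can see why. Rules, thresholds and codes are hypothetical.

def assess_loan(income: float, debt: float, years_employed: int) -> dict:
    reasons = []
    if debt / income > 0.45:
        reasons.append("DTI_TOO_HIGH: debt-to-income ratio above 45%")
    if years_employed < 2:
        reasons.append("SHORT_EMPLOYMENT: fewer than 2 years employed")
    return {
        "decision": "deny" if reasons else "approve",
        "reasons": reasons or ["MEETS_ALL_CRITERIA"],
    }

print(assess_loan(income=50_000, debt=30_000, years_employed=1))
# {'decision': 'deny', 'reasons': ['DTI_TOO_HIGH: ...', 'SHORT_EMPLOYMENT: ...']}
```

The point isn't the rules themselves but the contract: every decision ships with the evidence behind it, which is what makes bias findable and correctable.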
The role of humans: Should AI be left to its own devices?
The need for human oversight in AI decision-making is a crucial piece of the puzzle. While agentic AI can be highly efficient and autonomous, there are areas where human intervention should remain the safety net. This is where human-in-the-loop decision-making becomes important.
While AI handles routine tasks or provides recommendations, humans stay in the loop to step in when needed – whether reviewing a medical diagnosis or making a final decision in a legal case.
In cases where the stakes are high, like health care or the criminal justice system, human oversight is essential to ensure that AI doesn’t make life-altering decisions without ethical guidance or accountability.
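A common way to implement this is a simple escalation policy: the agent acts only on routine, high-confidence cases and routes everything else to a person. The thresholds and queue below are illustrative.

```python
# One common human-in-the-loop pattern: act automatically only when the
# model is confident AND the stakes are low; otherwise escalate to a person.
# Thresholds and the review queue are illustrative.

REVIEW_QUEUE = []

def triage(case_id: str, model_confidence: float, high_stakes: bool) -> str:
    if high_stakes or model_confidence < 0.90:
        REVIEW_QUEUE.append(case_id)      # a human makes the final call
        return "escalate_to_human"
    return "auto_decide"                  # routine case: the agent proceeds

print(triage("claim-001", model_confidence=0.97, high_stakes=False))    # auto_decide
print(triage("diagnosis-042", model_confidence=0.97, high_stakes=True)) # escalate_to_human
print(REVIEW_QUEUE)                       # ['diagnosis-042']
```

Note that high-stakes cases escalate regardless of confidence: for a medical diagnosis or a legal decision, being sure isn't the same as being allowed to act alone.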
So, how autonomous should AI agents be? It’s a delicate balance, and not everyone will agree on how far they should go.
On the one hand, organizations want AI to perform tasks efficiently, reduce human error and take on roles that may be too complex or time-consuming for people.
On the other hand, it isn’t just about how much autonomy AI should have or how helpful it would be, but about how we can ensure these systems align with human values, fairness and transparency.
As AI continues to evolve, we must stay vigilant in shaping how it is deployed, ensuring that human oversight remains a part of the equation. Only then can we truly embrace the power of agentic AI without compromising the values that make us human.