If it’s about building better systems, we must ask: Better for whom?

In parts one and two of this series, I’ve looked at AI engineering as the discipline that integrates technical practices and orchestrates the -Ops landscape. And I’ve addressed end-to-end accountability, shared governance, and adaptive resilience. But there's a dimension I haven't examined yet: the people on the receiving end of these systems.

This dimension matters much more than the architecture.

Systems shape lives

AI systems don't stay in notebooks and staging environments. They eventually make decisions about who gets hired, who qualifies for a loan, who receives medical treatment, and who gets flagged by law enforcement. They determine what information people see, how they understand the world, and what opportunities appear available to them.

This isn't about individual decisions, but the cumulative effects of those decisions.

When a recommendation engine repeatedly promotes specific kinds of content, it doesn’t merely mirror existing preferences; it actively influences and molds them. When a search algorithm prioritizes some sources and not others, it doesn't just organize information – it (re)defines what counts as authoritative. And when a job-matching system consistently shows opportunities to some demographics but not others, it doesn't just facilitate hiring; it reinforces existing patterns of inequality.

Feedback loops are real. Systems trained on historical data perpetuate historical biases. Platforms optimized for engagement amplify divisive content. Models that "learn" from user behavior can end up teaching users to conform to expected patterns. We build systems that mirror the world as it is, then act surprised when they fail to create the world as it should be.

Unfortunately, this isn't a thought experiment. It's live.

There is no technical-ethical divide

It’s painfully obvious (gestures wildly) that far too many meetings include someone who says something like, "Let's get the technical work done first, then we'll address any ethical issues." As if ethics is something you apply at the end, like a coat of paint… or a marketing campaign.

This framing is backwards and dangerous.

Ethical considerations are technical considerations. They're inseparable from the engineering decisions being made every day. Consider fairness in a loan approval model. You can't retrofit fairness after you've chosen your training data, selected your features, and optimized your objective function. Those upstream decisions encode assumptions about what fairness means and who bears the cost when the model gets it wrong. By the time you're "ready" to think about fairness, the consequential decisions have already been baked in.
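As a toy illustration (every name and number here is invented, and demographic parity is just one of many fairness definitions), here's what measuring a fairness gap might look like. Notice that the gap the metric reports is entirely determined by upstream choices; it can only reveal what the training data, features, and threshold have already encoded:

```python
# Hypothetical sketch: checking demographic parity on loan decisions.
# Decisions, groups, and rates are all invented for illustration.

def approval_rate(decisions, group_labels, group):
    """Share of applicants in `group` who were approved (1 = approved)."""
    in_group = [d for d, g in zip(decisions, group_labels) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_gap(decisions, group_labels):
    """Largest difference in approval rate across groups.
    A gap near 0 satisfies one (of several competing) fairness definitions."""
    rates = {g: approval_rate(decisions, group_labels, g)
             for g in set(group_labels)}
    return max(rates.values()) - min(rates.values())

# Toy output of a model that was already trained, featurized, and thresholded.
decisions    = [1, 1, 0, 1, 0, 0, 1, 0]
group_labels = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"Demographic parity gap: {demographic_parity_gap(decisions, group_labels):.2f}")
# Group A approves at 0.75, group B at 0.25 -- a 0.50 gap the metric
# can surface, but only the upstream design decisions can fix.
```

The point isn't this particular metric; it's that by the time you can compute it, every decision that produced the gap has already been made.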

The same is true for transparency, privacy, accountability, and safety. These aren't post-deployment concerns. They're design requirements that shape everything from data collection to model selection to monitoring strategies.

What AI engineering demands

So, what does it mean to take ethical imperatives seriously? A few things stand out to me.

Stakeholder inclusion from the start

People affected by AI systems need a voice in how those systems are built. Not as an afterthought. Not as "user acceptance testing." But as active participants in defining requirements, evaluating tradeoffs, and determining what success looks like. 

This means navigating disagreements about priorities. It also means building systems that serve the people they're meant to help, rather than just the organizations deploying them.

Adversarial thinking as standard practice

Every AI system will be used in ways you didn't anticipate, by people with different incentives than you assumed, and in contexts you didn't plan for. AI engineering means asking, "How could this go wrong?" not as pessimism but as practice.

Red teaming, stress testing, and scenario planning aren't nice-to-haves. They're how you avoid preventable harms. The goal isn't to imagine every possible failure mode. That's impossible. The goal is to build systems with enough resilience that when unexpected problems surface, they don't cascade.
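A minimal sketch of what "how could this go wrong?" looks like as practice, not a real red-teaming framework, and with a toy scoring function standing in for an actual model: probe the system with small input perturbations and flag unstable behavior before it can cascade in production.

```python
# Illustrative stress-test harness. The scoring function, perturbation
# sizes, and stability threshold are all invented for this sketch.

def score(income, debt):
    """Toy credit score in [0, 1]; a real model would sit here."""
    return max(0.0, min(1.0, 0.5 + 0.000005 * income - 0.00001 * debt))

def stress_test(base_income, base_debt, max_swing=0.05):
    """Nudge each input by +/-1% and report any combination where the
    score moves more than `max_swing` -- a crude stability probe."""
    baseline = score(base_income, base_debt)
    failures = []
    for d_income in (-0.01, 0.01):
        for d_debt in (-0.01, 0.01):
            perturbed = score(base_income * (1 + d_income),
                              base_debt * (1 + d_debt))
            if abs(perturbed - baseline) > max_swing:
                failures.append((d_income, d_debt, perturbed))
    return failures

# An empty list means the toy model stayed stable under small nudges.
print(stress_test(60_000, 20_000))
```

The real versions of this, red-team exercises, adversarial input generation, chaos testing, are far richer, but the habit is the same: deliberately push on the system before someone else does.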

Transparency by default

If you can't explain how your system works to someone affected by its decisions, you're not done building it. This doesn't mean every user needs to understand neural network architectures. It means providing meaningful explanations at the right level of abstraction.

And if your system is too complex to explain in any meaningful way? Well…that's a design problem, not a communication problem.

Continuous accountability

AI systems change over time. Data drifts. Edge cases emerge. What worked in testing fails in production. AI engineering means building the infrastructure to detect when things go wrong and the processes to respond when they do.

Not just technical monitoring, but also outcome monitoring. Are predictions fair? Are decisions defensible? Are the results what we intended? This requires a commitment to ongoing evaluation and the willingness to pause or pull systems that aren't performing as promised.
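One hedged sketch of what outcome monitoring (as opposed to purely technical monitoring) might look like, with baseline rates, thresholds, and data all invented for illustration: compare live per-group approval rates against the rates observed during validation, and surface the drift for a human to act on.

```python
# Hypothetical outcome monitor for a decisioning system. The validation
# baselines and tolerance below are stand-ins, not recommended values.

VALIDATION_RATES = {"A": 0.72, "B": 0.70}  # per-group approval rates in testing
MAX_DRIFT = 0.10                           # tolerated absolute drift per group

def outcome_check(live_decisions):
    """live_decisions: list of (group, approved) pairs from production.
    Returns (ok, per-group drift) so a human can decide whether to
    pause the system, not an automatic verdict."""
    counts, approvals = {}, {}
    for group, approved in live_decisions:
        counts[group] = counts.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + approved
    drift = {g: abs(approvals[g] / counts[g] - VALIDATION_RATES[g])
             for g in counts}
    return all(d <= MAX_DRIFT for d in drift.values()), drift

live = [("A", 1)] * 7 + [("A", 0)] * 3 + [("B", 1)] * 4 + [("B", 0)] * 6
ok, drift = outcome_check(live)
print(ok, drift)  # group B has drifted well past the tolerance
```

The check itself is trivial; the hard part is the organizational commitment behind it, being willing to act when `ok` comes back false.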

Responsibility that you can't outsource

You've heard the deflections: "We have a compliance team for this." Or "That's the legal department's job." Or a personal favorite: "The algorithm is just optimizing what we told it to optimize." Unrestrained growth or optimization without accountability leads down dangerous paths.

[As an aside, have you played Universal Paperclips? It’s a thought experiment on a classic scenario: an AI (the player), given a simple goal (make paperclips), pursues and optimizes production so relentlessly that all available matter in the universe is converted into paperclips, destroying humanity and everything else in the process. It’s quite fun to play, though… your mileage may vary.]

These deflections miss the point. Compliance with existing regulations is the floor, not the ceiling. And "we're just following orders" has never been a compelling defense – for algorithms or the people who build them.

AI engineers make consequential decisions every day. What data to collect. What signals to amplify. What errors to tolerate. What risks to accept. These decisions shape people's lives. That's not something you can delegate to a different department or hide behind an optimization function.

I’m way ahead of you: No, this doesn't mean every AI engineer needs a philosophy degree. (Some light reading wouldn’t hurt…) But it does mean we're responsible for understanding, and being honest about, what our systems do and who bears the cost when they fail.

What this looks like in practice

Ethically grounded AI engineering looks like:

  • A health care AI team that includes patients and clinicians in model evaluation, not just data scientists.
  • A lending model that measures fairness across multiple definitions, not just the one that's easiest to optimize.
  • A content recommendation system with circuit breakers for harmful amplification, not just engagement metrics.
  • Documentation that doesn't just describe what the model does, but who it affects and how.
  • Incident response plans that center on the people harmed, not just system uptime.

It looks like organizations willing to say, "We're not ready to deploy this," and mean it.

The integration challenge

Here's where AI engineering as a discipline becomes critical. Ethical AI isn't a separate track. Rather, it's woven through every phase of the life cycle. Data governance decisions affect fairness outcomes. Model monitoring affects accountability. Deployment strategies affect transparency.

This isn't easy. Sometimes this means slower initial progress. It means uncomfortable conversations about tradeoffs. It means being willing to say "this approach won't work" when the approach treats ethics as checkbox compliance. But the alternative – building powerful systems without grappling with their implications – isn't engineering. It's recklessness, on a schedule, with a deployment pipeline. 

Looking ahead

We've established what AI engineering is, how it integrates technical disciplines, and why ethical considerations are inseparable from technical ones. But understanding the imperative is different from meeting it.

What does it take to do AI engineering well? What skills do teams need? What organizational structures support this kind of work? What cultural shifts make the difference between systems that work and systems that work for people?

In part four, we'll turn to the people and organizational dimensions of AI engineering. Because you can't build better systems without building better teams, and better ways of working to support them.



About Author

Colby Hoke

Senior PMM, SAS Model Manager and Open Source Integration

Hey, I'm Colby. I've spent the last couple of decades in open source, helping turn complex tech into clear, compelling stories. These days, I'm a Sr. Product Marketing Manager at SAS, where I help make sense of things like Viya, AI model deployment, decision intelligence, and open integration—without all the buzzword ick. I believe good marketing should feel more like a conversation than a pitch deck. I've got a soft spot for trail running, 3D printing, em dashes, and raising identical twins…which is basically a masterclass in chaos management. If you're into tech that works, stories that stick, or just want to discuss the joys of pickle juice and gummy bears 30 miles into a run—let's talk.
