Here's what separates responsible innovators from everyone else: they ask uncomfortable questions before they build, not after.

At the SAS Hackathon, five teams proved this in practice. Working under real-world constraints and problem statements, they didn’t start with frameworks or compliance checklists. They started with three simple questions:

  • For what purpose?
  • To what end?
  • For whom might this fail?

That ethical inquiry shaped every decision they made.

For what purpose?

The first question is deceptively simple: Why are we building this at all?

Consortix began here when designing their anti-money laundering (AML) Unified Rule Orchestration and Risk Agent (AURORA) as part of the hackathon.

Their focus extended beyond efficiency. The goal was to support human judgment while improving fairness and accountability. That clarity influenced the system’s design choices.

Explainability, governance, and human review were built in from the start. Those elements weren’t optional features; they were necessary to ensure decisions could be understood, reviewed and corrected when needed.

REAiHL Lab faced the same clarity question in their health care solution for the SAS Hackathon. In exploring AI-powered ambient scribes, they focused on three key outcomes: patient safety, clinician protection and institutional accountability.

That emphasis informed how the system was evaluated and monitored. Ethical monitoring became part of the workflow. Validation included diverse patient groups. Oversight wasn’t treated as friction but as a foundation.

Butterfly Data asked their purpose question while working on their travel disruption forecasting solution. They could have said "optimize rail networks." Instead, they defined their goal as creating a resilient, user-centric urban transport system. That framing put passengers first, with safety and trust as non-negotiable. As a result, they used only open-source, non-personal data.

To what end?

This question defines boundaries.

NLPioneers had to answer this when processing sensitive survey data in their SAS Hackathon project. The technical capability to analyze more data existed, but the team focused on appropriate use.

They implemented automated detection and masking of personally identifiable information before analysis began. Privacy protections were enforced at the pipeline level, not added later.
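
The article doesn't show NLPioneers' implementation; as a rough sketch, pipeline-level masking of a few common identifier types might look like the following (the patterns and function names are illustrative, not the team's actual rules):

```python
import re

# Illustrative patterns only; a production system would rely on a vetted
# PII-detection library rather than hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ -]?)?(?:\(\d{3}\)|\d{3})[ -]?\d{3}[ -]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected identifiers with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def ingest(records):
    """Pipeline entry point: masking runs before any analysis step sees the data."""
    return [mask_pii(r) for r in records]
```

The key design point matches the article: masking lives at the ingestion boundary, so no downstream analysis step can receive unmasked text.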

Super SAIJAN faced a similar boundary question when they took on the task of modernizing public services in Jakarta, Indonesia. Technology can be compelling: CCTV analysis, algorithmic decisions, everything automated. But they asked, "To what end?" What were they willing to compromise? The answer was clear: not fairness, and not transparency. Uncertain cases got flagged for human review, because sometimes the most important decision is knowing when to keep humans in the loop.
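
A minimal sketch of that human-in-the-loop boundary, with hypothetical confidence thresholds (the names and values are illustrative, not Super SAIJAN's actual system):

```python
from dataclasses import dataclass

# Hypothetical thresholds; real values would be chosen from validation data.
AUTO_APPROVE = 0.90
AUTO_REJECT = 0.10

@dataclass
class Decision:
    outcome: str   # "approve", "reject", or "human_review"
    score: float

def route(score: float) -> Decision:
    """Automate only the confident cases; flag uncertain ones for a person."""
    if score >= AUTO_APPROVE:
        return Decision("approve", score)
    if score <= AUTO_REJECT:
        return Decision("reject", score)
    return Decision("human_review", score)
```

The point of the pattern is the middle band: everything the model isn't sure about is routed to a human rather than decided automatically.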

Ethical inquiry here functioned as decision-making discipline: clear limits, applied consistently.

For whom might this fail?

No system works equally well for all users. This question surfaces potential harm early.

Consortix recognized that their system could fail by unfairly flagging customers if bias checks were ignored or governance was not in place. During the SAS Hackathon, they built continuous validation into their solution. Not just as a nice-to-have, but as a safeguard.
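
The write-up doesn't include Consortix's validation code; one common form of continuous check, sketched here with an illustrative four-fifths-rule threshold, compares flag rates across customer groups and alerts when they diverge:

```python
from collections import defaultdict

def flag_rate_ratio(records):
    """records: iterable of (group, was_flagged) pairs.

    Returns the ratio of the lowest group flag rate to the highest;
    1.0 means all groups are flagged at the same rate.
    """
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, was_flagged in records:
        total[group] += 1
        flagged[group] += int(was_flagged)
    rates = {g: flagged[g] / total[g] for g in total}
    return min(rates.values()) / max(rates.values())

def passes_bias_check(records, threshold=0.8):
    """Illustrative gate based on the widely used four-fifths rule."""
    return flag_rate_ratio(records) >= threshold
```

Run on a schedule against recent decisions, a gate like this turns "bias checks were ignored" from a silent failure into a visible alert.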

REAiHL Lab understood that no monitoring system catches everything. KPIs might miss subtle bias affecting underrepresented groups. They designed continuous feedback loops with diverse patient groups. They built in accountability for the gaps they hadn't seen yet.

The NLPioneers team asked who their system might exclude. Rare identifiers that PII masking might miss? Underrepresented linguistic groups, where their models might be less accurate? They addressed this by making the model pipeline transparent and auditable, where every rule is reviewable by humans who can catch what algorithms may miss.

Each team recognized that every solution has blind spots, and that the responsible move is to acknowledge them rather than pretend they don't exist.

Why this matters right now

Ethical inquiry changes how systems are built.

These five SAS Hackathon teams used it to think more clearly about purpose, limits, and risk. That clarity showed up in the outcomes.

  • Consortix moved faster because continuous bias checks reduced the risk of unfair flagging downstream.
  • REAiHL Lab built smarter because continuous monitoring caught drift early.
  • Butterfly Data built with confidence because using only open-source, non-personal data kept privacy risk out of the system from the start.
  • NLPioneers owned their limitations instead of being surprised by them.
  • Super SAIJAN likely earned public trust from day one because its solution was designed for human judgment.

This is what AI built right looks like.

Want to get involved or learn about SAS Hackathon projects? Visit the SAS Hacker's Hub.

About Author

Vrushali Sawant

Data Scientist, Data Ethics Practice

Vrushali Sawant is a data scientist with SAS' Data Ethics Practice (DEP), steering the practical implementation of fairness and trustworthy principles into the SAS platform. She regularly writes and speaks about practical strategies for implementing trustworthy AI systems. With a background in analytical consulting, data management and data visualization, she has been helping customers make data-driven decisions for a decade. She holds a Master's in Data Science and a Master's in Business Administration.
