Who is responsible for ensuring that new AI technologies are fair and ethical? Does that responsibility land on AI developers? On innovators? On CEOs? Or is the responsibility more widespread? At SAS, we believe that it is everyone’s duty to innovate responsibly with AI.

We believe that embedding trustworthy AI principles into new technology solutions helps customers deliver fair, equitable, and transparent outcomes for all.

Powerful forces are driving the need for trustworthy AI: a moral imperative, the drive of conscientious consumers, and emerging regulatory mandates. Let’s take a closer look at each.

1. The moral imperative to do good

With great power comes great responsibility, and with AI, this saying has never been more accurate. AI is a powerful tool that should be used for good. As an AI and analytics organization, SAS' vision is to be the planet's most trustworthy AI and analytics partner. We believe in connecting analytics and advocacy to create something new, better, purposeful, and lasting. That is why SAS pairs AI and analytics with advocacy to support causes such as improving education, innovating better healthcare, protecting the environment, and making our workplace more equitable.

2. Conscientious consumers want us to do good

Informed consumers are aware of the power of AI and its potential to do harm. They understand their data rights and are becoming more careful about allowing companies to use their data. They prefer to do business with companies that have a strong ethical code and a track record of doing what is right, rather than companies motivated only by regulatory requirements and the fear of public backlash.


Equally important, customer expectations are changing. A 2021 study found that 69% of consumers believe brands must positively change the world. The increasingly "conscientious" nature of consumers is why organizations should invest in trustworthy AI and adhere to best practices when developing and implementing AI. These principles can help shape a corporate culture of responsible innovation and use. Without this culture, organizations may make mistakes as their pace of innovation outstrips their guardrails.

The more AI is used – and used at scale – the more companies need guardrails to maximize AI’s incredible potential value and avoid unintended negative consequences.

3. Regulatory mandates require us to do good

The third motivator for trustworthy AI is regulation. Governments worldwide are attempting to balance the benefits of AI with the need to protect citizens from unfair AI-driven decisions. As a result, there is a growing number of regulatory strategies. For example:

  • The EU has proposed regulations specifically for AI. These regulations could become the benchmark by which other governments set their standards. The EU AI Act incorporates a framework for classifying AI systems into multiple levels of risk and tailoring requirements to each level.
  • The UK government has proposed a light-touch approach that will regulate the use of AI rather than the technology itself. Six fundamental principles will apply to all relevant parties within the AI lifecycle and will be interpreted and applied by regulators across different markets.
  • Brazil has taken a risk-based approach to the regulation of AI. The country’s AI commission proposes the creation of a regulator with enforcement powers and rights for citizens affected by AI systems.
  • Canada’s Artificial Intelligence and Data Act (AIDA) is a proposal designed to protect citizens from the harms and biased outputs AI systems can generate and sets out national requirements for designing, developing, using and providing AI systems.
  • The U.S. Office of Science and Technology Policy (OSTP) has published its non-binding "Blueprint for an AI Bill of Rights," which is designed to help protect the public from harm.

Anyone who develops technology designed to make decisions that meaningfully impact humans should bear the responsibility of ensuring transparent and equitable outcomes. As outlined above, this sentiment is woven throughout proposed regulations worldwide. Meeting that responsibility also requires navigating ethical dilemmas and tensions that cannot be fully resolved.

That is why the responsibility falls on all of us to learn the principles of responsible innovation and strive to implement them daily.

Read more stories in this series about the principles of data ethics


About Author

Kristi Boyd

Trustworthy AI Specialist

Kristi Boyd is the Trustworthy AI Specialist with SAS' Data Ethics Practice (DEP) and supports the Trustworthy AI strategy with a focus on the pre-sales, sales & consulting teams. She is passionate about responsible innovation and has an R&D background as a QA engineer and product manager. She is also a proud Duke alumna (go Blue Devils!).

