The business imperative for responsible AI


With the steep rise of artificial intelligence (AI) adoption across all facets of society, ethics is proving to be the new frontier of technology. Public awareness, press scrutiny and upcoming regulations are forcing organizations and the data science community to consider the ethical implications of using AI.

The need for responsible AI has never been more pressing. Gartner identified “Smarter, Responsible and Scalable AI” as the No. 1 market trend for data and analytics in 2021. In April 2021, the European Commission published a proposed regulation for AI. And it is not just another instance of EU regulatory frenzy. Many governments across the world are developing similar regulatory frameworks for AI.

We should be asking questions about:

  • The fitness of the data used to train machine learning models.
  • The risk of amplifying human bias and discrimination.
  • The explainability of predictions and decisions made by algorithms.
  • The role of human oversight in monitoring the fairness and transparency of AI applications.

Beyond the greater good and social responsibility, what is at stake is the successful adoption of AI technologies that otherwise have the potential to deliver tremendous value to organizations and society at large.

With awareness and maturity rising, we are no longer asking what we can do with AI technologies. Instead, we are asking how we can use AI effectively, at scale and responsibly, to achieve tangible business outcomes.

Three forces are converging: acceleration, delegation and amplification.

Acceleration

The exponential increase in data, storage and computing capacity makes acceleration possible. And the COVID-19 crisis has only fuelled it over the last 18 months. A recent survey from McKinsey shows that responses to COVID-19 sped up the adoption of digital technologies by several years, and that many of those changes could be here for the long haul. It’s not just about supporting remote working. Organizations have realized that they need to rely on their analytics and digital services to build resilience, competitiveness and the ability to survive in uncertain times.

Acceleration is also driven by the “tyranny of instantaneity,” that is, consumer demand for instant access to online services. Just like children, we want everything, and we want it now! We expect that products, services, information – everything – should be available online and immediately. At the end of the day, who wants to go back to facing the rain and the crowd to shop in high-street retailers? Who wants to go back to the time when, to get a loan, you had to visit a bank branch, speak to a human being, fill in a few forms and wait three weeks (if you were lucky) to get a response?

The ability to drive real-time automated decisions with AI is becoming a competitive differentiator. If I can’t get my loan approved online immediately, I will simply go to another bank. If the product I want to buy cannot be delivered tomorrow, I’ll just buy it from another retailer.

Delegation

The second aspect that strikes me is how much of our decision making we are prepared to delegate to algorithmic robots.

For instance, it is estimated that today 85% of all stock market trading is done automatically by algorithms, with no intervention from human beings. This is convenient and lucrative for traders, investment banks and hedge funds. But we know it can exacerbate stock market crashes, with real-life consequences for the lives and wealth of individuals. This raises questions about the accountability and explainability of the algorithms used for automated trading.

Stock trading might be an extreme example. But there are so many decisions that we now expect computers to make for us, including:

  • The information we get in search engines.
  • Directions to travel from A to B.
  • Operation of traffic lights, planes and nuclear reactors.
  • Protection of our bank accounts against fraudsters.

We seem to have blind faith in the capacity of algorithms to make the right decisions. And in a way, algorithms are much better than humans at certain things, like processing vast amounts of data to identify patterns and make predictions. But they lack common sense, culture and context. They just learn from the data we feed them. They might be very effective, but they are also very narrow. This can lead to AI systems making predictions or decisions that are technically correct but socially unacceptable.

We see AI applications extending, and sometimes replacing, humans to make decisions on our behalf, raising questions such as:

  • Can we trust them?
  • What standards and codes of ethics do AIs use to make those decisions?
  • Who’s responsible for the decisions AIs make?
  • What are the impacts of those decisions?
  • Why shouldn’t algorithms be held to similar codes of ethics and standards, and face the consequences of bad decisions, just as humans do? (For an algorithm, that would mean being retrained, rebuilt or retired.)

Amplification

Amplification refers to how much AI applications can achieve and how many decisions they can make in the blink of an eye. We know that algorithms sometimes make bad predictions, and those bad predictions lead to wrong decisions, with tangible impacts in the real world.

It’s one thing for an individual to make a bad or unfair decision. But algorithms make decisions at scale and in real time, so the consequences of bad decisions get much bigger very quickly. The reach of AI systems is millions of times greater. In effect, AI acts as an amplifier or an echo chamber: the biases that exist in our brains, conscious or unconscious, and the discrimination that exists in the real world, are being amplified by AI applications.


These concerns are perfectly justified, but the overwhelming media coverage that failing AI systems receive doesn’t help. Not a single day goes by without one of these high-profile cases making the headlines. The killer-robot fantasy is great material for journalists, but it’s fuelling fears among the general public and business leaders.

To some extent, this is understandable. Public awareness and regulators are catching up with technical innovations that went unchecked for some time. Some people have decided to take the problem into their own hands. There are countless initiatives and groups determined to tackle the issue by raising awareness, developing guidelines and best practices, and challenging legislators.

One-sided coverage

I recently watched a documentary called “Coded Bias,” which sheds light on the discrimination and lack of transparency in certain uses of AI technologies, such as facial recognition. The documentary provides an enlightening perspective on the ethical issues surrounding the use of AI. But it completely misses the counterargument: AI can also be good for society.

Let’s not forget the huge benefits that AI can and does deliver to individuals, society and the environment. AI is already, for example:

  • Helping to protect the Amazon rainforest.
  • Supporting endangered species.
  • Providing more insight to help doctors make the right diagnosis earlier.
  • Tackling issues like child abuse, homelessness and mental health care.

The problem is that highly publicized negative cases overshadow all this goodness. One story you probably heard about concerns the Tay chatbot released by Microsoft on Twitter in 2016. In just 16 hours, the chatbot went from saying “Humans are super cool” to posting racist, sexually charged and otherwise deeply offensive comments.

The fact is that Tay was learning from the people on Twitter who interacted with it. And they clearly had a lot of fun throwing horrible things at it. The issue was perfectly summed up when Tay responded to someone who said “You are a stupid machine” by replying “Well, I learn from the best, if you don’t understand that, let me spell it out for you: I learn from you and you are dumb too.” It’s a funny example of AI gone wrong, but it was also a great learning experience for data scientists at Microsoft and beyond about the need for oversight of the data fed to machine learning algorithms, and about accountability for the output of AI applications.

A barrier to innovation?

These highly publicized cases have the positive effect of raising awareness about how things can go wrong. But they also raise fears among decision makers, to the point of slowing down the adoption and rollout of AI technologies. According to a recent study from Deloitte, 95% of executives surveyed said they were concerned about the ethical risks of AI adoption. They worried about damaging their brand and eroding the trust that customers, partners and employees have placed in it, to the point that 56% of them were slowing down their AI adoption.

And yet AI is the key to accelerating digital transformation and developing market differentiation and competitive advantage. It’s a race, and success is primarily measured in power and speed, which means that ethical considerations often come as an afterthought, treated as a regulatory or liability issue.

The business imperative

It is time to strike a balance between these two seemingly opposing worlds: on one side, ethics and the greater good; on the other, innovation and profits.

This tension between ethics and business is more apparent than real: responsible AI is actually good for business. Responsible AI is about trust, and it’s emerging as a business imperative, a key success factor for digital transformation.

There are various reasons why organizations care about responsible AI. In most cases, it’s about building the trust users need to adopt AI applications, and therefore securing the value expected from those applications. I recently worked with an organization where data scientists had developed a predictive model to help the sales team focus its efforts on the sales opportunities with the highest likelihood of success. The goal was, ultimately, to increase revenue through smarter use of resources. However, because of the lack of transparency in the inner workings of the model, and therefore the lack of trust in its predictions, most sellers tended to ignore it and simply used their instincts to prioritize their work. The AI system, designed as a black box, was never adopted by the business, and the value was never realized.

Standing out from the competition

Organizations also use responsible AI as a way to differentiate themselves from their competitors. In the same way that coffee brands advertise fair-trade business practices, responsible AI is becoming something of a marketing label for shedding a positive light on the brand. Because most consumers are keen to do the right thing, they tend to trust, and do business with, companies that can demonstrate their ethical values and corporate responsibility. Some are even talking about responsible AI as the “new green,” a mandatory attribute of every corporate PR strategy.

Risk mitigation is also an important driver for responsible AI. This includes the risk of negative public exposure when the use of an AI application leads to discrimination. It also includes the risk of angering customers over how their data is used, why their loan application was declined, or why they were given different quotes for their insurance policy based on their location or the device they used.

Finally, it’s about compliance: compliance with stakeholders’ demands for ethical business practices, and compliance with existing and future regulations. Preparing for those regulations is going to become a priority for organizations that want to avoid hefty fines.

Wrapping up

Overall, the planets have aligned for ethics to catch up with the technology, to make the distinction between what we can do and what we should do. For business leaders, ignoring the ethical risks of adopting AI is not an option anymore. Mitigating those risks doesn’t have to be a barrier to innovation. And, if done right, it also has the potential to boost business resilience and competitiveness.

In my next article, I will cover the ingredients of responsible AI: guiding principles, governance framework and capabilities.

If you would like to discover more on this topic, watch the webinar, Accelerate Innovation With Responsible AI, by registering here.



About Author

Olivier Penel

Advisory Business Solutions Manager

With a long-lasting (and quite obsessive) passion for data, Olivier Penel strives to help organizations make the most of data, comply with data-driven regulations, fuel innovation with analytics, and create value from their most valuable asset: data. As a global leader at SAS for everything data management and privacy-related, Penel enjoys providing strategic guidance, and sharing best practices and experiences in using data governance and analytics as a catalyst for digital transformation.
