The vital ingredients of responsible AI

In my previous article, “The Business Imperative for Responsible AI,” I covered the main business drivers for responsible AI. Beyond the greater good and social responsibility, responsible AI is emerging as a key factor for successful AI adoption.

In this article, I will describe the main ingredients of responsible AI: guiding principles, a governance framework and technical capabilities.

Before we dive in, I will start with a somewhat provocative statement by saying that there is no such thing as “responsible AI,” in the same way that there is no such thing as a responsible calculator. When I mistakenly declare less revenue than I should in my tax return, can we blame the calculator that I used to do it? Can we say that it didn’t behave ethically? I wish we could, but unfortunately, the taxman will not agree. It was my use of the calculator that was to blame, not the calculator itself.

This is why I think we should talk about the responsible use of AI rather than responsible AI. Now, to be fair, the difference between this calculator and a typical AI system is that with the calculator, I was the one providing explicit instructions by entering numbers and operations. I didn’t ask the calculator to learn from previous calculations and to predict a result.

On this preliminary note, let’s look at what it means to develop, deploy and use an AI system responsibly.

Guiding principles for responsible AI

There are many versions of responsible AI principles, and in my opinion, they are all more or less equivalent. Here is a summary.

Human-centred

An AI system should provide meaningful interactions with users. For instance, it can leverage natural language so that even nonspecialists can make sense of its output. It should also incorporate human oversight “in the loop,” especially for high-stakes decisions that can have a significant impact on individuals, society or the environment. There should be formal processes in place to evaluate the risks, review the uses of AI systems, monitor the impacts over time and escalate appeals for individual decisions.

Accountable

Corporate responsibility requires accountability. Organizations achieve this through a governance framework that establishes clear roles and responsibilities across the end-to-end process from data to decisions, clarifies the rules and provides the necessary training to everyone involved in this process.

Fair & impartial

The AI system should not discriminate between categories of people. There should be proactive monitoring of bias in the data we use to train models, in the output of the model scoring against different groups of people and in the fairness of the decisions made with these models.

While we can find bias in all data sets, we can also detect and mitigate it. The data we use to train models should represent the population the model will be applied to.

Teams with diverse backgrounds and skills developing AI will help ensure that AI systems are not only effective but also fair to everyone.
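To make this concrete, here is a minimal sketch (in Python, with hypothetical column names and reference shares) of how training data could be checked against the population it is meant to represent:

```python
import pandas as pd

# Hypothetical reference shares for a sensitive attribute (e.g., from census data).
REFERENCE_SHARES = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

def representation_gaps(train_df: pd.DataFrame, column: str,
                        reference: dict, tolerance: float = 0.05) -> dict:
    """Compare each group's share in the training data to its reference share.

    Returns the groups whose share deviates from the reference by more
    than `tolerance`, so they can be flagged for resampling or review.
    """
    observed = train_df[column].value_counts(normalize=True)
    gaps = {}
    for group, expected in reference.items():
        gap = observed.get(group, 0.0) - expected
        if abs(gap) > tolerance:
            gaps[group] = round(gap, 3)
    return gaps

# Example: a training set where the 55+ group is underrepresented.
train = pd.DataFrame({"age_band": ["18-34"] * 40 + ["35-54"] * 45 + ["55+"] * 15})
print(representation_gaps(train, "age_band", REFERENCE_SHARES))
# {'18-34': 0.1, '35-54': 0.1, '55+': -0.2}
```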

Transparent & explainable

Transparency is a key element of the trust that you should be able to place in AI systems. There are simple questions that you should be able to answer (and, as sketched after this list, record alongside the model), such as:

  • What data did you use to train the model?
  • How well do you know the inner workings, attributes and correlations of the model? Have you documented them?
  • Which variables positively or negatively influenced a specific prediction?
  • What rules/logic did you use along with the prediction to drive a specific decision?
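One lightweight way to keep these answers at hand is a “model card” recorded alongside the model. The sketch below is purely illustrative; the field names and values are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """A lightweight record answering basic transparency questions."""
    model_name: str
    training_data: str            # what data was used to train the model
    documented_attributes: list   # known inner workings: features, correlations
    top_drivers: dict             # variables that most influence predictions
    decision_rules: list = field(default_factory=list)  # rules applied alongside the score

card = ModelCard(
    model_name="credit_risk_v3",
    training_data="loan_applications_2019_2021 (snapshot 2022-01-15)",
    documented_attributes=["income", "debt_ratio", "payment_history"],
    top_drivers={"debt_ratio": 0.42, "payment_history": -0.31},
    decision_rules=["decline if score < 0.2", "manual review if 0.2 <= score < 0.4"],
)
```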

Robust & reliable

The AI system should produce consistent and reliable outputs. You should automate analytical model deployment to avoid error-prone manual activities. Proactively monitor the models in production not only for accuracy but also for fairness. And there should be processes in place to retrain, rebuild or retire a model when needed.
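As an illustration, the retrain/rebuild/retire logic could be codified as a simple policy over monitored metrics. The thresholds below are placeholders; real ones should come out of your risk assessment:

```python
def lifecycle_action(accuracy: float, fairness_gap: float,
                     accuracy_floor: float = 0.80, fairness_ceiling: float = 0.05) -> str:
    """Decide what to do with a deployed model based on monitored metrics.

    `fairness_gap` is the largest difference in outcomes between groups;
    the thresholds here are illustrative only.
    """
    if accuracy < accuracy_floor and fairness_gap > fairness_ceiling:
        return "retire"          # fails on both counts: take it out of production
    if fairness_gap > fairness_ceiling:
        return "rebuild"         # accuracy is fine, but outcomes have become unfair
    if accuracy < accuracy_floor:
        return "retrain"         # refresh the model on more recent data
    return "keep"

print(lifecycle_action(accuracy=0.76, fairness_gap=0.02))  # retrain
```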

Safe & secure

You need to protect the AI system from potential risks, including cyber threats, that may cause physical or digital harm.

Compliant

The AI system should be compliant with key regulations, such as the EU's GDPR (General Data Protection Regulation). Compliance means allowing individuals to opt in or out of sharing their data, not using personal data beyond its intended and stated purpose, and being able to “explain” individual decisions.
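As a minimal sketch, purpose limitation and opt-in consent can be enforced with an explicit consent record that every data use is checked against (the identifiers and purposes below are hypothetical):

```python
# Hypothetical consent record: which purposes an individual has opted in to.
consents = {
    "customer_42": {"fraud_detection", "service_improvement"},
}

def use_is_allowed(individual_id: str, purpose: str) -> bool:
    """Allow a data use only if the individual opted in to that purpose."""
    return purpose in consents.get(individual_id, set())

print(use_is_allowed("customer_42", "fraud_detection"))  # True
print(use_is_allowed("customer_42", "marketing"))        # False: beyond stated use
```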

Ethical

And, finally, the AI system should be ethical. Ethics can have different meanings for different people or in different countries. So what I mean by ethical AI is the ability to comply with a specific code of ethics, which depends on the values of the organization in charge of the AI system and the country and industry it is used in. It typically includes human rights, social well-being and sustainability.

A Governance Framework for Responsible AI

Mission objective

All responsible AI principles require some level of governance. The goal is to clarify how the principles should be interpreted and translated into practical guidelines, and to ensure that safeguards are in place at every stage of the analytics life cycle.

As part of day-to-day business operations, the governance framework should orchestrate the collaboration between different teams and monitor the outcomes and impacts of AI systems.

Standards and oversight

The first task is to translate generic, high-level principles into practical guidelines. Some interpretation is required to specify the rules and requirements that AI systems should comply with, given the specific context of the industry and country in which the organization operates and its corporate values.

Without trying to be exhaustive, let’s look at some examples of the standards that a governance framework should aim to define:

  • The method and criteria to evaluate the risks of a specific use case and, based on the risk level, what approvals will be needed and what the implications will be for the development, deployment and monitoring of the associated models.
  • A classification of the data to be used for analysis into prohibited, protected and low-risk variables, and how they should be handled during the development and deployment process (see the sketch after this list). For instance, you should not use prohibited variables as part of model training, but they need to be available to test a model for bias. Protected variables might be used in specific contexts but not in others, and you might need to anonymize or mask them.
  • Which biases are deemed acceptable or unacceptable, acknowledging the fact that machine learning models generate bias by nature.
  • The levels of transparency, explainability and documentation that are required based on the risk assessment, as well as specifying the techniques to be used for interpretability.
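Such a classification can be codified so that pipelines enforce it consistently. Here is a minimal sketch with hypothetical variable names; note how prohibited variables are withheld from training but retained for bias testing, as described above:

```python
import pandas as pd

# Hypothetical classification of variables, as defined by the governance framework.
VARIABLE_CLASSES = {
    "ethnicity": "prohibited",   # never used for training, kept for bias testing
    "gender": "prohibited",
    "postcode": "protected",     # usable in specific contexts, masked elsewhere
    "income": "low_risk",
    "tenure_months": "low_risk",
}

def split_for_training(df: pd.DataFrame) -> tuple:
    """Separate training features from variables reserved for bias testing."""
    prohibited = [c for c in df.columns if VARIABLE_CLASSES.get(c) == "prohibited"]
    features = df.drop(columns=prohibited)
    bias_test_only = df[prohibited]   # retained solely to test the model for bias
    return features, bias_test_only
```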

There should be some formal processes and checklists to:

  • Monitor and mitigate bias in the data used to train models.
  • Monitor and remediate bias in the predictions made by the models.
  • Test the validity and compliance of AI systems before deployment.
  • Monitor the performance, accuracy and fairness of models over time.

Orchestration

There are two different orchestration requirements for an AI governance framework.

The first one is the need to orchestrate the various teams involved in the development and deployment of an AI system: business analysts, data scientists, data engineers and IT, but also legal, risk and compliance. These teams have specific backgrounds and stakes and generally do not have a history of effective collaboration. It is the role of the governance framework to specify the nature and modalities of this collaboration, establish clear roles and responsibilities, and provide adequate training to all the people involved.

The second requirement concerns the orchestration of human and algorithmic insight to drive decisions. We know that algorithms are great at analyzing vast amounts of data and providing answers to specific questions. But they are also very narrow, can’t generalize, and lack the context, culture and common sense that humans acquire through experience. Since the most common use of AI technologies is to help organizations drive automated or semiautomated decisions, humans should always be part of the equation.

Some decisions, such as the placement of a promotional offer on a webpage based on the visitor’s profile, don’t require a high degree of human oversight, although it might be useful to monitor the accuracy and effectiveness of those decisions on a regular basis. Other decisions, such as the approval of a credit request, might need human arbitration in some cases. On the other side of the spectrum, high-impact decisions – for instance, in justice, health care or social areas – will ultimately require a human being to make the final call. In these cases, algorithmic scoring is simply an additional input into the decision-making.

A proper risk assessment will help determine to what degree human beings should be inside or outside the loop (that is, the decisioning process).
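A simple way to codify the outcome of that assessment is a routing table from risk level to oversight mode, as in this illustrative sketch:

```python
def oversight_mode(risk_level: str) -> str:
    """Map a use case's assessed risk level to a degree of human oversight."""
    routing = {
        "low": "automated",        # e.g., placing a promotional offer
        "medium": "human_review",  # e.g., credit approval: human arbitration on edge cases
        "high": "human_decides",   # e.g., justice or health care: the score is one input only
    }
    return routing.get(risk_level, "human_decides")  # default to the safest mode

print(oversight_mode("medium"))  # human_review
```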

Organizational model

It is very clear that the evolution of technology has outpaced society and has gone largely unchecked for some time. In addition, the prospect of gaining a competitive advantage before others means that ethical considerations have often been handled as an afterthought or a luxury. For those reasons, there is still very little experience in the way AI should be governed and in the appropriate governance structure that should be established.

But things are evolving fast. The question of AI governance is increasingly important for organizations that want to realize the benefits of AI investments. They know that responsible AI is not only the right thing to do but also a fundamental condition for the success of AI initiatives.

We can highlight a few takeaways from early adopters:

  • It is far more effective to embed new governance requirements as part of existing governance structures and processes, rather than create something from scratch. It is much easier to extend the mission of a governance board than to mobilize a new set of stakeholders with a new mandate.
  • To the inevitable question of whether you should hire a data/AI ethicist, I would say: why not? But it is unlikely that you can find one who has all the qualities and experience that such a role entails. In fact, you might be better served by a multidisciplinary team of experts in charge of establishing the standards and processes, even if they’re not dedicated resources.
  • Because establishing an AI practice that is both productive and responsible requires many changes in ways of working at every level, and more collaboration across teams and silos, there should be a clear executive mandate and a sponsor to drive those changes.

Technical capabilities

Unfortunately, to address the challenges of responsible AI, it is not enough to simply buy a piece of software. But there are technical capabilities that you will need to support your efforts.

Data privacy & quality

As part of the data preparation phase, there should be tools available to automatically profile data sets to measure and rectify data quality, and to identify and flag sensitive variables. For example, regulations such as the GDPR protect personal data, but profiling should also flag variables (and their proxies) that could lead to unacceptable bias on the basis of age, gender, ethnicity, location, etc.

Additional capabilities include the ability to mask or anonymize data, and the use of synthetic data, which can be handy for protecting privacy while retaining the underlying structure and statistical distribution of the original data.
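For illustration, here is a minimal masking sketch: it pseudonymizes a direct identifier with a salted hash and generalizes exact age into bands. The column names and salt are hypothetical, and a production setup would manage the salt as a secret:

```python
import hashlib
import pandas as pd

def mask_identifiers(df: pd.DataFrame) -> pd.DataFrame:
    """Pseudonymize direct identifiers and generalize quasi-identifiers."""
    out = df.copy()
    # Replace the email with a salted hash: stable for joins, not reversible in practice.
    out["email"] = out["email"].map(
        lambda e: hashlib.sha256(("some-secret-salt" + e).encode()).hexdigest()[:16]
    )
    # Generalize exact age into coarse bands to reduce re-identification risk.
    out["age"] = pd.cut(out["age"], bins=[0, 34, 54, 120],
                        labels=["18-34", "35-54", "55+"])
    return out

customers = pd.DataFrame({"email": ["jane@example.com"], "age": [41]})
print(mask_identifiers(customers))  # hashed email, age shown as "35-54"
```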

Bias detection and remediation

We know that bias is inevitable. But we also know we can detect, monitor and remediate it.

It starts with the data used to train machine learning models. There should be proactive profiling of the data set to evaluate how well a diverse population is represented, exploring data distributions to identify anomalies and potential bias.

There should also be an evaluation of the model’s scoring outcomes, to ensure that predictions are not only accurate but also comparable across different groups and different sensitive attributes of the target population.

This should be automated for ongoing monitoring and surfaced through bias reports or fairness dashboards, in terms that are understandable even by nonspecialists.
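As an example of such a check, the sketch below computes each group’s positive-outcome rate relative to a reference group; ratios below 0.8 are often flagged under the “four-fifths” rule of thumb. The data and group names are hypothetical:

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     reference_group: str) -> pd.Series:
    """Ratio of each group's positive-outcome rate to the reference group's rate.

    A common (if simplistic) heuristic flags ratios below 0.8 for investigation.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

scores = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})
ratios = disparate_impact(scores, "group", "approved", reference_group="A")
print(ratios)  # A: 1.00, B: 0.33 -> group B falls well below the 0.8 threshold
```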

Explainability & transparency

As part of the data science toolkit, there should be out-of-the-box interpretability capabilities. These help you explain a model’s output, and the decisions based on it, to auditors and customers.

You should be able to explain how a specific variable affects a model’s predictions. And you need to explain why a model makes a specific prediction, for example when a customer is denied a loan. A decision like this is generally the result of a logical decision flow including business rules and predictive models. Therefore, there is a need for some level of decision auditability: the ability to explain, in human language, the logic of the decision-making, the underlying variables that contributed to certain predictions, and how decisions impact different groups based on sensitive variables.
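To illustrate per-prediction explanation, here is a deliberately crude sketch that perturbs one feature at a time and measures the change in the predicted probability. It assumes a scikit-learn-style classifier with a predict_proba method; principled methods such as Shapley values are preferable in practice:

```python
import numpy as np

def local_sensitivity(model, x: np.ndarray, background: np.ndarray) -> dict:
    """Crude per-prediction explanation: replace each feature with its
    population mean and measure how the predicted probability changes.

    Not a substitute for principled attribution methods, but it illustrates
    the idea of explaining a single prediction (e.g., a denied loan).
    """
    baseline = model.predict_proba(x.reshape(1, -1))[0, 1]
    means = background.mean(axis=0)
    contributions = {}
    for i in range(x.shape[0]):
        x_mod = x.copy()
        x_mod[i] = means[i]
        p = model.predict_proba(x_mod.reshape(1, -1))[0, 1]
        # Positive value: this feature's actual value pushed the score up.
        contributions[f"feature_{i}"] = round(baseline - p, 4)
    return contributions
```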

Model governance & monitoring

An AI governance framework should have the ability to centrally manage and govern all the models you develop and deploy in a production environment. An enterprise model repository should provide metadata, versioning and audit trails.

The system should automatically create reports on model parameters, drivers and performance statistics. Producing these reports in natural language helps everyone understand the impact of decisions. Model performance statistics and interpretability reports should help to identify bias and shortcomings.
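Generating such a narrative can be as simple as templating key facts into plain language, as in this illustrative sketch (the model name and figures are made up):

```python
def model_report(name: str, version: str, auc: float, top_drivers: dict) -> str:
    """Render key model facts as plain language for nontechnical readers."""
    drivers = ", ".join(f"{k} ({v:+.0%})" for k, v in top_drivers.items())
    return (
        f"Model {name} (version {version}) currently achieves an AUC of {auc:.2f}. "
        f"The variables with the strongest influence on its predictions are: {drivers}."
    )

print(model_report("credit_risk", "3.2", 0.84,
                   {"debt_ratio": 0.42, "payment_history": -0.31}))
```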

Traceability is also an important capability: for instance, the ability to establish the lineage from data to decisions. We need to be able to identify the following (a minimal record structure is sketched after this list):

  • The data used to train a model.
  • The data used to make a prediction.
  • Which model made a decision.
  • The impact of the decision.
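A minimal lineage record covering those four questions might look like the following sketch (all identifiers are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One immutable lineage entry linking data, model and decision."""
    decision_id: str
    model_id: str            # which model (and version) made the decision
    training_data_ref: str   # which data set trained that model version
    input_ref: str           # reference or hash of the data used for this prediction
    prediction: float
    decision: str
    timestamp: str

record = DecisionRecord(
    decision_id="d-001",
    model_id="credit_risk_v3.2",
    training_data_ref="loans_2019_2021@2022-01-15",
    input_ref="sha256:9f2c...",  # hypothetical hash of the scored input row
    prediction=0.18,
    decision="declined",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
```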

Ongoing monitoring of the models in production should surface any drift in performance or changes in the way that input variables affect predictive scores over time. When unacceptable thresholds are reached, the system should generate alerts.
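One common drift measure is the population stability index (PSI), which compares a variable’s live distribution to its training distribution. The sketch below uses widely cited heuristic thresholds (stable below 0.1, significant drift above 0.25), which you should calibrate to your own context:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a variable's training distribution and its live distribution."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) in sparse bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.5, 0.1, 10_000)
live_scores = rng.normal(0.6, 0.1, 10_000)   # the live distribution has shifted
psi = population_stability_index(train_scores, live_scores)
if psi > 0.25:
    print(f"ALERT: significant drift detected (PSI = {psi:.2f})")
```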

Wrapping up

While it’s difficult to argue against the validity of responsible AI principles, putting them into practice is far more complex. You must translate them into practical guidelines and put a formal governance framework in place. This is the basis for the rules, standards and safeguards you need to oversee the development, deployment and usage of analytical models. The governance framework should incorporate an organizational model with clear roles and responsibilities, formal processes to orchestrate the analytics life cycle, and technical capabilities to support those efforts.

If you would like to discover more on this topic, watch the webinar, Accelerate Innovation With Responsible AI, by registering here.

About Author

Olivier Penel

Advisory Business Solutions Manager

With a long-lasting (and quite obsessive) passion for data, Olivier Penel strives to help organizations make the most of data, comply with data-driven regulations, fuel innovation with analytics, and create value from their most valuable asset: data. As a global leader at SAS for everything data management and privacy-related, Penel enjoys providing strategic guidance, and sharing best practices and experiences in using data governance and analytics as a catalyst for digital transformation.
