Traceability and trust: Top premise for ethical AI decisions


In the first part of this series, I argued that dealing with artificial intelligence (AI) and ethics is not a purely philosophical or sociopolitical issue. One thing is clear: The ethics debates will continue this year, and they will focus more on the realistic possibilities and risks of AI. Companies and organisations that want to automate decisions need to be prepared to answer the question of how those decisions are made.

Many questions demand many answers

As a customer, consumer, citizen or patient, I have a legitimate interest in understanding why I was not granted the credit or insurance I requested, or why data-based diagnostics assign a certain medical risk to me. What data was used for this? Did I give my consent to the use of my data at all? Were the decisions made without prejudice? Can an organisation really want such decisions to be made entirely without human control? And what would such control even look like?

The relevant issues for companies are therefore governance of the decision-making process, transparency and the ability to explain decisions. All stakeholders must be able to trust algorithm-driven, data-based decisions – without fully understanding the underlying algorithms, processes or decision rules.

How can we address this question of traceability from a process and methodological point of view? How does a company achieve the necessary transparency? What does the term "governance" cover in this context?

Governance, yes! But what exactly does that mean?

In his report The Automated Actuarial, Scott Shapiro of KPMG presents the "four pillars of trust." His recommendations are generally applicable and not limited to actuaries (even if actuaries are particularly dependent on transparency and traceability). The four pillars are quality, resilience, integrity and effectiveness. Consequently, companies should pay attention to the following points.

    1. Show which data you have used in which quality. After all, all decisions are based on data. And the less control your company has over this data, the more important it is to have a clear idea of where it comes from, how it was created and how good it is. This is particularly important for the many "new" data types from the fields of telematics and IoT, but also for external data from open sources, for images or text.
    2. You should ask yourself how resilient your overall analytical process is. Do you only have a one-time lab process? Or is your analytics life cycle designed for the long term? The challenge here is to ensure governance and security along the entire analytical process chain, from data to automated decision making.
    3. Ensure the integrity of the data analysis. Document processes and the choice of your methods. Do they fit the question? Are they mathematically reasonable? Can you explain and justify the procedure?
    4. Does analytics do what it should (effectiveness)? Are the findings reliable? Are the resulting decisions nondiscriminatory?
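The first pillar – knowing which data you used and in which quality – can be made concrete as a simple provenance record with an automated quality gate. The following is a minimal sketch with hypothetical field names and thresholds, not a production design:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetProvenance:
    """Provenance record for one dataset feeding an automated decision."""
    name: str
    source: str              # where the data comes from (core system, open source, IoT feed ...)
    collected_on: date       # when the data was created or collected
    consent_obtained: bool   # did the data subject consent to this use?
    completeness: float      # share of non-missing values, 0.0 .. 1.0
    issues: list = field(default_factory=list)

def quality_gate(ds: DatasetProvenance, min_completeness: float = 0.95) -> bool:
    """Pillar 1 in miniature: reject datasets without consent or with too many gaps."""
    if not ds.consent_obtained:
        ds.issues.append("no consent for this use")
    if ds.completeness < min_completeness:
        ds.issues.append(
            f"completeness {ds.completeness:.0%} below required {min_completeness:.0%}"
        )
    return not ds.issues

# A "new" data type such as telematics often fails exactly these checks:
telematics = DatasetProvenance("telematics", "vehicle IoT", date(2021, 1, 15), True, 0.91)
print(quality_gate(telematics), telematics.issues)
```

Failing datasets carry their recorded issues with them, so the reason for rejection is itself documented and auditable.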

Not all data quality is the same

With a powerful, unifying and consistent analytics platform, organisations can already implement these points very well. The very question of data quality can be tackled in quite different ways – depending on whether it is asked in connection with analytics and machine learning or with classic management reports. Looking at the overall process, a particular focus should be on consistency from the data to the decision. Too many workarounds and breaks between the different tools in the chain lead to manual steps, shadow systems and, ultimately, governance that is hard to control.

Auditability also plays an important role here: Can I prove who made which decision for whom based on which data, which model version and which business rules? And could the data be used for this purpose? Automatic documentation, transparent comparison options for algorithms, and options for effective and agile team collaboration (keyword here is DataOps) supplement the skills required for the four pillars of trust.
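Such an audit trail can be sketched as an append-only log of decision records – one entry answering exactly the auditor's question of who decided what for whom, based on which data, model version and rule set. This is a minimal illustration with hypothetical field names, standing in for a tamper-evident store:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One auditable automated decision."""
    decided_at: str      # ISO timestamp of the decision
    decided_by: str      # system or user id that triggered it
    subject_id: str      # customer / patient the decision concerns
    decision: str        # e.g. "credit_denied"
    datasets: tuple      # provenance ids of the data that was used
    model_version: str   # exact model artifact, e.g. "risk-score-2.4.1"
    rules_version: str   # business-rule set applied on top of the model

audit_log = []  # stand-in for an append-only, tamper-evident store

def record_decision(rec: DecisionRecord) -> None:
    """Serialize and append; frozen records cannot be altered after the fact."""
    audit_log.append(json.dumps(asdict(rec)))

record_decision(DecisionRecord(
    decided_at=datetime.now(timezone.utc).isoformat(),
    decided_by="scoring-service",
    subject_id="cust-42",
    decision="credit_denied",
    datasets=("claims-2021-02", "telematics-2021-01"),
    model_version="risk-score-2.4.1",
    rules_version="underwriting-rules-7",
))
```

A simple query over the log by `subject_id` then reconstructs, for any individual decision, which data and which model version drove it – the core of provable auditability.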


Light into the black box

But isn't something still missing? Indeed: Does the algorithm itself do what it is supposed to do? In the third and final part of this series, we take a closer look at effectiveness – at how light can be brought into the "black box" of learning algorithms.

Read Next: 3 essential steps for AI ethics

About Author

Andreas Becks

Head of Customer Advisory Insurance DACH

Andreas Becks leads a team of insurance experts, data governance professionals and data scientists advising insurance clients on how to use analytics to generate value and drive transformation in a changing market. His main focus is on data-based innovation and industrialization of analytics. His expertise in artificial intelligence, and deep knowledge of business intelligence and analytics mean that he is well-placed to help insurers to reimagine their business models and drive cultural change.
