Traceability and trust: Top premise for ethical AI decisions


In the first part of this series, I argued that dealing with artificial intelligence (AI) and ethics is not a purely philosophical or sociopolitical issue. One thing is clear: The ethics debates will continue this year, and they will focus more on the realistic possibilities and risks of AI. Companies and organisations that want to automate decisions need to be prepared to answer the question of how those decisions are made.

Many questions demand many answers

As a customer, consumer, citizen or patient, I have a legitimate interest in understanding why I do not get the requested credit or insurance, or why a certain medical risk is assigned to me on the basis of data-driven diagnostics. What data was used for this? Have I given my consent to the use of my data at all? Were the decisions made without prejudice? Can an organisation really want such decisions to be taken without human control? And what would such control even look like?

The relevant issues for companies are therefore governance of the decision-making process, transparency and the ability to explain the decision. All stakeholders must be able to trust algorithm-driven and data-based decisions – without completely understanding algorithms, processes or decision rules. 

How can we address this question of traceability from a process and methodological point of view? How does a company achieve the necessary transparency? What does the term "governance" cover in this context?

Governance, yes! But what exactly does that mean?

In his report The Automated Actuarial, Scott Shapiro of KPMG presents the "four pillars of trust." His recommendations are generally valid and do not only apply to actuaries (even if they are particularly dependent on transparency and traceability). The four pillars are quality, resilience, integrity and effectiveness. Consequently, companies should pay attention to the following points.

    1. Show which data you have used in which quality. After all, all decisions are based on data. And the less control your company has over this data, the more important it is to have a clear idea of where it comes from, how it was created and how good it is. This is particularly important for the many "new" data types from the fields of telematics and IoT, but also for external data from open sources, for images or text.
    2. You should ask yourself how resilient your overall analytical process is. Do you only have a one-time lab process? Or is your analytics life cycle designed for the long term? The challenge here is to ensure governance and security along the entire analytical process chain, from data to automated decision making.
    3. Ensure the integrity of the data analysis. Document processes and the choice of your methods. Do they fit the question? Are they mathematically reasonable? Can you explain and justify the procedure?
    4. Does analytics do what it should (effectiveness)? Are the findings reliable? Are the resulting decisions nondiscriminatory?
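The first pillar, data quality, can be made concrete with a simple profiling step before data enters the decision pipeline. The following is a minimal sketch, not a production tool; the field names, the `QualityReport` structure and the completeness metric are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class QualityReport:
    """Hypothetical summary answering pillar 1: which data, in which quality."""
    source: str          # lineage: where the data came from
    total_rows: int
    missing_ratio: float # share of records with at least one missing required field

def profile(rows, required_fields, source):
    """Summarise the completeness of a batch of records."""
    missing = sum(
        1 for row in rows
        if any(row.get(field) is None for field in required_fields)
    )
    ratio = missing / len(rows) if rows else 0.0
    return QualityReport(source=source, total_rows=len(rows), missing_ratio=ratio)

# Illustrative use with two records, one of them incomplete.
report = profile(
    [{"age": 42, "claims": 1}, {"age": None, "claims": 0}],
    required_fields=["age", "claims"],
    source="telematics-feed",
)
```

Recording the source alongside the quality metrics is the point: the less control a company has over incoming data, the more valuable such a provenance-plus-quality snapshot becomes.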

Not all data quality is the same

With a powerful, unified and consistent analytics platform, organizations can already implement these points very well. The question of data quality alone can be tackled in very different ways, depending on whether it is asked in connection with analytics and machine learning or with classic management reports. Looking at the overall process, a particular focus should be on consistency from the data to the decision. Too many workarounds and breaks between the different tools in the chain inevitably lead to manual steps, shadow systems and, as a result, governance that is hard to control.

Auditability also plays an important role here: Can I prove who made which decision for whom, based on which data, which model version and which business rules? And was the data even permitted to be used for this purpose? Automatic documentation, transparent ways to compare algorithms, and options for effective and agile team collaboration (the keyword here is DataOps) round out the capabilities required for the four pillars of trust.
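The audit questions above translate naturally into a structured log entry written at decision time. This is a minimal sketch under stated assumptions: the field names, identifiers and the JSON representation are hypothetical, not a description of any particular platform's audit format.

```python
import json
from datetime import datetime, timezone

def audit_record(subject_id, decision, data_sources, model_version,
                 rule_set, decided_by, consent_verified):
    """Build one audit entry answering: who decided what for whom,
    based on which data, which model version and which business rules."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,           # for whom
        "decision": decision,               # what was decided
        "data_sources": data_sources,       # based on which data
        "model_version": model_version,     # which model version
        "rule_set": rule_set,               # which business rules
        "decided_by": decided_by,           # who (system or person)
        "consent_verified": consent_verified,  # was the data allowed to be used?
    })

# Illustrative entry for an automated credit decision (all values hypothetical).
entry = audit_record(
    subject_id="cust-4711",
    decision="credit_declined",
    data_sources=["crm", "external-bureau"],
    model_version="risk-model-2.3",
    rule_set="underwriting-rules-v7",
    decided_by="automated",
    consent_verified=True,
)
```

Because each entry names the model version and rule set explicitly, a later review can reconstruct exactly which combination produced a given decision, even after models have been retrained.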


Light into the black box

But isn't there something missing? Correct: Does the algorithm itself do what it is supposed to do? In the third and last part of my series, we take a closer look at the aspect of effectiveness, which is about how light can be brought into the "black box" of learning algorithms.


About Author

Andreas Becks

Head of Pre-Sales Insurance DACH

Together with his team of insurance experts, data governance professionals and data scientists, Dr. Andreas Becks advises insurance clients on the SAS analytics platform. His main focus is on data-based innovation on the one hand and the industrialization of analytics on the other. For 20 years Andreas has been designing innovative solutions for data-based decisions, information visualization and AI applications in various industries. He has been with SAS for more than five years in various expert and management positions covering customer experience, BI and analytics. Moreover, Andreas is a speaker at events, a blogger and an author of specialist articles. Prior to SAS, he held various senior positions in research and development, as a business and solution architect, and in the strategic product management of a software company. Andreas holds a degree in computer science, a PhD in artificial intelligence from Aachen Technical University, and an MBA from the University of St. Gallen.
