Do we need responsible AI before we will trust autonomous systems?


The strong growth of artificial intelligence (AI) and smart algorithms means that technology, in robotics for example, is increasingly able to learn independently of any human intervention, and can therefore operate more autonomously. Self-learning AI, however, raises plenty of questions and demands some effective preconditions. Above all, it requires responsible use, and that will take thought and care.

The idea of ‘responsible AI’ was the starting point for a discussion that I led together with Colin Nugteren of Notilyze during RoboCafé, under the theme ‘Robots meet data science’. RoboCafé is an initiative run by RoboValley that aims to bring together scientists working on robotics and related areas with entrepreneurs and decision-makers from business and the public sector who work in data science, data analytics and bots.

Practical applicability of AI

Many people still have little confidence in AI, and particularly in automated decision-making. For example, if AI is used to evaluate credit applications based on text or image recognition, how can applicants be assured that their application has been assessed fairly and impartially? How can we be sure that the AI system is not responding to errors in the data or in the automated assessment method, or that the algorithm itself has been trained correctly? These concerns can and must be addressed through the use of ‘responsible’ AI, not least to avoid stunting further development and innovation.
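To make this concrete, here is a minimal sketch, in Python, of one way to give an applicant an auditable explanation of an automated decision: train a transparent model and report each feature's contribution to the score. The feature names, data and figures are entirely hypothetical, and this illustrates the principle rather than any particular product's method.

```python
# A minimal sketch of an auditable explanation for one automated credit
# decision. All feature names, data and figures are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]

# Hypothetical historical applications (rows) and outcomes (1 = repaid).
X = np.array([
    [55_000, 0.25, 6, 0],
    [32_000, 0.60, 1, 3],
    [78_000, 0.15, 10, 0],
    [41_000, 0.45, 2, 2],
])
y = np.array([1, 0, 1, 0])

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Score one new applicant and break the decision down per feature.
applicant = np.array([[48_000, 0.35, 4, 1]])
z = scaler.transform(applicant)[0]

# For a linear model, coefficient * standardised value is that feature's
# contribution to the log-odds: a simple, inspectable 'reason code'.
for name, contrib in zip(feature_names, model.coef_[0] * z):
    print(f"{name:>15}: {contrib:+.3f}")
print("approval probability:",
      round(model.predict_proba(scaler.transform(applicant))[0, 1], 3))
```

Even a simple breakdown like this gives an applicant something concrete to question or contest, which is the essence of assessing a decision for fairness.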

If data scientists, end users and regulators are to be comfortable with the latest AI and robotics, statistical models and decisions must be as reliable as possible. Data scientists can now build very accurate models, and as a general rule they also want to understand how these models work so that they can explain them to particular target groups. End users, for example, want to know the basis for any decision and particularly whether it is reliable. Regulators want to protect end users as much as possible by requiring that decisions and models are fair and transparent.
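One concrete check a regulator might expect is a comparison of outcomes across groups. The sketch below, using hypothetical decisions and group labels, computes approval rates per group and the gap between them, a rough demographic-parity signal; it is an illustration, not a complete fairness audit.

```python
# A minimal sketch of a demographic-parity check: compare approval rates
# across groups. Decisions and group labels here are hypothetical.
def approval_rates(decisions, groups):
    """Return the share of approvals (1s) per group."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return rates

decisions = [1, 0, 1, 1, 0, 1, 1, 0]          # 1 = approved
groups    = ["A", "A", "A", "B", "B", "B", "B", "A"]

rates = approval_rates(decisions, groups)
print(rates)
# A large gap is not proof of unfairness, but it is a clear signal to
# investigate the model and the data behind it.
print("max gap:", max(rates.values()) - min(rates.values()))
```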


AI by design

The challenge, therefore, is to establish the best way to innovate while also developing solutions and products that are both reliable and transparent.

During the discussion at RoboCafé, it became clear that the overwhelming view was that the environment used to develop AI technology should be, above all, as reliable as possible. This means that AI technology must be reliable ‘by design’ and must guarantee privacy. The technology should also be easy to work with, because this encourages experimentation while ensuring that AI can be introduced and used in practice relatively easily.
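What ‘privacy by design’ can look like in code: below is a minimal sketch that pseudonymises direct identifiers with a keyed hash before any data reaches the analytics environment. The field names are hypothetical, and a real deployment would keep the key in a secrets store and also review what the remaining fields still reveal.

```python
# A minimal sketch of privacy by design: pseudonymise direct identifiers
# with a keyed hash before data reaches the analytics environment.
# Field names are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # illustration only, never hard-code

def pseudonymise(value: str) -> str:
    """Replace an identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "J. Jansen", "city": "Delft", "credit_score": 710}
safe_record = {**record, "name": pseudonymise(record["name"])}
print(safe_record)
```

Because the token is stable, analysts can still join and aggregate records without ever seeing the underlying identity.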


In my view, the best, if not the only, way to achieve this is through a reliable and flexible analytics platform. A good platform, one that supports the analytics lifecycle from data management right through to model deployment, offers a reliable and transparent way to develop and use AI. Transparency comes from a single, shared and highly visual environment that encourages collaboration between groups and individuals. It also speeds up time to value, a significant factor for any business.
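As an illustration of the transparency a shared platform can provide, the sketch below records a lineage entry for a model version: which data it was trained on (by hash), which metrics it achieved and when it was registered. The field names and file are assumptions for the example, not any specific platform's schema.

```python
# A minimal sketch of a model lineage record: enough to trace any decision
# back to the exact data and metrics behind a deployed model.
# All field names and values are illustrative.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical training data, written here only to keep the example runnable.
Path("applications.csv").write_text("income,debt_ratio\n48000,0.35\n32000,0.60\n")

def lineage_record(model_name, version, training_data, metrics):
    data_hash = hashlib.sha256(Path(training_data).read_bytes()).hexdigest()
    return {
        "model": model_name,
        "version": version,
        "training_data_sha256": data_hash,
        "validation_metrics": metrics,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }

print(json.dumps(
    lineage_record("credit-risk", "1.4.0", "applications.csv",
                   {"auc": 0.87, "approval_rate_gap": 0.03}),
    indent=2))
```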

A platform also supports responsible AI by providing a secure environment. Many platforms, and certainly the best ones, can be deployed either in the cloud or on premises. An on-premises deployment gives an organization direct control over security, allowing it to protect both data and intellectual property more effectively. As a side benefit, it may also help to control costs, a significant issue for any organization, but particularly for smaller ones with limited budgets. Administration interfaces and scripts should support efficient management and automation of routine tasks.
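In the same spirit, administration lends itself to simple scripting. The sketch below polls hypothetical model services and reports any that are down; the URLs and the /health convention are assumptions for illustration, not a real product's API.

```python
# A minimal sketch of scripted administration: poll hypothetical model
# services and report any that are down. The URLs and the /health
# convention are assumptions, not a real product API.
import urllib.request

SERVICES = {
    "scoring": "http://localhost:8080/health",
    "registry": "http://localhost:8081/health",
}

for name, url in SERVICES.items():
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = "up" if resp.status == 200 else f"status {resp.status}"
    except OSError as exc:
        status = f"down ({exc})"
    print(f"{name}: {status}")
```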

Responsible AI requires organizations to take responsibility

I left the discussion at RoboCafé having drawn one very clear conclusion. If every AI system used transparent data management, transparent models and clear analytics implementation methods, we would very definitely be able to improve our lives through AI. There are systems available that will allow this, including advanced analytics platforms. Developing responsible AI, however, requires the organizations involved to take responsibility for doing so.

Realising value from AI, especially in a way that is responsible and ethical, is challenging. There is no denying this. It would, however, be much more irresponsible to discard all the possible benefits from AI, including medical, societal and economic, just because the task was difficult. We have a responsibility to rise to the challenge.

Next: I recommend you read the blog series on AI interpretability written by my colleague Ilknur Kaynar Kabul.


