According to the Forbes and SAS panel of experts, the biggest technological revolution in humankind’s history is happening right now. To be a part of it, companies need to build their own sets of ethical artificial intelligence principles. Tackling the ethical issues is the key to successfully harnessing the incredible potential of AI.
A lot of good has already been done for society with the help of artificial intelligence. Measures to curb the coronavirus pandemic based on AI data analysis are just one example. However, AI raises big ethical issues as well. The competitive pressure on companies to deliver personalised customer experiences at low cost is immense. At the same time, vigilant consumers are demanding that companies guarantee reliable and transparent AI operations. How should companies and organisations act in this situation?
These issues were discussed at a recent web conference organized by Forbes in collaboration with SAS. Journalist Mark Barton chaired the event, hosting keynote speakers Bernard Marr, international best-selling author, futurist, and strategic business and technology advisor, and Ieva Martinkenaite, Vice President of Analytics and AI and board member at Telenor. A virtual roundtable discussion included experts on AI and its implementation:
- Laetitia Cailleteau, Accenture’s Managing Director and data and AI lead for Europe.
- Kalliope Spyridaki, Chief Privacy Strategist for legal compliance and public policy with SAS EMEA.
- Iain Brown, PhD, Head of Data Science at SAS UKI, adjunct professor and author.
The discussion provided a comprehensive picture of the field of ethical AI. A recording of the conference can be viewed here: Artificial Intelligence and the Ethics Mandate. Some appetizers:
The good, the bad and the neutral
Bernard Marr started by saying that machine learning and AI are giving us tremendous tools to transform every aspect of our lives and the way we organize our society. In his opinion, AI is the most powerful technology humans have ever had access to. But with this power comes huge responsibility. Machine vision, natural language processing, smart automation – they all have much to offer, but there is always a flip side as well.
One of Marr’s examples concerned natural language processing. Amazon’s virtual assistant Alexa is useful to many, and some people even say “good night” to it. AI can also write very good content – many of Forbes’ analyst reports are automated using machine learning. The other side of the coin is fake news distributed to undermine democracy. Deep fake videos can be so convincing that humans struggle to tell them from the real thing. AI can spot them, though.
According to Marr, self-learning AI can do things that we humans could not even think of. Even creativity, often thought to be the last preserve of humans, is no longer out of AI’s reach. “How does this make us feel? Do we feel inferior to machines as they outsmart us?” he asked.
Telenor’s Ieva Martinkenaite shared her conviction that AI technologies as such are morally neutral. “The development, the deployment and the use – or more precisely, the overuse, misuse or underuse of machine learning – carry the ethical problems. Unethical use of AI means underusing it when it could replace tedious work, and overusing it to monitor and surveil citizens,” she stressed.
Several ethical issues remain unresolved
Marr listed some fears and concerns that we humans may have about AI. We might ask ourselves whether we will lose our jobs to AI.
“Some people may lose their jobs, and the jobs of many people will change. Every job will be augmented by AI, even the jobs of doctors and lawyers. At Shell, for example, every contract is already drafted by AI. Still, most studies show that AI will create more jobs than it will take, but the transition will be difficult,” he said.
Another common worry is privacy. There are comprehensive regulations regarding AI privacy in some parts of the world, but according to Marr, there is still at least one challenge even here: We suppose that the people using AI have full capacity to make the necessary choices. “What about children playing with AI toys? Do they know what they are doing?” he asked.
Bias and responsibility
Bias is a big issue in AI: gender, age and cultural biases all skew outcomes when they are present in the data that AI learns from. But, as Marr pointed out, carefully built AI can also help to promote diversity and reduce bias.
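To make this concrete, here is a minimal sketch – our illustration, not an example from the conference – of one common bias check, demographic parity: comparing a model’s rate of positive decisions across groups. The data, column names and tolerance are all hypothetical.

```python
# Minimal sketch of a demographic-parity check (hypothetical data):
# does the model's approval rate differ across groups?
import pandas as pd

# Hypothetical model outputs: each row is an applicant with a
# protected attribute and the model's yes/no decision.
df = pd.DataFrame({
    "gender":   ["f", "m", "f", "m", "f", "m", "f", "m"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],
})

# Positive-decision (approval) rate per group.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# Demographic parity gap: difference between the highest and lowest
# group rates. A large gap is a red flag worth investigating.
gap = rates.max() - rates.min()
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # hypothetical tolerance
    print("Warning: approval rates differ substantially across groups.")
```

A check like this does not prove discrimination on its own, but it turns a vague worry about biased data into a number a team can monitor.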
Responsibility and accountability raise further ethical concerns. Marr pointed out that these problems cannot be solved simply by putting a human in the loop to make the decisions – AI is too fast for that. Think of autonomous cars: their complex systems react in a split second to avoid an accident. Who is responsible if the car harms a person to protect its passengers? This remains an unresolved question.
“And who owns the intellectual property created by AI? Algorithms can compose music, create amazing poetry and deep fake paintings. The AI-generated painting Portrait of Edmond de Belamy was sold by Christie’s for $432,500. Who is the real creator of these works of art?” Marr asked.
Ethical AI is a question of trust
The participants agreed that, with so many concerns surrounding AI, creating trust and transparency is vital for companies building AI-augmented services.
“We can’t carry on collecting as much data as we can without giving anything back. If we want people to share their data with us, we must think about ways to use AI to help them. For example, a bank could find out that its customer is overpaying for insurance. Perhaps it should share the information with the client,” Marr suggested.
Explainability relates to transparency and is one path to trust. The black box challenge is still there: self-learning AI can base its decisions on premises unknown to its users. But according to Marr, progress has been made here, and we must push for more explainable AI.
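What can “more explainable AI” look like in practice? Here is a minimal sketch – our illustration, not something shown at the conference – using scikit-learn’s permutation importance, a model-agnostic technique, to surface which features a black-box model actually relies on. The dataset and model are stand-ins.

```python
# Minimal sketch: opening up a black-box model with permutation
# importance, a widely used model-agnostic explainability technique.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; in practice this would be real business data.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box": accurate, but its internal decision logic is opaque.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the model's score drops. Big drops mark the features the model
# actually relies on, exposing premises that would otherwise stay hidden.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```

The output is a ranked list of the inputs driving predictions – the kind of evidence a company can show users and regulators when asked why its AI decided what it did.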
“To reduce biases, the people who build AI must represent society. We must have a better gender, age and cultural mix of employees in companies building and deploying AI. We need new regulation, and we must update old regulation. One of the best things an organisation can do is to create an ethics council. It can also recruit people who can help it discuss the ethical challenges and build a set of ethical AI principles,” Marr said.
Progress being made
Work has already been done on sound ethical AI principles. According to Marr, the OECD AI Principles are a good starting point for any organisation’s own ethical AI principles. The UN Sustainable Development Goals are another set of principles worth striving for. Martinkenaite added the EU Ethics Guidelines for Trustworthy AI to the recommended reading list.
“Developing AI technology is a fine balancing act between accelerating innovation and tackling risks, and businesses need more clarity on their legal boundaries to be able to invest in AI technologies,” said Martinkenaite. She also stressed the need to maximise investment in AI education and to build partnerships in which governments, educational institutions and industry work together.
What else did the speakers and panellists have to say? Find out yourself by watching the recording here: Artificial Intelligence and the Ethics Mandate.