Ethics in AI: two sides of the same coin


Artificial intelligence undoubtedly remains a key trend when it comes to picking the technologies that will change how we live, work and play in the near future. As always, with great power comes great responsibility. There are many benefits that AI solutions bring to the world. But poor design or misuse may cause irreparable harm to society. That’s why the development of AI systems must always be responsible and focused on public benefit.

AI ethics is a set of values, principles and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and deployment of AI technologies. Over the years, scientists in the field have outlined the ethical issues of human use of AI. In this article, I will go through the most common dilemmas.

1. Is AI our new Big (data) Brother?

There is a general discussion about privacy and surveillance in information technology. All data collection and storage is now digital. Our lives are increasingly digital. And there is more and more sensor technology in use that generates data about nondigital aspects of our lives. The data trail we leave behind is how our “free” services are paid for. But we can never be sure of the way that trail will be used in the future.

AI technology developers collect, process and utilize massive amounts of personal data. More often than not, they capture and extract big data without gaining the proper consent of the data owner. Quite often, the use of big data reveals – or puts at risk – personal information, compromising individual privacy. The good news is that we have the techniques to protect the data. Though it requires more effort and cost, an AI system can process its data inputs while still respecting users' privacy: encrypting all communications, anonymizing the data and ensuring its authenticity.
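One common anonymization technique is pseudonymization: replacing a direct identifier with a keyed hash so records can still be joined for analysis, but the original value cannot be recovered without the key. Here is a minimal sketch in Python; the `SECRET_KEY` and the `pseudonymize` helper are hypothetical names for illustration, and in practice the key would live in a key-management system, not in source code.

```python
import hashlib
import hmac

# Hypothetical secret held by the data processor; in a real system this
# would come from a key-management service, never from source code.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always maps to the same token, so records can still
    be linked for analysis, but the original value cannot be recovered
    without the key.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "jane.doe@example.com", "purchase_total": 42.50}
record["email"] = pseudonymize(record["email"])  # identifier is now a 64-char token
```

The analytical value of the record (the purchase total, the ability to group by customer) is preserved, while the identifying field itself is protected.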

One of the major practical difficulties is enforcing regulation, at both the state and individual levels. While the EU General Data Protection Regulation has strengthened privacy protection, the US and China prefer growth with less regulation, likely in the hope that this provides a competitive advantage. Which option is better? Time will tell. By 2024, 60% of the data used for the development of AI and analytics solutions will be synthetically generated.*

2. Bias and manipulation. Can AI be neutral?

One could argue that intelligent machines do not have a moral compass or a set of principles to follow. But without any doubt, these systems are vulnerable to biases and errors introduced by their human makers. One concern is the inherent bias in the data used to train the system. The output quality of any AI analysis depends heavily on the quality of the provided data – garbage in, garbage out. So if the data is already biased, the program will reproduce that bias. And once an AI develops a certain bias toward or against race, gender, religion or ethnicity, it is nearly impossible to get rid of it.
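The garbage-in, garbage-out effect is easy to see on a toy example. The sketch below (the data and helper names are invented for illustration) measures the selection rate per group in some skewed historical hiring records; any model that simply learns "how often was this group hired before?" will reproduce the historical skew exactly. The gap between the rates is the demographic parity difference, one standard fairness metric.

```python
from collections import defaultdict

# Invented historical hiring records: (group, hired). The data itself is
# skewed -- group "B" was hired far less often in the past.
history = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

def selection_rates(records):
    """Fraction of positive outcomes per group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

rates = selection_rates(history)
print(rates)                                       # {'A': 0.75, 'B': 0.25}
gap = max(rates.values()) - min(rates.values())    # demographic parity gap
print(gap)                                         # 0.5
```

A model trained naively on this data inherits the 0.5 gap; detecting it is the easy part, and removing it without distorting legitimate signals is where the real difficulty lies.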

It's important to remember that specific bias is often used on purpose. The capacity of AI systems to make individual experiences better and to personalize digital services has the potential to improve consumer life and service delivery. But it is equally possible to use that information to manipulate behaviour, online and offline, in a way that undermines autonomous rational choice. Given users' intense interaction with data systems and the deep knowledge about individuals this provides, users are all the more vulnerable to manipulation. With sufficient prior data, algorithms can target individuals or small groups with just the kind of input that is likely to influence them. Once again, it's a blessing and a curse.

Many groups have discussed and proposed ethical guidelines for developing and deploying AI. Yet most of these guidelines are created by organizations concentrated in North America and Europe, so the field is biased towards Western values. People who work in AI research, no matter where they come from, need to keep in mind that the development and use of AI span the entire globe. So AI has to rely on values that are universal. By 2024, 60% of AI providers will include a means to mitigate possible harm as part of their technologies.*

3. Why should we trust AI?

Many AI systems rely on machine learning techniques that extract patterns from a given data set. With these techniques, the AI “learns” to capture patterns in the data and weights them in whatever way proves useful for the decisions the system makes. But the programmer does not really know which patterns in the data the system has actually used.

On one hand, this can be a big problem when we rely on this technology for critical decisions such as who gets a loan, who gets hired, or whom a self-driving car should endanger (the famous trolley problem). On the other hand, we should also be aware that AI is already making a lot of decisions in our lives – from choosing the next movie we may like on Netflix to blocking a transaction that funds terrorism. As AI systems get smarter, so do the threats against them. They are harder to detect, more random in appearance, more adaptive to systems and environments, and more efficient at identifying and targeting vulnerabilities.

Improved AI “faking” technologies now turn what was once reliable evidence into unreliable evidence. This has already happened to digital photos, sound recordings and video. It will soon be quite easy to create “deep fake” text, photos and video material with any desired content. Detecting these malicious attacks will only get harder over time. Is it more ethical to stop using such systems because we can’t fully explain how their decisions were reached? Explainable AI needs to be part of the equation if we want AI systems we can trust. But it’s never black or white. In 2023, 20% of successful account takeover attacks will use deep fakes to socially engineer users into turning over sensitive data or moving money into criminal accounts.*
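One widely used model-agnostic explainability technique is permutation importance: shuffle one input feature and see how many predictions flip. The sketch below uses an invented `black_box` scoring function as a stand-in for a trained model we can query but not inspect; all names and the toy data are assumptions for illustration.

```python
import random

random.seed(0)  # make the toy experiment reproducible

def black_box(income, age, postcode_digit):
    # Stand-in for an opaque trained model: we can query it, not inspect it.
    # (Internally it happens to ignore postcode_digit entirely.)
    return 1 if income > 50 and age > 21 else 0

# Toy dataset: 500 random applicants.
data = [(random.uniform(0, 100), random.randint(18, 70), random.randint(0, 9))
        for _ in range(500)]

def permutation_importance(feature_index):
    """Fraction of predictions that flip when one feature column is shuffled.

    A large value means the model leans heavily on that feature; zero means
    the model ignores it.
    """
    baseline = [black_box(*row) for row in data]
    shuffled = [row[feature_index] for row in data]
    random.shuffle(shuffled)
    changed = 0
    for row, new_val, old_pred in zip(data, shuffled, baseline):
        perturbed = list(row)
        perturbed[feature_index] = new_val
        if black_box(*perturbed) != old_pred:
            changed += 1
    return changed / len(data)

for i, name in enumerate(["income", "age", "postcode_digit"]):
    print(f"{name}: {permutation_importance(i):.2f}")
```

Shuffling `postcode_digit` changes nothing, revealing that the model never uses it, while shuffling `income` or `age` flips many predictions. Probing a black box this way is a first step toward the kind of explainability that makes AI decisions auditable.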

4. Will AI replace human workers?

The most immediate concern for many is that AI-enabled systems will replace workers across a wide range of industries. As has happened with every wave of technology, we see that jobs are not destroyed. Instead, employment shifts from one place to another, creating entirely new categories of employment. We can and should expect the same in the AI-enabled economy.

AI will replace specific categories of work, especially in transportation, retail, government, professional services and customer service. On the other hand, companies will be free to put their human resources to better, higher-value tasks instead of taking orders, resolving simple customer service complaints or doing data entry. Certainly, machines can beat humans at some tasks, but there are tasks machines can’t do at all. The goal of AI is not for humans to be replaced by machines, but for them to work alongside machines, concentrating on more challenging and satisfying work. By 2023, overall analytics adoption will increase from 35% to 50%, driven by vertical- and domain-specific augmented analytics solutions.*

Is it worth it?

Even keeping in mind all the unknown and sometimes scary aspects mentioned above, the answer is absolutely YES. AI-based advancements are making our lives easier and better every day. One of the most beneficial uses of AI for mankind is health care: it offers speedy treatment strategies, helps discover new medicines and test vaccines in a shorter time, monitors patients’ data from wearable sensors and interprets medical imaging. AI is used in banking to create systems that learn which types of transactions are fraudulent and which transfers may be attempts at money laundering, and to make credit decisions immediately. AI can now write articles, compose classical music, book us an appointment for a haircut, translate a menu in a foreign country or even predict which beer will taste best to us.

What’s next?

The technology keeps evolving to provide ever more robust and intelligent systems. AI is becoming a necessary part of our daily lives, and the coming years will bring advances that make them easier still. More and more tech companies around the world are introducing Responsible AI policies – focused on privacy, transparency and fairness – so that the advancement of AI is driven by ethical principles that put people first. Without any doubt, AI is here to stay. And our task is to make the best of it.

* All the data comes from Gartner’s series of Predicts 2021 research reports.

Want to learn more about how to mitigate the risks of potential AI misuse? We will discuss the risk of human bias, the explainability of predictions, decisions made with machine learning algorithms, and the importance of monitoring the fairness and transparency of AI applications in an upcoming webinar on April 13. Register here!


About Author

Agnieszka Piechocka

Agnieszka Piechocka works as a Customer Advisor for SAS Poland. She is part of the fraud practice team, advising customers on the use of advanced analytics to prevent fraud and other illegal activity. She is also enthusiastic about getting involved in #data4good projects, and finding ways to use analytics to help humanity.
