How AI is re-defining the scope of ethics


For some of us, AI has been around for a while: machine learning, deep learning and cognitive computing have been developing since the 1950s. What has changed recently is the sheer volume of data available. That volume has made it possible to train AI-based models in ways that were not previously achievable, and it has also made AI models essential for making sense of the world and these vast quantities of data.

This, in turn, means that the impact of AI on both processes and individuals has suddenly grown. At the same time, we have to develop a better understanding of how to govern and manage AI systems ethically, so that our oversight keeps pace with the technology. A recent study conducted by Forbes Insights, “AI Momentum, Maturity and Models for Success,” indicates that ethics is now front of mind for most companies as they consider how to use AI. That is testament to the impact AI is going to have on the market, and it is why impact assessments and ethical reviews are needed to make sure that AI is used in a way that’s conducive to a positive work environment as well as cost savings.

Impact on the workforce

There is no question that AI will have a huge impact on organisations. A recent Forbes study, for example, suggested that it could lead to 38% profit gains by 2035. By anyone’s standards, that is a big increase in profitability. Making those gains is going to require significant changes in how organisations operate, largely driven by automation. From the point of view of the workforce, these effects are likely to be both positive and negative.


A number of commentators have suggested that AI will replace workers; in other words, machines will take over work that is currently done by humans. We can think of a process as a series of steps, from listening and sensing (or data gathering), through making sense of the data, to acting on it. Much of the sensing is already done by machines, with sensors now embedded in everything from cars to factories and beyond. The biggest change is therefore likely to be in the steps for understanding and acting upon the data. Commentators focused on the negatives stress that this could result in large-scale redundancies, but others suggest that new jobs will emerge to manage and work with AI systems.

Perhaps the biggest question about this is how we will ensure that the machines that replace humans do what is expected and wanted. In other words, what is the process for governing AI applications and making sure they operate effectively?


A framework for ethics

One framework for considering ethical AI development is known as FATE. This acronym stands for:

  • Fair, or removal of bias and corporate discrimination. The system must help to remove human bias, and must not build in new machine biases of its own. The early face recognition models developed in Silicon Valley, for example, tended to be very good at recognising white faces, but much less good at Asian or African American faces. That exposed the nature of the dataset: the types of photos available online for training the model. The question to consider here is: who decides that AI outcomes are ‘right’? The answer often depends on the industry and context (a simple fairness check is sketched after this list).
  • Accountability, or ownership of decision-making, and the willingness to take responsibility. Organisations must consider where accountability for decisions sits, and this often comes down to culture.
  • Transparency, which for AI means avoiding ‘black box’ processes and having a clear, end-to-end understanding of how the system works. This builds greater trust in the decisions the system makes.
  • Explainability, or the ability to explain and make sense of a decision. This is about model interpretability, which must be at the heart of all models. It is closely linked to transparency, because a transparent process is likely to produce models that can be explained. There is usually a trade-off between accuracy and interpretability: more complex models may be more accurate, but they are also usually harder to understand and explain.

These four elements create a framework for effective governance of AI systems.
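
To make the ‘Fair’ and ‘Accountability’ elements a little more concrete, here is a minimal Python sketch of the kind of check a governance process might run before a model goes live. Everything in it is an assumption for illustration: the hypothetical held-out test set with a `group` column (for example, a demographic attribute), the column names, the log file name and the 0.05 tolerance are not drawn from the Forbes study or any particular toolkit.

```python
import json
from datetime import datetime, timezone

import pandas as pd


def fairness_report(test_df: pd.DataFrame, predictions: pd.Series,
                    label_col: str = "label", group_col: str = "group",
                    max_gap: float = 0.05) -> dict:
    """Compare accuracy across groups and flag large gaps (a simple 'Fair' check)."""
    correct = predictions == test_df[label_col]              # per-row hit/miss
    per_group = correct.groupby(test_df[group_col]).mean()   # accuracy per group
    gap = float(per_group.max() - per_group.min())
    report = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "accuracy_by_group": {str(k): round(float(v), 3) for k, v in per_group.items()},
        "max_accuracy_gap": round(gap, 3),
        "within_tolerance": gap <= max_gap,  # the tolerance is a policy choice, not a standard
    }
    # Accountability: persist every review so a named owner can sign off on it.
    with open("fairness_review_log.jsonl", "a") as log:
        log.write(json.dumps(report) + "\n")
    return report
```

The code itself is the easy part; the governance question is who sets `max_gap`, which groups are compared, and who is accountable for acting when the check fails.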

Augmenting human efforts

One of the positive aspects of AI in the workplace is likely to be that it frees people from ‘grunt work’ and allows them to do more rewarding work. The recent Forbes Insights study indicated that 64% of respondents strongly or completely agree that they are already seeing this effect, with employees focusing on more strategic rather than operational tasks thanks to AI. Some of that more rewarding work is expected to involve working with the new AI systems to ensure that they are effective.

AI systems have the huge advantage that they can augment human efforts, but sometimes human efforts are also needed to augment AI. AI systems can be developed to self-govern, but they still need oversight to ensure that their governance decisions are correct. Many of these decisions will boil down not to what is possible, or what can be done, but to what should be done, and that is a very human question.
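
One way to make that oversight concrete is a simple confidence gate: the AI acts on routine cases and escalates the rest to a person. The sketch below is illustrative only; the `decide` function, the model interface returning an action and a confidence score, and the 0.8 threshold are all assumptions rather than part of any particular product or the studies cited above.

```python
from dataclasses import dataclass
from typing import Callable, Tuple


@dataclass
class Decision:
    case_id: str
    action: str
    confidence: float
    needs_human_review: bool


def decide(case_id: str, features: dict,
           model: Callable[[dict], Tuple[str, float]],
           review_threshold: float = 0.8) -> Decision:
    """Let the model handle routine cases, but route uncertain ones to a person."""
    action, confidence = model(features)           # hypothetical model interface
    needs_review = confidence < review_threshold   # 'should we?' stays with a human
    return Decision(case_id, action, confidence, needs_review)


# Anything the model is unsure about lands in a human review queue.
review_queue = []
decision = decide("case-001", {"amount": 1200}, model=lambda f: ("approve", 0.62))
if decision.needs_human_review:
    review_queue.append(decision)
```

Where the threshold sits, and what reviewers do with escalated cases, is exactly the “what should be done” question described above.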

Ethics featured highly in a recent global survey that Forbes undertook on behalf of SAS, Intel and Accenture; read the full report here.


About Author

Iain Brown

Head of Data Science SAS UK&I / Adjunct Professor of Marketing Analytics

Dr. Iain Brown (Twitter: @IainLJBrown) is the Head of Data Science at SAS UK&I and Adjunct Professor of Marketing Analytics at the University of Southampton, working across the Financial Services sector and providing thought leadership in Risk, AI and Machine Learning. Prior to joining SAS, Iain worked in the Risk department of one of the largest UK retail banks.

6 Comments

    • Iain Brown

      Hey Michael, a philosophical point. Finland has already implemented a universal income as a solution to automation, fewer jobs and lower wages.

  1. “Ethics and morals relate to “right” and “wrong” conduct. While they are sometimes used interchangeably, they are different: ethics refer to rules provided by an external source, e.g., codes of conduct in workplaces or principles in religions. Morals refer to an individual's own principles regarding right and wrong.”

    This paper surveys people whose view of the ethics of AI includes only market values. The current use of AI, as evidenced by the behaviour of the largest deployers of AI (Google, Facebook, ...), prioritises market values over moral principles. However, the paper does not identify this bias and limitation.

    • Iain Brown

      Hey Michael, thank you for your valuable feedback, certainly a valid point and something to be considered in future studies perhaps.

  2. You seem to have overlooked the fact that some people do not and will not have the mental aptitude to work with AI systems, and the 'grunt work', as you put it, is all they would be able to do. What happens to these people? Of the thousands of redundancies caused as a direct result of AI, the vast majority will fail to find work that suits their mental and/or physical capabilities. There will be nothing ethical about the support they will not receive when homelessness suddenly hits record levels. It won't be ethics that solves that problem, or all the other social problems that AI will inevitably create.
    In my opinion, the introduction of AI will be first and foremost about gains in profit. Ethics will play a bit part; as long as the profit margins remain high, everything and everyone else will be inconsequential.

    • Iain Brown

      Hey Simon, the ‘grunt work’ in this instance specifically refers to the repetitive or operative tasks that, if removed, could free an employee's time to focus on more strategic ones. I personally believe ethical AI should do right by society as a whole, and if used in the right way it should empower all employees.

