Artificial Intelligence (AI) adoption: similarities, differences and concerns

Technologies have come and gone since the first Industrial Revolution. We have always (or almost always) started out suspicious, then absorbed them, used them and built on them to do more work more efficiently. Somehow, though, artificial intelligence (AI) is different. There seems to be more fear about its impact, and more concern about how it will be used, than there was with earlier technologies.

Issues with AI

AI has crept up on us less conspicuously than most technologies. PCs, phones and older technologies were generally decades in the making, went through several incarnations, and gave us time to adjust. AI has also been decades in the making, but until quite recently it was not very effective, and it was used mostly by specialists, so it never became familiar. Now it feels as if it has burst onto the scene, with effective chatbots and algorithms suddenly popping up all over the place.

AI is also a huge issue for jobs and job security. The general feeling is that it will create new jobs, but also make large numbers of existing ones unnecessary. A number of studies suggest that the job market will remain about the same size overall. The problem is that we simply do not know what the new jobs will look like, or how much retraining will be needed to move from old to new. That uncertainty makes us more nervous about whether and how our own jobs will be affected.

Finally, there are ethical issues. Most of us have grown up with science fiction stories and films like I, Robot. We know, or we think we know, what can happen with AI. While the truth is likely to be a long way from fiction, there are still justifiable concerns about how the mountains of data held about each of us will be used by AI systems. It will certainly take time for AI to earn our trust.

Regulating AI

Is regulation the answer? And if so, how should (and could) AI be regulated? Should regulations focus on the AI itself, or simply require data protection, transparency and fairness, regardless of the methods used? The latter seems more sensible. It is already possible, for example, to regulate how data is collected, stored and used. On request, companies must already provide clear and transparent explanations of the basis for any decision-making. Perhaps we could extend this to require pre-emptive explanations of AI systems and how they function. But perhaps we are missing the point. Whether a decision is made by an algorithm or a human, the basis for it should be clear and transparent.

There are questions, however, about how an AI system learns, and whether the basis for algorithmic decisions will remain clear as the algorithm develops. Further regulation may be necessary, but the onus is surely on companies using AI systems to ensure that they understand what is happening, and to monitor their systems very carefully. What is certainly clear is that AI cannot be left alone. A driverless car recently killed a pedestrian, even though this is supposed to be impossible. Another failed to spot a lorry last year, and its human driver was killed as a result. AI must be monitored and managed to ensure that it continues to do what we want. And that, surely, must be the responsibility of the companies using it, not the regulators.

Taking responsibility for change

How far, though, should this responsibility go? We might also suggest that companies that adopt and embed AI from an early stage (big tech companies like Google, Facebook and Amazon, for example, or big software vendors) should be required to engage with its societal consequences too. Perhaps we should expect them to create training programs for some of the displaced workforce, to create a certain number of new jobs each year, or to engage with educators to increase understanding of AI and ethics. They might reasonably see this as being in their own interest, as a way to ensure that the workforce they need is available and that ethical questions are resolved. If they do not, however, would we want to force them to take these actions?

How far, and how fast?

These questions suggest that the biggest issue is not whether we need to regulate AI, but how far we can expect the companies using and benefiting from it to engage with the consequences, and how far governments will need to step in. Regulation and supervision are necessary, but will they be voluntary or enforced? Perhaps the best solution at present is to wait and see what proves necessary.

Educating ourselves is the best way to manage concerns

At the end of the day, these questions are not easy, and they cannot be answered by a small group of people or organisations. Everyone affected by AI should start to gain a basic understanding of it and get involved in this conversation, because fear on its own advances nothing. We all need to understand and think about AI, because it is fundamentally changing our lives and industries.

For more information about artificial intelligence, get the free e-book The enterprise AI promise: path to value.


About the Author

Massimiliano Cea

Senior Associate Presales Engineer, AI & Advanced Analytics

Massimiliano Cea holds a Bachelor's and Master's Engineering Degree from Polytechnic of Milan, with an exchange experience at Yonsei School of Business in Seoul. He is strongly motivated by the challenge of using big data and analytics to improve businesses and people's lives. He works in the Italian and EMEA AI & Advanced Analytics team as a Senior Associate Presales Engineer, supporting organizations across industries in leveraging their data to make better business decisions.
