Much of the discussion about AI centres on what the algorithms can do, as well as the potential for change or disaster. But what about the team behind the algorithms? Few discussions have focused on this most essential group: not just the modellers, but also those who source the data, the trainers, the compliance teams, and those responsible for strategic thinking and for reskilling the people who will work with the algorithms.
This is a familiar pattern. Over many years, we have seen the same approach with countless new technologies: a sense that introducing and implementing a new technology is all about the technology. This is fundamentally wrong. Like any change, starting to use a new technology is all about people.
Unpicking the proposition
Lest anyone think that this just means providing a bit of training for the most crucial team members, let us unpick this issue a bit more.
First, there is no question that reskilling will be important. If we know one thing about the likely impact of AI, it is that jobs will change. Some, around 12 percent, are expected to disappear, but other jobs will be augmented and changed, and plenty of new ones created, many of which do not yet exist. These will need new and additional skills. Partnerships between business and academia are already starting to develop skills training programmes for the future. A bit of on-the-job training seems unlikely to be sufficient to reskill the workforce. Companies will need to think bigger than that, and also start early. It is no good trying to skill up only once the technology is being introduced.
Companies that lead on the use of advanced analytics, including machine learning and AI, do not just focus on individual skills, however. They have made fundamental changes, breaking down barriers and silos between teams, and encouraging cooperation across departments and divisions. One of the key barriers to break down is that between data scientists and the business. This helps to ensure that analytics is being used to answer important business questions, rather than things that are interesting, but ultimately of little value. Cooperation will be key in the future – but so will an open and sharing organisational culture.
Any company treating AI as “more of the same” therefore needs to rethink completely. The culture around data handling and use is changing. The advent of the General Data Protection Regulation in Europe is forcing a reassessment of how personal data can be used, including for analytics. Combined with changes in what is possible as new uses for AI emerge, it is important that the way we think about data also changes. In particular, we need to consider how we support better governance and increasing transparency, both for regulatory compliance and for building a stronger, more trusting relationship with customers. This, fundamentally, means cultural change.
Where, though, does cultural change start and finish? It is important that executives practise what they preach, and very definitely “walk the talk.” Individuals who are interested in AI, however, can also act as advocates, sharing information and encouraging discussions. We must recognise that we are all far more likely to believe a peer than a senior manager with a potential hidden agenda. Views on ethics are often built one case, and one discussion, at a time, and individuals have a key role to play in that. There is a place for both top-down and bottom-up approaches to change.
Managing change and changing management
What this adds up to is the importance of considering the implementation of a new technology as a change project, to be managed as such. The introduction of AI will lead to some very real changes in how organisations think and operate, and this means both managing change and changing management methods and thinking.
Companies need a certain amount of analytic maturity before they can even think of getting genuine value from organisation-wide adoption of AI. They may be able to deliver a few isolated use cases before that, but this maturity is essential for full-scale adoption. It comes from taking time to embed an analytical approach, which essentially means one of curiosity and cooperation, into the whole organisation, and to ensure that the organisation has the necessary technical skills and capabilities.
People are at the heart of any change, whether technological or otherwise. Perhaps the most fundamental aspect, often forgotten until too late, is the importance of getting acceptance for change. With so many ethical questions around AI, this could be the most crucial angle of all.
Adrian Jones’ blog post was inspired by “The 5 Things Your AI Unit Needs to Do” by Alessandro Di Fiore, Elisa Farri and Simon Schneider. Find the original article, as well as those about other important aspects of AI, in the Harvard Business Review report “Risks and Rewards of Artificial Intelligence.”