AI has captured the general public's imagination, so it was no surprise that it dominated conversation among data professionals at this year’s Chief Data and Analytics Officer (CDAO) conference in London.

Of course, AI and machine learning are not new concepts for those working in the industry. However, the recent advances in large language models (LLMs) have made CDOs, CIOs and CTOs sit up. Certainly, there were more questions than answers at this conference.

AI’s impact presents opportunities and divergent views

We are at the point where AI is starting to impact every sector, from finance and insurance to healthcare, government, and many more. This powerful technology has the potential to make life-changing decisions, like who gets hired for a job and who is offered a mortgage. We are also edging closer to a future where AI assists decision-making in the criminal justice system or regularly helps diagnose medical conditions.

Business leaders are increasingly looking to use AI – and no doubt some see it as a shortcut to efficiency, productivity and innovation, especially in a challenging market.

Among delegates, there was a divergence of opinions, with some seeing it as an enormous opportunity and others taking a more dystopian view.

Navigating AI challenges and ethical frameworks

Organisations recognise both the risks and the rewards. Some people at the conference said they and their teams used ChatGPT to write blogs and check code, while others had been told not to. There was, understandably, much concern about the risk of inputting confidential company information into these tools, and about the quality of the data being used for decision-making.

At the moment, some feel that AI alone isn’t enough to make smart decisions. It’s a classic case of the correlation-causation fallacy. For example, a model might conclude that rising ice cream sales cause shark attacks, when in reality warm weather drives both.

That might happen if users rely on simple ‘if-then’ AI models, which can mislead. However, with more advanced AI systems and sound model governance, organisations can now derive accurate and relevant insights from their data.
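To make the point concrete, here is a minimal sketch in Python, using invented numbers rather than anything presented at the conference. It shows how the naive correlation between ice cream sales and shark attacks looks convincing, yet vanishes once the shared driver, warm weather, is accounted for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily data: warm weather drives both ice cream sales and
# shark encounters. Neither series causes the other directly.
temperature = rng.normal(22, 5, 1000)                       # the confounder
ice_cream_sales = 50 + 3 * temperature + rng.normal(0, 10, 1000)
shark_attacks = 0.1 * temperature + rng.normal(0, 0.5, 1000)

# Naive correlation looks strong and positive...
print(np.corrcoef(ice_cream_sales, shark_attacks)[0, 1])

# ...but disappears once temperature is controlled for
# (a simple partial correlation via regression residuals).
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

print(np.corrcoef(residuals(ice_cream_sales, temperature),
                  residuals(shark_attacks, temperature))[0, 1])  # near zero
```

The same discipline, asking what else could explain a pattern before acting on it, is what separates a useful model from a misleading one.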

So, even among AI advocates, there’s a degree of healthy scepticism around the technology. While there is an appetite for AI, CDOs are also concerned about whether their data is of good enough quality. They discussed what safeguards could be implemented, like using synthetic data to optimise AI models without compromising privacy or risking bias.
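As a purely illustrative sketch of the synthetic data idea, and deliberately simpler than any tool discussed at the event, the snippet below fits a distribution to a made-up sensitive table and samples new rows. The synthetic rows preserve the overall statistical relationships needed to develop a model without reproducing any individual's record.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensitive customer table: age, income, credit score.
age = rng.normal(40, 12, 500)
income = 20_000 + 800 * age + rng.normal(0, 8_000, 500)
score = 500 + 0.003 * income + rng.normal(0, 40, 500)
real = np.column_stack([age, income, score])

# Fit a simple multivariate Gaussian to the real data and sample from it.
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mean, cov, size=500)

# The correlation structure carries over to the synthetic rows.
print(np.corrcoef(real, rowvar=False).round(2))
print(np.corrcoef(synthetic, rowvar=False).round(2))
```

Production-grade synthetic data generators are far more sophisticated than a fitted Gaussian, but the principle is the same: capture the patterns, not the people.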

Regulation was another topic of conversation. We know that new laws might be coming to the UK, with the EU having passed its AI Act, the first of its kind by a major regulator. But there were also many questions about how governments would regulate something as fast-moving as this.

What was reassuring was the level of interest in trustworthy AI, suggesting that the industry is keen to lead on best practices rather than wait for legislation.

It was a subject discussed in detail by SAS data scientist Prathiba Krishna during her talk at the event. She pointed out that AI makes decisions so quickly that the impact of poor ones is greatly magnified. Unintended harm – like bias – can now happen on a huge scale, and with everyone in an organisation using AI, who’s to blame if something goes wrong?

She underlined the business imperative for developing an ethical and compliant framework to enable sound decision-making and avoid the reputational damage linked to bias. She highlighted how the SAS principles – human-centricity, transparency, inclusivity, accountability, robustness, and privacy and security – could help organisations take advantage of AI's benefits while mitigating the risks. While we do not yet know what regulation will look like, these principles will stand teams in good stead for ensuring the integrity and trustworthiness of their data and models.

Looking ahead to the future of AI use

We’ll likely see CDOs and the wider senior technical teams become the gatekeepers of AI policies and strategies. Like it or not, AI is being adopted on a large scale by the general population, including members of the workforce, so it is the responsibility of technical teams to exercise due diligence in choosing the right partners – those who develop their technology within a robust ethical framework.

It will be interesting to see how much the conversation has moved on in a year and whether today's burning questions will be answered.

If you want to discuss this or any other key topics related to AI, analytics, and cloud, join our Leader’s Network – an exclusive network for senior executives looking to grow, innovate and transform their organisations.

Read more stories from SAS bloggers about responsible innovation.

About Author

Adam Troman

Senior Customer Success Director, Customer Success

Adam is a proven business leader with international experience in the tech and digital marketing space. Adam is currently the Senior Director of Customer Success at SAS, an advanced data, analytics, and AI company. With expertise in management, customer relations, strategy, and coaching, he has managed Annual Recurring Revenues of over $250M, achieving Retention Rates of over 96%.
