In this Q&A with MIT/SMR Connections, Iain Brown, SAS’s head of data science for the United Kingdom and Ireland, discusses some key risks, ethical issues, and platform questions that organizations should consider before adopting AI and takes a quick look at current and emerging AI trends.

Q: In your view, what are some of the biggest risks associated with the development of AI solutions?

Iain Brown: In my view, the biggest risk today is the lack of transparency in the process leading up to an AI-derived decision. It’s all too easy for AI to be used in a way that starts to make decisions that a business — and, more important, its customers — may not be happy with. That risk is typically the result of not having adequate safeguards around the process. By that, I mean not having a clear, transparent lineage showing how those decisions were made and how the business can cyclically update, monitor, and validate them — and make new decisions as more information comes online.

What I typically advise in situations where this arises is adopting a structured approach to deploying AI. As an example, I have seen several organizations adopt the “FATE” approach — which stands for fairness, accountability, transparency, and explainability — to good effect. If organizations don’t have a structured approach, they won’t necessarily understand what their models are doing and subsequently can’t explain them. If there’s no transparency, I would argue that businesses are no longer in control, as the decision-making is being done automatically, increasing the risk of bias.
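To make the fairness element of FATE concrete, here is a minimal hypothetical sketch of one common check: demographic parity on a model's approve/decline decisions. The group labels, sample data, and the 0.8 threshold (the "four-fifths rule" used in some fairness guidance) are illustrative assumptions, not anything specific to SAS or the FATE framework itself.

```python
# Sketch of a demographic-parity check on approve/decline decisions.
# Decisions are 1 (approve) or 0 (decline); groups are arbitrary labels.

def approval_rates(decisions, groups):
    """Return the approval rate for each group."""
    counts = {}
    for d, g in zip(decisions, groups):
        total, approved = counts.get(g, (0, 0))
        counts[g] = (total + 1, approved + d)
    return {g: approved / total for g, (total, approved) in counts.items()}

def parity_ratio(decisions, groups):
    """Ratio of lowest to highest group approval rate (1.0 = perfect parity)."""
    rates = approval_rates(decisions, groups).values()
    return min(rates) / max(rates)

# Illustrative data: group A is approved 3/4 times, group B only 1/4.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

ratio = parity_ratio(decisions, groups)
flagged = ratio < 0.8  # flag the model for human review below four-fifths
```

A check like this would typically run as part of the cyclical monitoring and validation process Brown describes, with flagged models routed to human reviewers rather than automatically withdrawn.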


So it’s imperative that organizations considering AI think about the operationalization [AIOps] piece to generate value, but also about how this will work longer-term once those models start making decisions. We need to keep a view of what those decisions are, so there needs to be a degree of human oversight in this process as well.

Q: That leads right into a broader discussion about AI ethics.

Brown: That’s a very complex and thorny topic. If you’re holding AI to a certain standard,
how do you measure that standard? Where do you put it as a baseline for what should and shouldn’t be done? It’s imperative that organizations understand what they’re developing, why they’re developing it, and how these decisions will impact their customers.

Going back to the FATE analogy, I think that’s core to having a good process and protocols
in place. Ethical decisions are more than just models. And it’s all too easy to simply blame the data — that is, the data itself may not be fit for the purpose, and therefore, bad decisions are made. It goes much further than that. How algorithms are chosen and tuned needs to be considered. Although an algorithm in its own right is not biased, the way algorithms are developed, the way they’re trained, the way they’re tuned, and the way the parameters interact — ultimately, bias does feed into the model along with the data that’s being fed into it.


In using AI, organizations need to be very careful with how they wield what is potentially a powerful tool in a business’s toolbox. They need to fully understand what they’re trying to develop, and they should validate at every step that it’s making the right decisions for their business and the right ones for their customers as well.

Q: Will most organizations be looking toward a heterogeneous environment going forward, rather than a one-size-fits-all platform?

Brown: I think they will. Having a single platform is great, but that doesn’t mean you can’t have multiple platforms that work together. From a SAS perspective, we have some amazing capabilities, some amazing technology, but we work within ecosystems that have a very wide mix of technology. And they need to be working hand in hand.

I do think the ecosystems will continue to grow in terms of what capabilities are there. But this goes back to a fundamental point: If you’re just adding functionality for the sake of it and not looking at it from a strategic perspective, there’s a risk that you’ll overcomplicate decisioning and generate inefficiencies. So I think that organizations will continue to extend and add to their platforms internally, but there still needs to be a joined-up view across them so that you can still get to the roots of a decision, and there should be as much transparency as possible when these functionalities work together.

Q: What are some AI trends you’re seeing today, and what’s on the horizon?

Brown: We’re seeing a trend toward composite AI adoption. By that, I mean combining different AI techniques to solve organizations’ problems. I’m seeing a definite increase in the uptake of AI techniques focused on unstructured data, such as natural language generation and natural language processing, to make the platforms organizations offer their customer bases much more conversational. We’re seeing a growth trend toward embedding AI in systems.

Chatbots are one example. I still think they’ve got a long way to go to truly mimic human-like interaction, and probably 90% to 95% of the chatbots we experience today are still really rules-based — they have a manual list of answers to the key questions that most of us will ask, but there is no adaptive element to that.
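The rules-based pattern Brown describes can be illustrated with a minimal sketch: a fixed keyword-to-answer lookup with a canned fallback. All keywords and answers here are invented for illustration; the point is that there is no learning or adaptive element, so anything off-script falls through to the fallback.

```python
# Minimal rules-based chatbot: a manual list of answers keyed on keywords.
# No model, no training, no adaptation -- unknown questions hit the fallback.

RULES = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 working days.",
    "contact": "You can reach support at the help desk.",
}

FALLBACK = "Sorry, I don't understand. Let me connect you to an agent."

def reply(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK
```

For example, `reply("What are your opening hours?")` returns the canned hours answer, while anything outside the keyword list gets the fallback. This is what distinguishes the 90% to 95% of today's chatbots from genuinely adaptive, model-driven conversation.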

In the retail world, we’re seeing cases where there’s an augmented approach to how you’ll buy clothes or produce in the future, utilizing AI for much stronger recommendations, personalization, and greater relevance to you personally. Retailers are viewing customers not as a segment but as individuals, deciding how to treat them based upon the data they provide.

But that goes both ways. We are all consumers, and our expectations are increasing in terms of what organizations do with the data that we provide. I think that will only increase with the next generation of consumers.

Organizations need to be conscious that if they’re just harvesting information and there’s no benefit in return, people eventually switch off. Ultimately, the organizations that will succeed are the ones that provide something worthwhile, some reward, in exchange for the data that individuals provide to them.



Iain Brown (Twitter: @IainLJBrown) is the Head of Data Science for SAS UK&I and Adjunct Professor of Marketing Analytics at the University of Southampton. Over the past decade, he has worked across a variety of sectors, providing thought leadership on risk, AI, and machine learning.


About Author

Kimberly Nevala

Advisory Business Solutions Manager

Kimberly Nevala is a strategic advisor at SAS. She provides counsel on the strategic value and real-world realities of emerging advanced analytics and information trends to companies worldwide, and is currently focused on demystifying the business potential and practical implications of AI and machine learning.
