I have a confession to make. I’m a gaming enthusiast. Possibly even a gaming nerd. I really love computer games, and not just to play them. I’m interested in how they are made, how they work, and how they might develop in future. I’m particularly interested in how artificial intelligence (AI) is used in games and also what it can teach us about the use of AI in analytics.
Understanding AI in games
First of all, it is important to understand that AI in games is (mostly) not really the same as the AI that we talk about in analytics, because it doesn’t really learn. There are a few games that use genuine AI, such as AlphaGo, the computer that learnt to play Go and then beat the world’s best players. That was “taught” using thousands of previous games of Go and continues to learn every time it plays a new game against a human.
Most games, however, don’t really learn like that. Instead, they are what you might describe as “sort-of AI.” They have been carefully programmed to respond in different ways to a wide range of activities. Some are pretty basic and have a fairly small range of activities to choose from – for example, shoot at someone, run away, find a way of getting more lives, or wander around the game until they find another player. More sophisticated games can use decision trees to select the most appropriate action based on their likely outcomes.
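To make the idea concrete, here is a minimal sketch of such scripted “sort-of AI”: every candidate action gets a hand-tuned score for the current game state, and the highest-scoring one wins. All the action names, state fields, and weights are illustrative assumptions, not taken from any real game engine.

```python
# Sketch of scripted "sort-of AI": each candidate action is scored against
# the current game state, and the highest-scoring one is chosen. There is
# no learning here - the designer fixed all the rules and weights up front.

def score_action(action, state):
    """Return a hand-tuned utility score for an action in a given state."""
    if action == "shoot" and state["enemy_visible"]:
        return 0.9 * state["ammo_fraction"]
    if action == "flee" and state["health"] < 0.3:
        return 0.8
    if action == "find_health" and state["health"] < 0.5:
        return 0.6
    if action == "wander":
        return 0.1  # fallback so the NPC always does something
    return 0.0

def choose_action(state):
    actions = ["shoot", "flee", "find_health", "wander"]
    return max(actions, key=lambda a: score_action(a, state))

# An NPC with low health prefers fleeing over shooting:
state = {"enemy_visible": True, "ammo_fraction": 0.5, "health": 0.2}
print(choose_action(state))  # -> "flee"
```

However many branches you add, the behaviour stays exactly as predictable as the designer made it – which is the point of the sections that follow.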
They do not, however, genuinely learn, for two main reasons – both of which have implications for the use of AI in analytics.
First, an AI that genuinely learns changes the game and takes control away from the designer
There are examples of AI games that learn. For instance, there is a creature-training game in which the player raises creatures that learn in different ways depending on how they are trained, so each “pet” develops very differently. Designing such a game puts the AI at the centre of attention, which is not what most game companies want. They usually have a plan for how the game works and the desired player experience. Once the game starts to learn for itself, the designer is no longer in control, and as we saw with Microsoft’s Tay, almost anything can happen.
Second, an AI that learns would eventually win – and that’s not good for user experience
As we saw from AlphaGo, an AI that learns from its experience will eventually win. It can compute options and outcomes far faster than a human and will therefore sooner or later have enough information to win. And once it has started to win, it will keep winning. Most game designers place limits on the game’s intelligence to ensure that the player has a reasonable chance of winning. Why? Because user experience is the most important thing in gaming. If users don’t like the game, they won’t come back, and they certainly won’t buy the sequel.
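One common way designers place such limits is to let the AI occasionally make a deliberate mistake instead of always playing its best move, so the human keeps a realistic chance of winning. The sketch below illustrates that handicapping trick; the function names, scores, and the blunder-rate mechanism are my illustrative assumptions, not a description of any particular game.

```python
import random

# Sketch of deliberately capping an AI's strength: instead of always
# playing the best-scored move, it "blunders" with some probability,
# keeping the game winnable for a human player.

def pick_move(moves, score, blunder_rate=0.25, rng=random):
    """Pick the best move by score, but play a random move with
    probability blunder_rate (the designer's difficulty dial)."""
    if rng.random() < blunder_rate:
        return rng.choice(moves)      # deliberate mistake
    return max(moves, key=score)      # otherwise play optimally

# Example with a hand-written evaluation: "b" is objectively best.
moves = ["a", "b", "c"]
scores = {"a": 1.0, "b": 5.0, "c": 3.0}
print(pick_move(moves, scores.get, blunder_rate=0.0))  # -> "b"
```

Turning `blunder_rate` up or down is exactly the kind of control a self-improving AI would take away from the designer.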
Implications for AI in analytics
Let’s look at the implications of the first point. In business there is no such thing as allowing the AI to take control away from anybody. Decision makers base their decisions on facts, and legal institutions require them to explain why those decisions were made. Banks and lenders, for example, have to be able to justify lending decisions, and “computer says no” is not sufficient. To be able to trust the algorithm, you have to know exactly what went in and exactly what is happening inside it. Once you lose control – when the algorithm learns for itself – that becomes at best extremely difficult. This is a major challenge for the adoption of AI in business. As in games, when you design an AI today, you have to be very careful to limit what it is allowed to do. Losing the “players” – that is, generating distrust in the decision makers who want to benefit from the algorithms – will render those algorithms useless.
The second point is equally instructive. In gaming you have the option to give the AI less information, because the game world you designed has a limited number of parameters. In real life the world the AI lives in is much bigger – nearly unlimited, in fact – and you simply cannot provide all possible parameters. In both cases you try to provide a set of information that deliberately prevents the AI from making the objectively optimal decision, but allows for a controllable user experience. Just as people will only come back to your game when they can expect to have fun with it, leaders will only continue to adopt AI in business processes when it supports their business needs in the best possible way.
In any case, putting an AI at the centre of any world, whether the real business world or a simulated fantasy world, leads to challenges that need to be faced, and these challenges are very similar. An AI needs to be limited to fit. We cannot unleash its full potential yet. Will we ever be able to?