I like metaphors. OK, fine, I lied: I love metaphors. They’re my go-to method for explaining concepts in relatable ways, and that’s what makes them so versatile and valuable. They meet people where they are to foster faster, better understanding.

But is there a dark side to metaphors and other linguistic shorthand? When it comes to AI, the answer is that there just might be.

It’s perfectly natural to anthropomorphize AI, because how better to describe an action than with its equivalent human behavior? “ChatGPT thinks…” “I had Copilot consider…” and so on. It helps us grasp complex systems quickly. But it also risks misleading us.

When we misrepresent what is, in this case, nothing more than a next-word-prediction engine, we don’t just make it relatable. We risk making it human. And that’s where I think things get dangerous – a danger with real consequences that can, perhaps, be avoided by recognizing the power of language.

Selling humans short by exalting AI

In the “AI Myths and Mythos” episode of the Pondering AI podcast, Eryk Salvaggio – a professor in Humanities, Computing and Design at the Rochester Institute of Technology – warns that “if we are comparing ourselves to machines, we are already simplifying significantly what we think humans are capable of.”

Consider the implications of that warning when it comes to ideas like creativity and emotion.

“Asking whether the machine is creative is really missing the mark, because there's just no capacity there,” Salvaggio says. Rather, a human being is making a creative decision to use an AI to do something.

“Because the creative process is not in the machine,” Salvaggio says. “The machine is following a set of steps, and it cannot stray from those steps. So that is the question of the creative process that I think gets to the heart of the issue around creativity and AI in a much more productive and, I think, clarifying way.”

When we casually attribute something like creativity to AI, we risk redefining human traits in machine terms (even when our intent runs the other way). If an algorithm can “create,” what happens to our understanding of creativity?

It doesn’t take long before, in exalting AI, you start selling humans short.

Emulating empathy: simulation vs. sentience

Similarly, if a chatbot can “empathize,” what does that say about empathy, another decidedly human trait? Don’t get me wrong: AI can simulate empathy. It can detect stress in your voice or heart rate and respond with soothing words.

But that’s not compassion; that’s pattern recognition.

As Ben Bland, a thought leader in ethical innovation, notes in a Pondering AI episode focused on emotive AI’s shaky scientific underpinnings: “I think [empathy is] a superpower in a way. … And unfortunately, giving that power to machines will be OK only if we're aware of what they're doing and we're also acutely aware of what their objective function is.”

Because while AI is often described as “superhuman,” in the case of reading and reacting to human emotion, it’s only superhuman in capability (like detecting your heart rate) – not in understanding.

In the Pondering AI episode “AI at Work,” Dr. Christina Colclough adds another layer: the quantification of human behavior. AI doesn’t just automate tasks – it turns our actions (and inactions) into data points, which in turn become “truths” about us, used to sell us things, grant or deny us services, or shape our personal and professional opportunities.

Whether we’re talking about the data that makes up your vital signs or your credit score, inference doesn’t equal understanding. Just as AI can simulate empathy, deploying it can also strip empathy from the spaces where it’s often needed most.

Seeing the human forest for the trees

If we default to giving AI human attributes in the ways we speak about it, what comes next?

“Artificial intelligence is, in and of itself, a contested term,” Salvaggio says. “But this frame leads us to kind of an overreliance on shorthand in spaces of policy that compare this new type of machine … to human beings. And almost as if they are then therefore deserving of human rights.”

That some suggest AI deserves the same rights as human beings makes a pretty solid argument that the flipside of humanizing AI is, by default, dehumanizing humans – and that how we speak about AI has a lot to do with it.

“This fallacy of scale that kind of comes across in a lot of AI is, well, if you look close enough at the human brain or you look far enough away from the human brain, it looks kind of like an AI system does,” Salvaggio says. “But that's kind of like saying, if you squint at a tree, it looks like a person. Therefore, we should go around chopping down people.”

It’s possible that if we continue to be enamored with the language of anthropomorphizing machines, we might ultimately be chopping humans down.

How’s that for a metaphor?

Learn more about Pondering AI

The podcast tackles topics across the spectrum of society and technology with a diverse group of innovators, advocates and data scientists eager to explore the impact and implications of AI – for better and for worse.

About Author

Evan Markfield

Brand Content Strategist

Evan tells the stories of how SAS, its partners and customers help the world get more done with data and AI in creative and interesting ways.
