We pay a lot of attention – and rightly so – to ways in which emerging and evolving technologies like AI might reinforce existing biases or inequities based on what data the models are trained on.
But there’s a flip-side concern that doesn’t get nearly as much notice: What if, rather than only reinforcing problematic moral and ethical viewpoints, AI also has the potential to create entirely new ethical and moral possibilities that reshape what society considers normal?
That idea is at the heart of a recent episode of the Pondering AI podcast featuring John Danaher, a senior lecturer in ethics at the NUI Galway School of Law.
Danaher discussed research he collaborated on that explored how technology might reshape our ethical principles and frameworks – a twist on the typical thinking where responsible innovation focuses on ensuring AI conforms to or reinforces existing ethics.
“In academia, the typical direction of analysis is to use existing ethical norms and principles to evaluate technology. And to a large extent, that's what the entire field of AI ethics does,” Danaher told Pondering AI host Kimberly Nevala. “So it just seemed like an obvious thing to do, which is to just do something that's slightly different from what everyone else was doing.”
The paper resulting from that research outlines six mechanisms through which the authors think technology can change social/moral beliefs and practices. Danaher’s hope is that people designing new technologies could look at their work through the lens of those mechanisms to consider whether any such changes would be a good thing or a bad thing.
What makes this interesting is that it’s not just about guarding against a direct negative impact – like bias, for instance – but about secondary consequences that could reshape how people judge things entirely. And then, naturally, asking: What’s the value judgment on those potential changes?
Learn more about Pondering AI
The podcast tackles topics across the spectrum of society and technology with a diverse group of innovators, advocates and data scientists eager to explore the impact and implications of AI – for better and for worse.
Does AI work ... at work?
And as other guests on Pondering AI have shown, Danaher isn’t alone in thinking about downstream effects that go beyond simply adhering to or violating society’s current ethics or morals.
Take Matt Scherer, senior policy counsel at the Center for Democracy and Technology, who joined Kimberly to discuss the effects of implementing AI in the workplace.
Whether that’s deciding how employees should use AI or considering the ramifications of automating already flawed human processes like hiring and firing, it’s pretty clear that this technology has the potential to change how we treat people in the workplace.
Scherer recalled a conversation with an HR tech vendor who argued that bringing up concerns with using AI for hiring didn’t carry the same risks as something like a self-driving car because “nobody’s dying as a result of an AI system that is doing HR tasks.”
Safe to say that “it’s not bad because nobody died” is a pretty extreme shift in morality from what’s considered the norm by any reasonable definition. It really reflects how we (and does “we” include the AI system now?) can quickly start treating people differently when we take people out of the equation.
Are we getting into a complicationship?
And outside of the workplace, what happens when AI removes people from how we interact socially? It’s been a hot topic recently, with stories about people becoming addicted to ChatGPT or relying on AI for companionship.
In another recent Pondering AI episode, Dr. Marisa Tschopp – a psychologist and human-AI interaction researcher – dove into that topic and identified an interesting pair of ways those kinds of AI bots can change our moral and ethical behavior.
First, through the products themselves. Using the example of an “AI friend” necklace, Dr. Tschopp mused about how wearing such a device might affect a person on a date. Would they talk to their date or the bot? Would the AI be “on” the date with them? Either way, one thing is clear: AI companions can significantly change how humans interact with one another.
Second, through how we judge people’s use of that technology. Calling an AI companion good or bad ends up judging the user by extension: if the product is bad, then liking it must make the user weird or wrong. So even our analysis of the tool can change how we behave toward fellow humans.
Which brings us back to Danaher’s earlier question about creating a framework for evaluating these “advancements.” There’s some irony here: our tendency to judge certain uses of a technology as “good” or “bad” might itself be an argument against relying on those judgments to decide whether the technology actually is good or bad.
Rights and wrongs
The questions aren’t limited to the effect on people, either.
Take the concept of “machine rights,” which was one of many topics Kimberly discussed with international human rights lawyer and author Susie Alegre in the “Righting AI” episode of the podcast.
Alegre’s work focuses on how technology development affects human rights, and she makes the point that even addressing the academic question of so-called robot rights has the potential to be problematic.
By giving machines “rights,” humans potentially gain a way to avoid or opt out of making moral and ethical justifications for the technology they build. “It's acting God and letting these things go off and do all the terrible things that creations do while abdicating responsibility,” she says.
Minding our morality
So what do all these complex questions have in common?
Well, it seems like with any AI-related ethical question we ask, we end up opening a can of worms – only to learn that can is full of even more cans of worms. Fun, right?
Time will tell what the answers to all of these questions (and their inevitable follow-ups) are … maybe. Realistically, they might be problems to interrogate as we go along more than ones to “solve” definitively – especially since the goalposts are unlikely to stay planted in one place as the technology evolves at its current pace.
So really, the call to action we need to give ourselves – whether as creators, users or simply observers of innovations in AI – is this: Keep our eyes on the ways norms are changing, or have the potential to change, rather than just judging at face value whatever’s grabbing a headline that day.
Because as the tides of information go in and out day after day, it can be hard to tell if the moral and ethical sands are shifting permanently under our feet as well.