I’ve spent months traveling and speaking to business leaders worldwide about trustworthy AI and responsible innovation. On the nights I lay awake in unfamiliar hotel rooms, wishing my body clock would adjust faster than it did, I found joy in watching local television in local languages.

While I don’t understand Czech, Dutch or Japanese, I do understand emotion and interaction. The stories portrayed in other languages on these hotel TVs were still clear and familiar: athletes breaking records, health care workers saving lives, academics sharing expertise and celebrities promoting their latest films. These stories reinforced the idea that we are all more alike than dissimilar. They also convinced me that the stories we tell ourselves – the very stories that create our belief systems – are our superpower as humans. Given that, I have a few ideas on how to use our superpower to move the world forward.

Utilize the power of women

First, it's past time that we exercise our superpower to support the belief that women are excellent leaders and technologists. There were far too few women in the audience at all my recent speaking engagements. Since I speak a lot about responsible innovation, I must point out that it’s irresponsible to have so much untapped talent not participating in technology spaces. I get it: there are systemic issues such as social rearing, education, access, religious convictions and the like that lead to such outcomes.

Nonetheless, women are vital contributors to the social, political and economic fabric globally. They deserve full participation to ensure technologies like artificial intelligence (AI) meet their needs on their terms, just as men do. The stories we tell ourselves about the value of women in business and technology are important, and in need of revision.

Understand that most AI harms are unintentional

After traveling over 49,000 miles to 15 cities and experiencing dozens of cultures around the world, I’m convinced that most people would opt to help others rather than harm them. That story is important for contextualizing the harms we experience. It’s a “benefit of the doubt” that allows us to believe most harms are unintentional and free of malice, even those caused by AI. That’s no exemption from accountability. It’s no exemption from transparency. And it certainly is no excuse for exploitative business practices.


This subtle reframe permits us to “take a beat,” which is crucial for acknowledging the humanness of the people who develop and deploy AI. It creates space for understanding their complexities and potentially encourages collaboration over demonization. After all, human centricity cuts both ways.

Yes, sometimes incentives are perverse and need restructuring. And yes, discriminatory technology outcomes need robust remedies. To move the world forward, we must believe that the importance of such remedies coexists with the reality that AI technologists are trying to get it right like everyone else. They want to earn a good living, make a positive impact and care for their friends and families like the rest of us.

Realize that we are biased

Globally, we’ve done an excellent job of recognizing that AI can be biased – and it can! Biases are in us and around us, and given that AI is trained on data from our biased world, it’s inevitable that those biases are reflected in AI output. Practically, biases are central to our humanness and help us make efficient decisions. The problem lies in the fact that we’ve used that human tendency to create social structures that are discriminatory and offensive. AI bias discussions often feature that reality. However, there’s an alternate point of view to consider, and that is the injection of positive bias for justice. Could injecting positive bias into AI help shape a more equitable world we’d all want to live in? It’s a question worth pondering.

Incorporating into our belief systems that we’ve all inherited historical biases affecting our nations, organizations and communities is an important first step. Developing AI that can detect and mitigate biases that will predictably cause harm, especially to those historically marginalized, is the mandatory next step. Anticipating and remediating injury to the most vulnerable is key to securing the trust necessary to move forward in a way that doesn’t replicate a sordid, shameful past.
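Detecting bias in practice can start with simple audits. As a minimal, hypothetical sketch (the group names and decision data below are invented for illustration, not from any real system), one basic check compares the rate of favorable outcomes across demographic groups – a gap in those rates is a signal worth investigating:

```python
def selection_rates(decisions):
    """Compute the share of favorable (1) outcomes per group."""
    return {group: sum(outcomes) / len(outcomes)
            for group, outcomes in decisions.items()}

# Hypothetical loan decisions: 1 = approved, 0 = denied,
# keyed by an invented demographic group label.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

rates = selection_rates(decisions)
# Largest gap between any two groups' approval rates.
disparity = max(rates.values()) - min(rates.values())
print(rates)      # {'group_a': 0.75, 'group_b': 0.375}
print(disparity)  # 0.375
```

This "demographic parity" gap is only one of many fairness measures, and a disparity alone doesn't prove discrimination – but making such numbers visible is exactly the kind of routine check that turns the intent to mitigate harm into practice.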

Develop common knowledge of AI

Finally, perhaps the greatest revelation from my travels came during a conversation with a driver as I returned home. When I shared what I do and why I’m on the road so much, the driver’s reaction to my mention of AI was fear and trepidation – partly from a lack of facts and partly from the zeitgeist. Such a reaction is, unfortunately, widespread. Only a small percentage of us are so-called experts steeped in the language of AI. A far larger share of the population still needs foundational knowledge. They need facts about what AI is and what it is not. I now believe one of the greatest services responsible innovators can provide is a campaign to increase global public awareness about AI.


Awareness enables adoption. While that’s self-serving for a leader at an AI software company, it’s not for the reasons you might think. Public awareness is in everyone’s best interest because the technology is here today, and it’s not going away. Consider this: you didn’t go to school to learn not to stick a fork in an electrical socket. Someone likely taught you well before your school years, as was the case for me. It’s common knowledge. How can we begin to develop and share a similar level of common knowledge about AI? We can start by acknowledging that AI is already all around us – in online recommendation systems, voice assistants and fraud detection. It’s not what you see in sci-fi movies.

If people remain afraid and refuse to adopt AI – a refusal that may be futile given its pervasive back-office applications – those with nefarious intentions will be even more likely to take advantage of it. We see this today in the prevalence of misinformation, deepfakes, social bots and automated fraud. One means of protecting ourselves against such misuse is fact-based common knowledge, which we, as innovators, have in ample supply.

Demystifying AI will improve trust, and that starts with clear, digestible language as we encourage global public awareness. Part of my role at SAS is helping us use our superpowers to build that trust and awareness. I invite you to read more of my blog posts throughout the year and reach out to me at SAS events as I attempt to do just that. Follow me on LinkedIn, too, for more of my thoughts and perspectives.  



About Author

Reggie Townsend

Vice President, SAS Data Ethics Practice (DEP)

Reggie Townsend is the VP of the SAS Data Ethics Practice (DEP). As the guiding hand for the company’s responsible innovation efforts, the DEP empowers employees and customers to deploy data-driven systems that promote human well-being, agency and equity to meet new and existing regulations and policies. Townsend serves on national committees and boards promoting trustworthy and responsible AI, combining his passion and knowledge with SAS’ more than four decades of AI and analytics expertise.

1 Comment

  1. Hi Reggie – nice article and something I've been passionate about for a while. My hope is that with responsible AI, we can move past the biases of humans and expose unfairness more easily. Having an AI make consistent, fair decisions would be a great way of building trust. However, it will probably take just one or two high-profile poor executions of an AI algorithm to lose that trust.

    Colin
