I recently had two incredible opportunities: to visit the White House for a landmark executive order signing and to make remarks at a US Senate AI Insight Forum.

The AI Insight Forum was part of a bipartisan Congressional effort to develop guardrails that ensure artificial intelligence is both transformative and sustainable.

As a member of the National AI Advisory Committee, I was grateful to have the opportunity to address the US Senate’s bipartisan AI Gang of Four, including US Senate Majority Leader Chuck Schumer and Senators Martin Heinrich, Mike Rounds and Todd Young, at the Russell Senate Office Building. The forum came two days after I visited the White House to watch President Joe Biden sign the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.

Given the importance of efforts to ensure AI reduces inequities rather than amplifying them at scale, I wanted to share my official remarks as delivered at the AI Insight Forum:

Thank you to Majority Leader Schumer, Senators Heinrich, Rounds and Young, and the excellent staff who organized today’s event. I am grateful for the opportunity to speak with you all and for your leadership in understanding and addressing this critical issue.

I’m encouraged that the conversations happening here and throughout the federal government are thoughtful, deliberate, and nonpartisan. Because AI is likely to become ubiquitous both nationally and internationally, I urge that we continue this approach, as it will yield the greatest benefit for us all.

I will elaborate on each below, but note that my comments focus on the importance of:

  • AI literacy – specifically, the need to provide a foundational level of education about AI for all Americans. Everyone should understand AI basics, both to ease fear and anxiety and to empower people to make choices in their best interest with respect to the technology. AI should not be “done to us” but “done for us,” and it takes an educated public to know the difference.
  • Inclusive contribution – Advances in AI enter a world that does not promote agency, equity, and well-being for everyone in equal measure, despite our intentions, and AI risks increasing inequity at scale. AI is not a product but a lifecycle of data usage, and people are a single point of failure. Many people don’t trust the technology, its providers, or the leaders attempting to regulate it. To gain the trust of those we all want to serve, particularly people historically underrepresented in technology prosperity, the design, development, and deployment of AI must be more inclusive. Plainly stated: value our contribution as much as you value our consumption.
  • Demonstrable trustworthiness – Trustworthy AI begins before the first line of code is written. To engender trust, the government should aim for standards that are simple yet robust and that include capabilities like bias detection, explainability, decision auditability, and model monitoring. Given the likelihood that AI models will decay over time, standards must, at a minimum, account for the data used to train models, the process and people involved in creating them, and each model’s intended use and audience. We have nutrition labels for our food, and we should have similarly comprehensible labels for our AI.

AI literacy – For us, not to us

In an AI-enabled digital world, we all benefit from becoming AI literate. Just as most of us have a basic understanding of electricity, we now need a basic understanding of AI, and the government has a role to play.

Importantly, AI literacy is not only a matter of workforce readiness; it is necessary for those outside the workforce as well. Citizens of all kinds are interacting with, evaluated by, surveilled by, and influenced by AI every day at an increasing rate. And while most won’t choose to become advanced AI researchers, they should understand how we all produce data and how it’s collected, analyzed, and fed into AI models.

They need to understand the potential for confirmation and automation bias, as well as the need for vigilance with respect to AI being used as a tool of deception. Only then will citizens be able to weigh the risks and value of AI for themselves.

The recent Biden Administration Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence directs various agencies to advance AI literacy in the federal workforce. That is necessary, and it should be expanded to include the broader workforce and those outside it as well.

Ideas for consideration include:

  • Creating a National AI Literacy Campaign to build engagement and awareness about AI throughout the nation.
  • Investing in formal educational frameworks and existing learning programs to advance the AI literacy of the American population.
  • Investing in informal learning opportunities such as standalone public sessions, social media campaigns, and public messaging efforts.

We need to think broadly about what it means to be functional in an AI-enabled digital world. Opportunities will likely be plentiful and rewarding, but as SAS Founder and CEO Jim Goodnight reminded us on International Literacy Day, what it means to be “literate” in today’s world isn’t confined to letters and words anymore.

Inclusive contribution – “People are a single point of failure”

Most AI practitioners are working in good faith to do what’s beneficial, legal, and profitable. Most have no desire to harm. However, impact disregards intention. Absent broad participation and a wider spectrum of perspectives, limited points of view lead to harmful outcomes, as we’ve seen increasingly over the past couple of decades of AI proliferation.

Across all major demographic groups, Americans are increasingly more concerned than excited about artificial intelligence (52% in August 2023, up from 38% in December 2022). Americans are also among the populations least trusting of technology. In multiple studies, potential job loss, misinformation, and fundamental change to American society were cited as reasons for concern.

Such concerns are particularly acute in communities historically underrepresented in the design, development, and deployment of technology, further crystallizing decades of distrust. AI is likely to have an outsized effect on all our lives, so everyone should participate in its design, creation, and sustenance, not only in the consumer demand phase of the lifecycle. AI inclusion will better inform ethical inquiry, help reduce harmful biases, and build confidence in the fairness of AI.

Inclusive AI is about more than diminishing the negative. It’s also about accentuating AI’s immense potential to enable a more productive and equitable society. Achieving that end will take competence, resilience, and a willingness to exist in the “messy middle ground,” where the potential of AI intersects with the realities of our past, present, and desired future, a future we all must have a part in designing. Working in the messy middle means we don’t ignore the challenges AI presents; we recognize them and work to overcome them.

Ideas for consideration include:

  • Involving inclusive domain expertise before applying AI in specific high impact contexts such as health, finance, and law enforcement, all areas where inequities have harmed people in the past, and bias remains a serious concern.
  • Funding the National AI Research Resource and other such methods of lowering the economic barriers to entry for AI.
  • Incentivizing traditional and non-traditional workforce education pathways for the technical and, importantly, non-technical talent needed for more robust AI systems.

Demonstrable trustworthiness – “Before the first line of code”

Doomsday AI predictions grab headlines but distract us from more tangible and immediate concerns.

Real risks already exist, and have for some time, in areas such as health care, public safety, and banking. If done wrong, the use of AI in those areas, and others, could have profound consequences, especially for those who are already vulnerable. Focusing on an evidence-free “end of humanity” instead of the existing risks, for which we have ample evidence, only serves to further erode trust.

That said, it would also be shortsighted to discount the concerns of so many AI experts. We should certainly commit resources to exploring those extreme threats in proportion to their probability, while directing the bulk of our energies toward the problems of today.

One way to mitigate immediate challenges is to provide a means to “trust but verify” AI. Trustworthy AI should be an end-to-end process, from capability ideation to sunset. AI providers should supply a means to measure and monitor performance. Systems should be auditable, with understandable reports showing whether an AI model has overstepped or underperformed its intended use. And because bias can take many forms throughout the AI lifecycle, providers should be able to identify potential bias risks from data management and model development through deployment and eventual retirement.
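
To make “trust but verify” concrete, here is a minimal sketch of the kind of automated check a provider or auditor might run. It is illustrative only: the metric choice, tolerance, and function names are assumptions on my part, not a prescribed standard or any vendor’s API.

```python
# Illustrative sketch only: the metric choice, tolerance, and function
# names are hypothetical, not a prescribed standard or any vendor's API.
import numpy as np
from sklearn.metrics import roc_auc_score

def performance_check(y_true, y_score, baseline_auc, tolerance=0.05):
    """Flag a model whose AUC has decayed beyond a tolerance of its baseline."""
    current_auc = roc_auc_score(y_true, y_score)
    return {
        "current_auc": current_auc,
        "baseline_auc": baseline_auc,
        "degraded": current_auc < baseline_auc - tolerance,
    }

def demographic_parity_gap(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups (0/1).

    A large gap is a signal to investigate, not proof of unfairness.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Example: audit a batch of recent predictions against a recorded baseline.
report = performance_check(
    y_true=[1, 0, 1, 1, 0], y_score=[0.9, 0.2, 0.7, 0.6, 0.4], baseline_auc=0.90
)
gap = demographic_parity_gap(y_pred=[1, 0, 1, 1, 0], group=[0, 0, 1, 1, 1])
```

Run on a schedule and logged, checks like these are one way to produce the kind of understandable report described above.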

Ideas for consideration include model cards that summarize a model’s training data, intended use, and performance. Similar to nutrition labels for food, model cards are a transparent way to demonstrate to consumers, creators, and regulators alike that an AI model supports responsible, ethical, and trustworthy AI goals.
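
As a quick illustration, a model card can be as simple as a structured record serialized for human review. This is a minimal sketch with hypothetical field names and example values; real model card schemas are richer and, ideally, standardized.

```python
# Hypothetical minimal model card; field names and values are illustrative.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """A minimal "nutrition label" for an AI model."""
    model_name: str
    version: str
    training_data: str            # provenance of the training data
    intended_use: str             # what the model is for, and for whom
    out_of_scope_uses: list = field(default_factory=list)
    performance: dict = field(default_factory=dict)   # metric -> value
    known_bias_risks: list = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Hypothetical example values for a made-up model.
card = ModelCard(
    model_name="loan-approval-classifier",
    version="1.2.0",
    training_data="2018-2022 anonymized loan applications",
    intended_use="Rank applications for human review; not automated denial",
    out_of_scope_uses=["employment screening"],
    performance={"auc": 0.87, "demographic_parity_gap": 0.03},
    known_bias_risks=["underrepresentation of rural applicants"],
)
print(card.to_json())
```

The same label serves consumers, creators, and regulators alike, which is the point of the nutrition-label analogy.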

About SAS – A mature voice

As a responsible innovator, SAS seeks to build trust in technology. One of the first AI and analytics companies, SAS is behind many advances that have transformed the way the world uses data. Founded in 1976, SAS has seen the evolution of the analytics industry since its beginning and has been a mainstay in data and statistical analysis across all major industries for decades.

It’s important to remember that AI is not new. SAS has been working with neural networks, machine learning, computer vision, and other forms of AI for many years. As a nearly-50-year-old company that’s committed to responsible innovation, we believe we offer a valuable voice.

Learn more about our commitment to responsible innovation and trustworthy AI

About Author

Reggie Townsend

Vice President, SAS Data Ethics Practice (DEP)

Reggie Townsend is the VP of the SAS Data Ethics Practice (DEP). As the guiding hand for the company’s responsible innovation efforts, the DEP empowers employees and customers to deploy data-driven systems that promote human well-being, agency and equity to meet new and existing regulations and policies. Townsend serves on national committees and boards promoting trustworthy and responsible AI, combining his passion and knowledge with SAS’ more than four decades of AI and analytics expertise.
