One of the reasons I got involved with the trustworthy AI movement is that automated systems built on data from our past will hurt people – at scale – if we aren’t careful. Worse yet, from a personal perspective, it alarmed me that if such systems were deployed in justice, public safety and healthcare settings, historical precedent suggested African-Americans would be among those harmed. I knew something needed to change, and I wanted to be a part of that change.

Lack of knowledge too often breeds fear, and once involved, I learned to fear less. At the same time, there is sufficient cause for concern. Engaging with a vibrant community of practitioners navigating this exciting technology’s ethical and practical implications, I found there are few absolute “rights and wrongs.” Rather, a closer look reveals a variety of graduated ethical scales full of moments of fragility, where not only my community but every community is vulnerable.

As the U.S. celebrates Black History Month, I’m particularly sensitive to my own vulnerability, to the long-tail effects of historical oppression, and to the work of diminishing those negative influences for the next generation. However, trustworthy and responsible AI is about more than diminishing the negative. It’s also about accentuating AI’s great potential to enable more productive and equitable societies.

Achieving that end will take competence, resilience and a willingness to traverse the “messy middle ground,” where the potential of AI intersects with the realities of our past and present. Navigating the messy middle is complex and nuanced but crucial. The messy middle is where we recognize the inherent risks in AI technologies and work to overcome those risks so that all of us can benefit from the rewards of AI.


Before deploying AI, ensuring it is beneficial in the moments that matter, especially for the most vulnerable, requires examining our social, civic, academic and corporate structures and the incentives that propel them. Hear me loud and clear – this is not a call to dredge up the past to shame people but to seize the opportunity before us to enable the prosperous future we desire. A future with encoded discrimination, bias and unequal access only perpetuates the worst of us, breeds distrust of technology and vastly limits progress.

Examining the past for a better future

Well-intentioned, reasonably informed people around the world accept a fact: Disparate outcomes disproportionately correlated to race, gender, ethnicity and physical ability exist worldwide. The problem is particularly acute in the United States, where minoritized populations are affected by laws, social norms and business practices, many of which were overtly discriminatory at one time in our history. Stated otherwise, some have profited from the oppression of others. Times have changed for the better. However, a genuine examination shows far too many disparate outcomes still exist, primarily because those laws, norms and practices have a long-tail effect – and encoding those same laws, norms and practices into our digital lives will only intensify them.

One example is the deployment of facial recognition technology. Studies have shown that facial recognition systems are less accurate for people with darker skin tones, due in large part to a lack of diversity in the datasets used to train them. That reduced accuracy leads to invisibility or misidentification, both of which are dangerous in high-stakes settings like healthcare and policing.

Similarly, predictive algorithms trained on historical data can perpetuate racial bias in the justice system. Lending algorithms trained on historical data can perpetuate gender bias. Automating business operations with a reliance on interacting with machines can isolate senior citizens and the physically challenged. The list can go on and on.
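To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of how a naive model that simply learns from historical decisions reproduces the bias encoded in them. The groups (“a” and “b”), the lending scenario, and every rate and rule below are invented for demonstration; they do not describe any real system or dataset.

```python
import random

random.seed(0)

# Hypothetical illustration: synthetic "historical" lending decisions in
# which equally qualified applicants from group "b" were approved less
# often than those from group "a". All groups and rates are invented.
def historical_decision(group, qualified):
    if not qualified:
        return 0
    # Encoded bias: qualified group-b applicants were approved only 40%
    # of the time, versus 95% for group-a applicants.
    return 1 if random.random() < (0.95 if group == "a" else 0.40) else 0

history = [(g, q, historical_decision(g, q))
           for g in ("a", "b")
           for q in (True, False)
           for _ in range(1000)]

# A naive "model" that learns the historical approval rate for each
# (group, qualified) bucket and approves when that rate exceeds 50%.
def approval_rate(group, qualified):
    outcomes = [d for g, q, d in history if g == group and q == qualified]
    return sum(outcomes) / len(outcomes)

model = {(g, q): approval_rate(g, q) >= 0.5
         for g in ("a", "b") for q in (True, False)}

print(model[("a", True)])  # True  – qualified group-a applicants approved
print(model[("b", True)])  # False – equally qualified group-b applicants denied
```

Nothing in the code mentions bias explicitly; the model faithfully learns the historical record, and the historical record is the problem. Real systems use far more sophisticated learners, but the failure mode is the same.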

Working in the messy middle means we don’t ignore the challenges with technologies like facial recognition and predictive algorithms trained on biased data. We recognize those challenges and work to overcome them.


I’m an innovator and far from a Luddite. In many cases, we should deploy AI, especially when it aids quality of life. However, I also believe that before doing so, we have the opportunity – in fact, the duty – to rethink structures and address the moments of fragility that these systems may exacerbate. This may require disrupting those structures and dealing with the social consequences of leveraging AI as a force for the equitable distribution of the resources required to thrive in a 21st-century world.

Your role in the AI revolution

One final thought for those in minority populations, especially those in my community, as we celebrate Black History Month: It’s time to get in the game! AI is here, it’s not going anywhere, and it will have an outsized effect on your life if you don’t participate in its design, creation and sustenance. Most engineers, data scientists, ethical AI practitioners and the like are working to do what’s legal and profitable. Most have no desire to harm. However, their points of view are limited, and your participation broadens that view; it makes products better, services more robust, and people more accountable. We need your voice in this space!

To all the leaders and practitioners in this space, it's our duty to constantly examine AI's impact on society. As we continue to push the boundaries of what's possible, we must remain vigilant in identifying and addressing potential biases or discriminatory outcomes. This requires a deep understanding of the technology and a commitment to creating a more just and equitable society. It's not enough to have one-off conversations about these issues; we must make it a continuous and integral part of our discourse. By staying attuned to these concerns, we can ensure that we're not only pushing the boundaries of technology but also fostering a more inclusive and equitable future for all.



About Author

Reggie Townsend

Vice President, SAS Data Ethics Practice (DEP)

Reggie Townsend is the VP of the SAS Data Ethics Practice (DEP). As the guiding hand for the company’s responsible innovation efforts, the DEP empowers employees and customers to deploy data-driven systems that promote human well-being, agency and equity to meet new and existing regulations and policies. Townsend serves on national committees and boards promoting trustworthy and responsible AI, combining his passion and knowledge with SAS’ more than four decades of AI and analytics expertise.
