I had the privilege of joining the Dean’s Speaker Series at UNC Kenan-Flagler Business School, hosted in partnership with the Kenan Institute of Private Enterprise. My sincere thanks to Dean Mary Margaret Frank, the Kenan Institute team and all who made this event possible. It was an honor to share space with so many thoughtful leaders, students and community members committed to shaping the future of responsible innovation.
We’re living through an extraordinary moment in history, one defined by rapid technological change and unprecedented concentration of power.
Just seven companies – Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia and Tesla – now account for nearly 40% of the total S&P 500 market capitalization. Their combined value rivals the GDPs of entire continents. This isn’t just a financial statistic; it’s a reflection of how deeply technology, and especially AI, is shaping economies, policies and societies.
Amid this transformation, headlines often suggest that AI is driving a wave of corporate layoffs. The truth is more nuanced. While automation plays a role in workforce restructuring, most recent layoffs have been driven by economic pressures, market corrections and strategic pivots – not wholesale replacement by AI. That distinction matters because it underscores a critical point: the future of work isn’t predetermined by technology. It’s shaped by the choices we make today.
At the same time, we’re living in a cultural moment where technology is both exhilarating and unsettling. AI dominates the headlines, not just for breakthroughs in health care and productivity but also for its darker uses. Deepfakes blur the line between truth and fabrication. Disinformation campaigns weaponize algorithms to sow mistrust and polarize communities. These realities remind us that innovation doesn’t exist in a vacuum; it reflects human choices, values and intentions. Against this backdrop, conversations about ethics and responsibility aren’t optional. They’re urgent.
Why does this moment matter?
This is not a time for passive observation. It’s a time for ownership. The decisions we make about how we design, deploy and govern AI will ripple across generations. We can’t afford to frame responsible innovation as a binary choice between progress and protection, innovation and regulation. Responsibility means holding both truths at once: embracing the freedom to innovate while honoring the obligation to safeguard human dignity and societal well-being.
Today, the AI conversation is global, prevalent and loud. The loudest voices tend to describe AI as either an existential threat or a utopian promise. I don’t subscribe to either extreme. Instead, I believe those with the greatest influence over AI’s trajectory will shape it according to their own beliefs, and if the rest of us disagree, now is the time to speak up. Our task is to navigate the real, complex present, with all its hopes, tragedies and plurality, as it intersects with AI that is already reshaping economies, policies and daily life. That is why our choices matter more than ever.
I'll share a few takeaways from our discussion.
1. AI as a mirror and a call to responsibility
AI, like the first mirrors in human history, reflects who we are: our wisdom and our biases, our hopes and our blind spots. The question is not whether we will use AI, but how we will ensure it helps us build a better world. Responsible innovation starts with responsible innovators: people who act with courage, empathy and a commitment to human dignity.
2. Shared stewardship beyond regulation
Governments, industries and individuals all have a role to play. Regulation is necessary, but not sufficient. True oversight requires ethical frameworks, diverse teams and a willingness to ask hard questions about data, impact and accountability. At SAS, we design with AI ethics, governance and social impact in mind to help organizations lead with integrity.
3. Trust is built on shared risk and reward
Trust in AI is essential to realize its long-term benefits. To achieve that, we must ensure that both the risks and rewards of innovation are distributed equitably. In our research, 46% of companies tell us their trust in AI is not matched by practices that make their AI trustworthy. When diverse voices have a seat at the table and when communities see themselves reflected in the technology, trust can flourish.
4. Everyone is an innovator
Innovation isn’t just for technologists. The most powerful advances happen when experts from all domains – health care, finance, education, the arts – bring their knowledge and values to the conversation. I encourage everyone to approach AI with curiosity, critical thinking and a sense of shared responsibility.
5. The future is a choice
When future generations look into the “mirror” of AI, what will they see? My hope is that they’ll see technology that amplifies our wisdom, distributes opportunity and reflects our highest values. That outcome isn’t inevitable; it’s a choice we make together, starting now.
Thank you again to the Kenan Institute, Dean Frank and the entire UNC community for your leadership and partnership. Let’s keep the conversation going, and let’s ensure that, together, we build a future where responsible innovation empowers everyone to thrive.