As we move into 2025, AI continues to transform industries in unprecedented ways, driving efficiency, innovation, and productivity. But with this rapid advancement come critical ethical questions. How can we ensure that AI systems protect the rights and well-being of individuals? Manufacturing and agriculture are two essential industries where answering
Ever since generative AI burst onto the scene, it has sparked a whirlwind of ethical concerns. Unlike traditional AI, which typically analyzes and makes predictions based on existing data, GenAI creates entirely new content – videos, text, audio, code and more. This creative power introduces a new level of risk,
We know that building trust in technology is a big deal. It’s no longer enough for AI to just work – we need to understand how it works, what it is doing and whether it’s performing as expected. That’s where model cards come in. If you remember from our previous
Model cards have been around for a few years now and while their purpose is clear – to increase machine learning transparency and to create a way to communicate usage, ethics-informed evaluation, and limitations – they're still evolving. Many companies have tried their hand at creating their own version of
What sets the SAS Model Card apart from previous model cards is the use of descriptive visuals to make model cards accessible to all personas involved in the analytics process, including data scientists, data engineers, MLOps engineers, managers, executives, risk managers, business analysts, end users and any other stakeholder with access to the SAS Viya environment.
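For readers who want a more concrete picture of what a model card captures, here is a minimal, hypothetical sketch expressed as a plain Python data structure. The field names and example values are illustrative assumptions based on the general model card concept described above (intended usage, ethics-informed evaluation, limitations); they do not represent the actual SAS Model Card schema or its visuals.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative only: a generic, minimal model card.
# Field names and values are hypothetical and do not reflect
# the SAS Model Card format.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str             # who should use the model, and for what
    out_of_scope_use: str         # uses the model was not designed for
    evaluation_summary: dict      # metrics, ideally broken out by subgroup
    ethical_considerations: list  # known risks and bias checks performed
    limitations: list             # conditions under which performance degrades

card = ModelCard(
    model_name="churn_classifier_v2",
    intended_use="Flag accounts for proactive outreach by a human agent.",
    out_of_scope_use="Fully automated account cancellation decisions.",
    evaluation_summary={"auc_overall": 0.87, "auc_by_region": {"EMEA": 0.85, "AMER": 0.88}},
    ethical_considerations=["Subgroup performance reviewed before release."],
    limitations=["Trained on 2022-2024 data; retraining needed after pricing changes."],
)

# Export the card as JSON so it can travel with the model artifact
# and be rendered for non-technical stakeholders.
print(json.dumps(asdict(card), indent=2))
```

A structure like this is the machine-readable half of the idea; the visual, persona-friendly presentation described above is what turns it into something executives and business analysts can read as easily as data scientists.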
Who has time to be a nutritionist between work deadlines and swim practice? Not this working mom! But my tiny human needs her fuel, you know? This is why I’m thankful for nutrition labels. A quick scan at the grocery store tells me if that cereal is all sugar bombs
AI governance is an all-encompassing strategy that establishes oversight, ensures compliance and develops consistent operations and infrastructure within an organization. It also fosters a human-centric culture. This strategy includes specific governance domains, such as data governance and model governance, that are necessary for a unified AI approach.
So now you're ready to make decisions with your models. You’ve asked many questions along the way and should now understand everything that’s at play. But how can you ensure these decisions are trustworthy and ethical? Transparency is crucial. Sharing the reasoning behind our choices in relationships, whether at home
Generative AI (GenAI) is booming. It’s not just a trend; it’s produced a seismic shift in how we approach innovation and technology. SAS Innovate 2024 has moved on from Las Vegas and is now on tour across the world. If you want a recap of what happened in Vegas or
Deploying AI insights isn't just about pushing buttons and hoping for the best. The deployment phase is a pivotal moment where technology and ethics meet. When transitioning AI models from development to real-world use, prioritizing trustworthiness remains important. It’s not just about algorithms; it’s about how AI impacts people and
They say trust is a delicate thing: it takes a long time to build, is easy to lose and is hard to get back. Trust is built on consistent and ethical actions. Therefore, we must be intentional when creating AI models. It's crucial to ensure that trustworthiness is embedded
Trustworthy AI depends on a solid foundation of data. If you bake a cake with missing, expired or otherwise low-quality ingredients, you end up with a subpar dessert. The same holds true when developing AI systems that handle large amounts of data. Data is at the heart of every AI
As organizations infuse trustworthy practices into the fabric of AI systems, it is important to remember that trustworthiness should never be an afterthought. Pursuing trustworthy AI is not a distant destination but an ongoing journey that raises questions at every turn. For that, we have meticulously built an ethical and reliable AI
Black History Month seems like an opportune time to comment on the recent pullback of DEI initiatives, particularly in tech, as a reminder of a historical story. It’s a story of the perpetual dance between social progress and regression where America’s historically marginalized communities are concerned. However, the significance of
The National Institute of Standards and Technology (NIST) has released a set of standards and best practices within its AI Risk Management Framework for building responsible AI systems. NIST sits under the U.S. Department of Commerce, and its mission is to promote innovation and industrial competitiveness. NIST offers a portfolio
I saw a fascinating Reddit thread titled: "What would you do if your son told you he’s dating an AI?" Here's the post verbatim: "My son (20M) just told my wife and I that he’s been in a relationship with a replika for the past few months. He claims that it’s
AI tools should, ideally, prioritize human well-being, agency and equity, steering clear of harmful consequences. Across various industries, AI is instrumental in solving many challenging problems, such as enhancing tumor assessments in cancer treatment or utilizing natural language processing in banking for customer-centric transformation. The application of AI is also
AI became the unofficial word of 2023 and the craze is likely to continue into 2024 as new creative applications and uses of AI emerge across industries and sectors. But before organizations invest too many resources into foundational AI models, leadership should ensure that the organization has a firm grasp
In 2024, we will witness the proliferation of synthetic data across industries. In 2023, companies experimented with foundational models, and this trend will continue. Organizations see synthetic data as an emerging force that can reshape industries and change lives. However, the ethical implications can't be overlooked. Let’s explore some industries I think
I recently had two incredible opportunities: to visit the White House for a landmark executive order signing and to make remarks at a US Senate AI Insight Forum. The AI Insight Forum was part of a bipartisan Congressional effort to develop guardrails that ensure artificial intelligence is both transformative and
The relationship between trust and accountability is taking center stage in the global conversations around AI. Accountability and trust are two sides of the same coin. In a relationship – whether romantic, platonic or business – we trust each other to be honest and considerate. Trust is fueled by actions that showcase
In this era of technology dominated by AI and rapid advancements, trust has emerged as a critical pillar of our interconnected world. As Reggie Townsend, Vice President of the Data Ethics Practice at SAS, explains, we must understand that trust is essential for meaningful relationships and the functioning of civil
Most of us have experienced the annoyance of finding an important email in the spam folder of our inbox. If you check the spam folder regularly, you might get annoyed by the incorrect filtering, but at least you’ve probably avoided significant harm. But if you didn’t know to check spam,
Embracing AI is wonderful. From a practical business perspective, though, there are limits. This issue is broader than AI. However, I’ll constrain the conversation to that for now, given the attention AI is getting these days. Yes, some processes are undoubtedly good candidates for automation, but avoiding “technocentrism” is critical to
As a member of the SAS Data Ethics Practice, I was excited to collaborate with teams at the SAS Hackathon to learn more about their ideas for trustworthy AI. Artificial Intelligence has the potential to make a difference in the real world, and partnering with the hackathon teams was a
AI – just like humans – can carry biases. Unchecked bias can perpetuate power imbalances and marginalize vulnerable communities. Recognizing the potential for bias is one of the first steps toward responsible innovation. Doing so allows users to include diverse needs and perspectives in building inclusive and robust products. Through
As organizations embrace AI, they often handle large volumes of data that power AI systems. Handling data appropriately includes implementing adequate privacy policies and security measures to protect it. Doing so prevents accidental exposure and ensures ethical data use. AI technology often uses sensitive data for creating, training and utilizing models.
As AI rapidly advances over the next several years, I’m fortunate to have an active role in helping to guide a responsible path forward when it comes to technology’s impact on our daily lives. Currently, this role includes serving as Vice President for the SAS Data Ethics Practice, as an
Who is responsible for ensuring that new AI technologies are fair and ethical? Does that responsibility land on AI developers? On innovators? On CEOs? Or is the responsibility more widespread? At SAS, we believe that it is everyone’s duty to innovate responsibly with AI. We believe that adhering to trustworthy
I see the term resilience in a lot of business literature these days. Intuitively, it makes sense. After a pandemic, global supply chain disruptions and resulting economic fragility, executives understandably consider adaptability, durability and how best to operate with a strength of character – all attributes that define resilience. Many