Trust has always shaped how we adopt new technology, and AI is no exception. While trust remains a real and unresolved challenge, today’s adoption gap also reflects a lack of AI literacy.
AI capabilities are advancing faster than most people can reasonably keep up. As a result, individuals are increasingly interacting with systems they don’t fully understand, making AI literacy an essential skill for everyday use.
To explore what meaningful AI literacy looks like in practice, we spoke with two experts who work directly with AI every day.
We asked Kristi Boyd, Senior Trustworthy AI Specialist in SAS' Data Ethics Practice, and Kimberly Nevala, Strategic Advisor and host of Pondering AI, to share their perspectives on how people understand, interpret and work alongside AI systems.
What does it mean to be AI literate?
Kristi: When it comes to AI literacy, we need a baseline level of understanding. We should know enough about what AI is and how it works to operate it well, use it responsibly and recognize when something raises a concern.
For example, I am not an electrician. I can change a light bulb, and I understand enough to connect the right things. I’m never going to work as an electrician – I just know enough about safety to use electricity in a way that benefits me.
Kimberly: To be AI literate is to understand how an AI system works – not merely at the surface level, but enough to make a reasoned judgment about if, when and how to apply or use it in a given context. AI literacy isn’t a single check-the-box exercise like ‘explain how inputs get turned into outputs.’
There are different levels and types of AI literacy, just as there are different levels of reading comprehension, before even accounting for different languages.
AI literacy exists on a spectrum. A developer may be AI-fluent and understand how a system works end to end, while a business leader needs enough literacy to avoid causing harm. It’s the same way you trust your GPS to route you efficiently but still rely on your own judgment when it tells you to “turn right” into a lake.
The Data and AI Impact Report reveals how gaps in understanding can influence behavior. The findings highlight where people think they know how AI works – and where that intuition can be misleading.
One of the clearest examples: Despite its novelty, generative AI is trusted more than traditional machine learning – by a 200% margin – even though traditional machine learning is the more mathematically explainable of the two. That gap isn’t just a trust issue – it’s a literacy issue.
With that in mind, we asked Kristi and Kimberly to help interpret these findings through the lens of how people understand AI, not just how they trust it.
What is the difference between trust and trustworthiness in AI?
Kristi: There are two parties: the AI deployer and the AI system. Trust and trustworthiness describe the two sides of that relationship. Trust is the degree to which someone places confidence or positive assurance in AI. Trustworthiness is whether that AI system is developed and run in a way that facilitates trust.
Kimberly: Trust is the belief that something or someone will behave or perform as expected. Trustworthiness is the extent to which they will or can behave accordingly.
When my clever nephew swears he brushed his teeth, I have a lower – better calibrated – level of trust than his uncle. I trust his toothbrush will encounter toothpaste, but not necessarily his teeth. This matters because it changes our respective behavior and the level of oversight we bring to the activity, determining when his teeth get brushed and when they don’t.
Why do people over‑trust ‘humanlike’ generative AI?
Kristi: We trust things that are like us or that we can understand. It’s similar to flying: statistically, cars are more dangerous than airplanes, but I trust my car more because I can’t understand how a plane suspends itself in midair while going hundreds of miles an hour. The same idea applies to AI.
GenAI systems interact with us using humanlike conversation, and we subconsciously extend to them the same principle we give another person – ‘innocent until proven guilty.’
Kimberly: Words are seductive. Confidence is seductive. GenAI systems (of the language varietal) are confident confabulators, and confident speakers or writers have always been persuasive. We often associate language fluency with intelligence. So systems that generate language can be profoundly disorienting.
Humanlike responses may feel intuitive, but literacy requires recognizing when those cues are misleading.
How do we counter this?
Kristi: We should question how much we need AI systems to interact with us like a human. How necessary are the jokes? Transparency and explainability matter because we don’t want to overcomplicate the language or the understanding.
Kimberly: Much of the disorientation is by design, and it’s our design choices – enforced with meaningful regulation and training – that can help counter this effect. To an extent.
Both experts point to the ability to question, interpret and contextualize AI behavior. These skills help users rely less on what feels right and more on what they genuinely understand about the system.
What’s the biggest misconception about explainability? And how do you correct it?
Kristi: AI is a broad term that encompasses many technologies, and one of the biggest misconceptions is that explainability solves every problem. It does not. It explains how a decision was made, which is important, but it won’t resolve issues of bias or fairness.
As AI specialists, we’re in a privileged position where our job is to talk and think about AI. For a lot of the population, AI is simply a tool contributing to their lives.
Kimberly: The biggest misconception is that if an output is explainable, the model is correct or fit for purpose. Explainability doesn’t determine whether the factors at play are appropriate for the decision – practically, morally or otherwise. Those determinations require human discernment and validation.
The corrective action is to use explainability techniques as an input to decision‑making and design, not as the justification for it.
Continue the conversation on Pondering AI
Want to go deeper into how people interpret and interact with AI systems? The Pondering AI podcast, hosted by Kimberly Nevala, explores the ideas shaping how we understand, govern and work with AI today.
Tell us one behavior you want every employee to adopt when using GenAI at work.
Kristi: Do a second draft. If you’re using Copilot, great – but read it. Do a second draft before you send it to anyone else.
Kimberly: At work or at home, we should all practice healthy skepticism. Both when deciding if GenAI is the right tool for the job and when determining how much we can rely on (or dare I say trust?!) the outputs.
What’s one myth about AI literacy you’d retire tomorrow?
Kristi: A myth I’d retire is that AI literacy only applies to certain people. That’s not true. AI literacy applies to everyone. You need to know enough to be safe and to benefit.
Kimberly: That AI literacy is all you need. AI literacy is one step toward responsible AI use, but it’s not the only one – just as phonics is a step toward reading but doesn’t guarantee comprehension.
If there’s one thread running through every question, it’s that AI literacy enables better, safer and more intentional AI use. When we understand how a system works, recognize its patterns and apply our own judgment, we shift from simply trusting AI to confidently navigating it.
AI literacy is what turns uncertainty into informed judgment – and the next step is to build it intentionally.
Ways you can develop your skills
Start with research
Explore the Data and AI Impact Report to understand what organizations are experiencing with trust, governance and human oversight. The findings offer a clear baseline for leaders who want to move from experimentation to responsible, scalable AI use.
Build practical literacy skills
Strengthening AI literacy doesn’t mean becoming a data scientist. It means understanding how AI systems work, where they can fail and what questions to ask.
This curated set of AI literacy resources outlines learning paths focused on technical foundations, governance awareness and ethical judgment.
Explore AI literacy resources →
Anchor literacy in responsibility
AI literacy is most effective when paired with clear principles. SAS’ Responsible Innovation framework outlines how human-centricity, transparency and accountability guide AI design and use, helping organizations apply judgment and not just automation across the AI life cycle.