SAS Voices
News and views from the people who make SAS a great place to work
As we reach the 25th playing of the SAS Championship, we took a closer look at the last five years of play and performance data to explore how shot-by-shot analysis can reveal where each hole offers opportunity or trouble. From bogeys and bunkers to eagles and extraordinary
Leadership doesn’t start with a title; it starts with action. In statistical programming, that might mean stepping up to explain a complex data set in a cross-functional meeting, suggesting a new way to visualize results or taking ownership of a project that goes beyond coding. These are the moments where
Food assistance programs like SNAP are lifelines for millions of households. Yet ensuring their accuracy is an ongoing challenge for the state agencies managing them. Even small errors in eligibility decisions can quickly add up, costing states billions, straining resources and undermining trust in the program. The question isn’t whether these
AI systems have problems. They don’t always work as intended: they hallucinate, lie or simply forget the all-important closing bracket in a JSON payload. AI systems need accountability, safeguards and oversight. But what should that look like? Human-in-the-loop (HITL) and its companion, human-on-the-loop, are at a nascent stage. These strategies, designed
Generative AI has stormed into the enterprise toolkit. According to the IDC Data and AI Impact Report: The Trust Imperative, commissioned by SAS, eight in ten organizations are already using it. It’s become a social phenomenon, too. Many have used GenAI tools like Copilot and ChatGPT, talked about their prompts
I've seen firsthand how companies often treat AI governance as a necessary evil – something bolted on after innovation, like installing smoke detectors in a finished building. As both a data scientist building models and an advocate pushing for responsible AI practices, I experienced this dynamic daily. The companies winning