Like many B2B and B2C organizations, our corporate website, www.sas.com, is a critical channel for how people learn about SAS and interact with us digitally. Millions of visitors from around the world come each month to learn who we are, what we do, the products and solutions we provide, and the technology and innovation we deliver.
Good web experiences matter for your users and your brand
The experiences you create on the web affect your brand, the customer experience, lead generation, acquisition, retention and more. One of the strategic ways we ensure we're building the best digital web experiences possible, experiences that serve both our site visitors and our business, is through experimentation: A/B testing, multivariate testing, content targeting and personalization. The platform we use is SAS Customer Intelligence 360, and beyond all the great features and functions it provides, the most important thing it gives us is the ability to take the guesswork out of our work. It lets us rely on meaningful data and insights, eliminating biases and letting the data determine the best path forward in connecting visitors to the things that matter to them on sas.com.
Good experimentation matters for users and your brand
We experiment so we can make things better. Simple, I know, but true. Who wants to spend time, energy and money creating bad experiences? Or, maybe worse, to be unaware that we're creating them? That's bad for end users and bad for the business.
So whether you're just getting started with a testing and experimentation program or well on your way, here are three key factors we apply at SAS when using SAS CI360 to create great digital and web experiments:
1. Know what you’re testing for
This may seem obvious, but it can be easy to start running experiments simply for the sake of running experiments, without first clearly defining what you're testing for and why. At SAS, we focus on three primary outputs for the experiments we run:
- We test to win. These are tests we run with the specific goal of improving something – conversion rate, click-through rate, engagement rate, etc. The output we care about is seeing a percentage lift in the KPI we’re measuring.
- We test to not lose. Sometimes, we know we want to change something, but before we do, we want to make sure it won’t have broad negative impact on user experience or KPIs. For example, maybe we simply want to update the background color on a registration form, but that form is used globally on hundreds of web pages. Well, before we make that change, we want to test, test and re-test first to confirm that we won’t inadvertently impact registration conversions with that change. We want to take the guesswork out of our approach.
- We test to simply learn. Sometimes, we don't really have a goal or output we're seeking other than to observe and learn. The more we can observe and learn from actual user behaviors through experimentation, the better we understand what users actually do and what works better than the alternatives, and the better we can apply those learnings to the work at hand and to future projects.
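When we test to win, the math behind "a percentage lift in the KPI we're measuring" is worth making concrete. The sketch below is purely illustrative (it is not how SAS CI360 reports results, and the function name and sample numbers are made up): it computes the relative lift of a variant over a control and a two-proportion z-test p-value, so you can judge whether an observed lift is likely real or just noise.

```python
from math import sqrt, erf

def lift_and_significance(control_conv, control_n, variant_conv, variant_n):
    """Compare conversion rates of a control and a variant.

    Returns the variant's relative lift over the control and a
    two-sided p-value from a two-proportion z-test.
    """
    p_c = control_conv / control_n
    p_v = variant_conv / variant_n
    lift = (p_v - p_c) / p_c  # relative lift, e.g. 0.10 means +10%

    # Pooled standard error under the null hypothesis of no difference
    p_pool = (control_conv + variant_conv) / (control_n + variant_n)
    se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / variant_n))
    z = (p_v - p_c) / se

    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return lift, p_value

# Hypothetical example: 400/10,000 control conversions vs 460/10,000 variant
lift, p = lift_and_significance(400, 10_000, 460, 10_000)
print(f"lift: {lift:+.1%}, p-value: {p:.3f}")
```

The point of pairing lift with a p-value is the same point made above: it takes the guesswork out of declaring a winner, rather than acting on a lift that may not be statistically meaningful.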
2. Small things matter
Great things are done by a series of small things brought together; or so said Vincent van Gogh. He was right!
Sometimes, great and impactful testing is about big things: a completely new web design, or creative for a brand-new global campaign that will run across a multitude of channels. Often, though, the most useful and impactful experimentation is around the smaller things. Button colors, button copy, calls to action, headlines, navigational titles, or how we phrase pricing, discounts and limited-time offers. Don't overlook making the most of the small things - they'll add up in big ways.
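A practical prerequisite for any small-things test, whatever platform runs it, is that each visitor is consistently assigned to one variant, so a returning visitor doesn't see a blue button one day and a green one the next. As an illustration only (this is a generic technique, not the SAS CI360 assignment mechanism; the IDs and variant names are invented), deterministic bucketing can be done by hashing a stable visitor ID together with the experiment name:

```python
import hashlib

def assign_variant(visitor_id, experiment, variants=("control", "green_button")):
    """Deterministically bucket a visitor into one variant.

    Hashing a stable visitor ID together with the experiment name means
    the same visitor always gets the same variant for that experiment,
    while different experiments split traffic independently.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same visitor always lands in the same bucket for a given experiment
print(assign_variant("visitor-123", "cta-button-color"))
```

Including the experiment name in the hash is the key design choice: it keeps one test's traffic split uncorrelated with another's, so running many small tests at once doesn't bias any of them.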
3. Be curious
In building a successful experimentation program, or even just a single successful experiment, it's important to be able to step back from something and simply ask questions. "I wonder what would happen if…?" "I'd be curious to see what might happen if we changed X to Y." "It would be interesting to see if users preferred…" Really good testing is always driven by removing the assumptions baked into something and leveraging data and real user insights to point the way.