
A/B Testing Isn’t Dead—It’s Limited—and Here’s Why

April 18, 2018


By Kristen Patel

Optimizely, a leading A/B testing platform, officially killed its free Starter plan just a few weeks ago. The Starter plan, the basic A/B testing product that propelled the company to market leadership, was designed as an easy way for customers to begin their experimentation journey. While the company still offers Optimizely X, a solution geared toward enterprise customers, eliminating the Starter plan sent the marketing world into a tizzy.

In fact, some companies even went so far as to proclaim that A/B website testing was dead!

 

Is It, Actually?

The short answer: No. The long answer: Still no. While I find myself relying on A/B tests frequently as a conversion rate optimization (CRO) strategist here at SmartBug, I’m thankful that A/B testing—and the resources surrounding these sorts of experiments—are so readily available. Since the initial boom of A/B testing’s popularity in the early 2000s, marketers have learned a great deal. We know when A/B tests are (and aren’t) beneficial, which factors lead to successful A/B experiments, and which once-common practices can skew results completely.

So, as you make CRO a strategy for your organization’s website, what do you need to know?

 

Is an A/B Test the Right Test to Run?

Not long ago, Optimizely—once again cementing its role as a market leader in the world of CRO—studied and reported on the factors that the world’s best testing companies shared. And one of them was testing multiple variants simultaneously.

Now, you might remember that in a true A/B test, only one variable is tested at a time. While the definition of an A/B test has loosened over the years to commonly include two to four variations (sometimes called an A/B/n test), it’s still a small-scale test.


Think of how many different components are on a standard landing page: the header, the copy, the imagery, the form fields, the presence (or lack) of social proof, the navigation (or not) ... I could go on, but I’ll spare you. My point is simple: In running an A/B test on your landing page, you would most likely have to prioritize variables in order to run multiple A/B tests sequentially, and doing so would take a long time. Additionally, you wouldn’t get a feel for how the components and variables on the page interact. Just because Header 1 performed better in your first test and Subhead 2 performed better in your second test doesn’t mean that Header 1 and Subhead 2 together on a page will set your campaign up for success.

Now, I’m not trying to invalidate the true A/B test—if you have a specific variable you’d like to test, enough page visitors to reach statistical significance, and enough time to wait for a truly conclusive result. But this isn’t always the case.
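To put a rough number on “enough page visitors,” here’s a minimal pre-test sample size check in Python using statsmodels. Every input here (a 3 percent baseline conversion rate, a hoped-for lift to 4 percent, and the conventional 5 percent significance level with 80 percent power) is an illustrative assumption, not a figure from this article:

```python
# A rough pre-test sample size check for a two-variant conversion test.
# All numbers are assumed for illustration: 3% baseline conversion,
# a hoped-for lift to 4%, alpha = 0.05, power = 0.80.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.03  # current page conversion rate (assumed)
target = 0.04    # rate the challenger would need to hit (assumed)

effect = proportion_effectsize(target, baseline)  # Cohen's h
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Visitors needed per variant: {n_per_variant:,.0f}")
```

With these assumed numbers, the answer works out to roughly 2,600 visitors per variant, which is exactly why a low-traffic page may never produce a conclusive A/B test.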

Regardless of whether you’ll be running a true A/B test or a multivariate test, you will most likely find yourself presented with the same challenges. Make sure that you’re avoiding these common pitfalls and that you’re setting your experiments up for true and conclusive results:

 

1. Don’t Test Too Many Variations

I feel like this is something we’ve all done, right? It can be just so tempting to test three or even four variations at one time. And that may seem like a more comprehensive test, one that will leave us with more insights and better-informed decisions moving forward.

While that is possible, what is certain is that each variation you include increases the traffic you’ll need and the time your test will take before you can get trustworthy, conclusive results. What is probable is that your experiment is not set up correctly to take these multiple variations into account, and thus you will end up with a false positive. This is called the Multiple Comparisons Problem, and while it can be combated by using tools that adjust for the number of variations you’ll be testing, many of the available CRO tools do not make that adjustment. Be cognizant of this limitation as you decide what, and how many, factors you’ll be testing.
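For intuition, the math behind that warning fits in a few lines of Python. Treating each variation as an independent comparison against the control at a 5 percent significance level (a simplification), the chance of at least one fluke “winner” climbs with every variation you add, and a simple Bonferroni correction shows how much stricter each individual comparison would have to be to compensate:

```python
# How the chance of a fluke "winner" grows with the number of variations,
# assuming independent comparisons at a 5% significance level.
alpha = 0.05
for k in (1, 2, 3, 4, 8):
    familywise = 1 - (1 - alpha) ** k  # P(at least one false positive)
    bonferroni = alpha / k             # conservative per-test correction
    print(f"{k} variation(s): {familywise:.0%} chance of a false positive; "
          f"Bonferroni-adjusted alpha = {bonferroni:.4f}")
```

At four variations, you’re already close to a one-in-five chance of crowning a winner that isn’t one.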

 

2. Don’t End Your Experiment Early

While this may seem like a “duh” sort of warning, you’d be surprised how often marketers stop their experiments as soon as statistical significance is hit. While many tools allow users to stop a test as soon as this occurs, doing so can actually produce misleading results.

Just because results look promising at one stop along the way doesn’t mean that those results will be the final results. Think of the last Super Bowl: If you were just glancing at the game hoping to catch a glimpse of the commercials, it looked like the Patriots had it in the bag. And per the betting odds, they did. But by waiting for the game to end (or, if it were a marketing experiment, for the planned sample size to be reached), you gather accurate results—which aren’t necessarily the results you expect to see.
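If you want to see the peeking problem for yourself, here’s a toy simulation (mine, not the article’s; the 3 percent conversion rate, 500-visitor batches, and 20 check-ins are all assumed). Both variants convert at exactly the same rate, so every “significant” result is a false positive:

```python
# A toy simulation of the peeking problem. Both variants convert at the
# same (assumed) 3% rate, so any "significant" result is a false positive.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p, batch, n_batches, trials = 0.03, 500, 20, 2000  # all assumed numbers

def p_value(conv_a, conv_b, n):
    """Two-sided two-proportion z-test p-value for equal-sized groups."""
    pooled = (conv_a + conv_b) / (2 * n)
    se = np.sqrt(2 * pooled * (1 - pooled) / n)
    return 1.0 if se == 0 else 2 * norm.sf(abs(conv_a - conv_b) / n / se)

final_fp = peek_fp = 0
for _ in range(trials):
    a = rng.random((n_batches, batch)) < p  # control conversions per batch
    b = rng.random((n_batches, batch)) < p  # variant conversions per batch
    conv_a = conv_b = 0
    stopped_early = False
    for i in range(n_batches):
        conv_a += a[i].sum()
        conv_b += b[i].sum()
        if not stopped_early and p_value(conv_a, conv_b, (i + 1) * batch) < 0.05:
            stopped_early = True  # a peeker would call the test right here
    peek_fp += stopped_early
    final_fp += p_value(conv_a, conv_b, n_batches * batch) < 0.05

print(f"False positives, one check at the end: {final_fp / trials:.1%}")
print(f"False positives, peeking every batch:  {peek_fp / trials:.1%}")
```

Expect the end-of-test line to hover near the promised 5 percent and the peeking line to land several times higher; that gap is manufactured entirely by the early stopping, not by any real difference between the variants.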

 

3. Don’t Test the Small Things

I mean, you can, but why would you? Personally, I blame the go-to example for A/B testing (which leads to more clicks: a red button or a blue button?) for making this a common practice. But even though we as marketers can test individual variables like button color, chances are that this variable won’t have the greatest impact.

And as you’re most likely already going to be waiting weeks (if not months!) for the results of your A/B or multivariate tests, I’d suggest you go big. In fact, why not go radical and see whether a large-scale update can move your website metrics? Now, while these radical tests can lead to greater changes from a metrics perspective, it’s important to remember that you won’t necessarily be able to pinpoint exactly why something changed. While I admit this can be frustrating from a user behavior standpoint, I find it’s worth it.

 

So Remember ...

A lot of marketers decide to test variables on the fly. Don’t do this. Take the time to ensure that the experiments you’re planning support your website and organizational goals. Focus on tests that provide true positives. Ensure that you can reach the sample size needed to demonstrate success. If you can survive without the smaller details, go radical. And don’t forget to keep these best practices in mind as you continue to run experiments.

 

Photo by Christian Stahl on Unsplash

Learn about web optimization with:

The Keys to Website UX and Usability

Check It Out
Topics: Marketing Strategy, User Experience