
How to make A/B testing work for your business

Elevate your marketing strategy with A/B testing insights that can help improve sales.


As an ecommerce business owner, you’re constantly confronted with the question of how to convert more shoppers on your site. The user experience decisions that drive shopper actions can range from the deceptively simple (such as selecting a font style or CTA button color) to the more complex (such as choosing a product image or page layout). In the digital age, if you’re relying on gut instinct alone, you might be leaving revenue on the table.

When it comes to delivering a compelling user experience, you’re likely asking questions such as: Which subject line gets more customers to open a marketing email? Among shoppers who click through to a particular product, which featured image results in more conversions? Does changing the language or headline in a product description get more shoppers to buy, or fewer? How does the design of a product page affect conversions?

With the help of A/B testing, you can start answering these questions. A/B testing can help you:

  • Understand your shoppers’ needs.
  • Discover ways to improve how shoppers explore your site.
  • Quickly adapt to changes in shopper behaviors.
  • Increase conversion rates.
  • Avoid making mistakes that could inhibit the shopper experience.

Actionable data is vital to your business, and A/B testing is one of the simplest ways to gather insights to make informed decisions that can fuel business growth.

“A/B testing is certainly a time investment,” says Adrian Bell, Amazon’s principal product manager overseeing A/B testing at Buy with Prime. “But even if you are short of time and resources, we recommend making the time for experimentation because it can be very valuable for your business.”

What is A/B testing?

At its most basic level, A/B testing, or split testing, is a randomized controlled experiment that tests two versions of a user experience against each other to reveal which is more successful.

The “A” side of the experiment is the control group, which is usually the status quo—for example, the image you’re currently using to sell a specific toaster on your homepage. The “B” side is the variation, known as the treatment group.
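Under the hood, each shopper has to be assigned randomly, but consistently, to one side or the other, so the same person always sees the same version. As a minimal sketch of one common approach (the `assign_variant` helper and the 50/50 split are illustrative, not any particular tool’s API), many testing tools hash a stable identifier such as a user ID:

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically assign a shopper to 'A' (control) or 'B' (treatment)."""
    # Hash the user ID together with the experiment name so that
    # separate experiments split traffic independently of each other.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100      # a stable number from 0 to 99
    return "A" if bucket < 50 else "B"  # buckets 0-49 see the control

print(assign_variant("shopper-123", "toaster-image-test"))  # same answer every call
```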

As an example: say you have an existing product page for a toaster with a description, headline, and image on your site (the “A” side), and you want to try swapping in an image of the toaster with a red bow on top (the “B” side) to determine whether the change affects conversions. If you keep everything else equal, you can attribute any resulting increase or decrease in conversions to the image change.

If the toaster visual with the bow turns out to drive 5% more conversions, then you can change that product page to be your “A” side. With this insight, you can now identify other “B” side tweaks that you can test to get that toaster popping even more.
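To make the arithmetic behind that 5% figure concrete, here’s a minimal sketch of the comparison (the visitor and order counts are invented for illustration):

```python
# Hypothetical results after running the test with everything else held equal
visitors_a, orders_a = 10_000, 400  # control: current toaster image
visitors_b, orders_b = 10_000, 420  # treatment: toaster with the red bow

rate_a = orders_a / visitors_a      # 4.0% conversion rate
rate_b = orders_b / visitors_b      # 4.2% conversion rate

lift = (rate_b - rate_a) / rate_a   # relative lift of B over A
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  lift: {lift:+.1%}")  # lift: +5.0%
```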

You can use A/B testing on a variety of elements on your site, including new feature launches, ad copy, site design and CTAs, subtle variations in color and fonts, and the way shoppers move from product pages to checkout.

“Every time you make a change to your site, it makes sense to run an A/B test,” Bell explains. “In an ideal world, everything should be tested. Most importantly, you should pay attention to where purchase decisions are being made—where a shopper adds an item to their cart. Your results can come back negative or positive, but either way, you can avoid making a mistake—and then calculate how much you saved by not making a bad decision.”

Why A/B testing works

Particularly for online businesses, the power of A/B testing is well known. One classic example cited in the Harvard Business Review involves an engineer at Microsoft’s Bing who had a simple, low-cost idea for changing the way ad headlines were displayed in search results. The employee’s idea had been lost in the shuffle for months, until a coder set up a basic A/B test to see how it performed.

“Within hours the new headline variation was producing abnormally high revenue,” wrote data scientist Ron Kohavi, then in charge of the experiment and one of the leading experts in the field. “An analysis showed that the change had increased revenue by an astonishing 12%—which on an annual basis would come to more than $100 million in the United States alone—without hurting key user-experience metrics.”

As Kohavi explained, as long as your sample size is large enough for your results to be statistically significant, your experiment is well-designed, and your metrics for success are well-defined, the impact of A/B testing and its more complex sibling, multivariate testing, can’t be overestimated.

“The returns you reap—in cost savings, new revenue, and improved user experience—can be huge,” Kohavi noted. “If you want to gain a competitive advantage, your firm should build an experimentation capability and master the science of conducting online tests.”
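What does “statistically significant” look like in practice? One common check is a two-proportion z-test. Here’s a minimal, self-contained sketch (the counts are invented, and real testing tools handle many subtleties this ignores):

```python
from math import erf, sqrt

# Hypothetical counts from an experiment
visitors_a, orders_a = 10_000, 400
visitors_b, orders_b = 10_000, 460

p_a = orders_a / visitors_a
p_b = orders_b / visitors_b
p_pool = (orders_a + orders_b) / (visitors_a + visitors_b)  # pooled rate

# Standard error under the null hypothesis that both rates are equal
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

# Two-sided p-value from the normal CDF
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")  # p below 0.05 is commonly read as significant
```

The larger your sample, the smaller the difference you can reliably detect, which is why sites with less traffic generally need to run tests for longer.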

Many large brands conduct hundreds of thousands of online controlled experiments each year. But not every business has the traffic and resources to run tests constantly and gather learnings about user behavior.

“Every online business is different, so a general design or user experience recommendation might work for one merchant but not for you,” Bell says. “Practicing regular A/B testing – even if at a small scale – is the way to find out what works specifically for your brand, provides insights about your shoppers, and can have a positive influence on your business.”

Tips for conducting A/B testing on your site

Below are seven tips from Bell to help you get started with your own A/B testing.

  1. Launching A/B testing can be easy. There are many resources available, including code-free tools that make it easy to get started.
  2. Start by identifying your goal and choosing a relevant metric. Ask yourself ‘what do I care about?’ For many merchants the answer is likely to be conversions and revenue.
  3. Stick with the same tool and make sure to limit the number of changes to your site while testing.
  4. Run no more than two or three tests at the same time, for up to four weeks each (choose a number of days divisible by seven so every day of the week is represented equally in your data).
  5. Account for possible false signals and biases from external factors, such as a natural drop in sales due to a seasonal change or an uptick during peak shopping days like Black Friday.
  6. Plan to compare your testing result data year over year, and keep in mind your high- and low-volume sales days.
  7. After you get in the habit of testing, make sure to run an A/B test each time you implement a change on your site.

Your results depend on several factors, including the volume of traffic to your site, shopper expectations, how noticeable a change is (changing a color is more visible than changing a font, for example), and the number of repeat purchases. Read how data scientists explain repeat customers.

Advanced A/B testing is the realm of data science, but don’t let that scare you. Out of the gate, we recommend that you educate yourself on Bayesian analysis (which turns your test data into the probability that one variant outperforms another) and start small. As you learn and grow, you can dive into more complex A/B testing tactics.

“We use the Bayesian statistical model to calculate the probability of a positive impact of the changes we make,” Bell says. “This means A/B testing results can provide you with a probability that one option is better than the other, with a certain confidence level. Data won’t give you a ‘yes’ or a ‘no’ answer, but you can be confident that you aren’t making one mistake after another.”
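As a rough illustration of the kind of output Bell describes (this is a generic Beta-Binomial sketch with invented counts, not Buy with Prime’s actual model), you can estimate the probability that one variant beats the other by sampling from each variant’s posterior conversion rate:

```python
import random

# Hypothetical results: orders and visitors per variant
orders_a, visitors_a = 400, 10_000  # control
orders_b, visitors_b = 460, 10_000  # treatment

def posterior_sample(orders: int, visitors: int) -> float:
    # Posterior conversion rate under a uniform Beta(1, 1) prior:
    # Beta(1 + successes, 1 + failures)
    return random.betavariate(1 + orders, 1 + (visitors - orders))

# Monte Carlo estimate of P(rate_B > rate_A)
trials = 100_000
wins = sum(
    posterior_sample(orders_b, visitors_b) > posterior_sample(orders_a, visitors_a)
    for _ in range(trials)
)
print(f"P(B beats A) ~ {wins / trials:.0%}")  # a probability, not a yes/no answer
```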

Learn more about Buy with Prime.

Kelby Johnson