The complete glossary for marketers
A/B testing is the method of creating two or more versions of creative or content on a website, email, or app to compare how your audience responds to each version. It’s a quick way to measure the impact of changes to your user experience. A/B testing uses data and statistics to validate new design changes and improve conversion rates. The goal of an A/B test is to learn.
Originally, the term A/B testing referred to the practice of testing two versions of a website page, and such tests were largely carried out by an IT team. Since then, A/B testing has become the domain of many marketers, and the introduction of multivariate testing and A/B/n testing has allowed marketers to test more than two variations. Due to its popularity, the term A/B test is sometimes also used to refer to multivariate testing and A/B/n testing.
A/B testing is often called an “experiment” because, as in a scientific experiment, there needs to be a control group. The control group experiences only the default content and does not receive any of the new experimental content. This is important if you are to properly measure and analyze the success of an A/B test.
In an A/B test, you modify a webpage or app screen to create a second version of the same page. The modification can be as simple as changing a single headline or button, or as extensive as a complete redesign of the page. Then half of your traffic is shown the original version of the page (known as the control) and half is shown the modified version (the variation).
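The 50/50 split described above is typically done deterministically, so a returning visitor always sees the same version. A minimal sketch of one common approach, hashing a visitor ID into a bucket (the visitor ID and experiment name here are hypothetical placeholders):

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "homepage-cta") -> str:
    """Deterministically bucket a visitor into control or variation.

    Hashing the visitor ID together with an experiment name gives a
    stable, roughly 50/50 split: the same visitor always lands in the
    same group across page loads.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # an integer from 0 to 99
    return "control" if bucket < 50 else "variation"

print(assign_variant("visitor-123"))  # same ID always yields the same group
```

In practice a testing platform handles this for you, but the principle is the same: assignment must be random across visitors yet consistent for each individual visitor.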
As website visitors are served either the control or variation experience, their engagement with each experience is measured, collected in an analytics dashboard, and analyzed through a statistical engine. You can then determine whether changing the experience had a positive, negative, or no effect on visitor behavior. By measuring the impact that changes have on a specified metric, you can make sure that only changes shown to produce positive results are rolled out.
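The “statistical engine” mentioned above typically runs a significance test on the two groups’ results. A minimal sketch using a standard two-proportion z-test (the conversion counts below are hypothetical, and real platforms may use different or more sophisticated methods):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Test whether the variation's conversion rate differs
    significantly from the control's, using a pooled z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: control converts 200/4000, variation 250/4000
z, p = two_proportion_z(200, 4000, 250, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference between control and variation is unlikely to be due to chance alone.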
The goal of A/B testing is to find the best-performing content for a specific goal (or goals). Choosing the goal of your A/B test should be part of your test development process.
Most marketers focus on improving one of a few key performance indicators (KPIs) that are part of their conversion rate optimization priorities, such as conversion rate, click-through rate, or average order value.
You can test different images, form fields, button colors — anything you can dream up.
Choosing what to test should be based on which metrics you are trying to improve. Once you have determined the goal for your test and you have gathered other analytical data about your traffic, the ideas present themselves much faster. For example, if you’re trying to increase average time on site, you may want to test replacing an image with a video. If you want to increase add-to-cart rate, you may want to test the size, shape, color, or content of your product page call-to-action button.
A/B testing allows you to make careful changes to your user experiences while collecting data on the results. This allows you to construct hypotheses and to better understand why certain elements of your experiences impact user behavior. Your opinion about the best experience for a given goal can be proven wrong through an A/B test — but this is still a good thing, because you can use that information to learn and improve. More than just answering a one-off question or settling a disagreement, A/B testing can be used consistently to continually improve a given experience, improving a single goal, like conversion rate, over time.
Testing one change at a time helps you pinpoint which changes had an effect on your visitors’ behavior and which ones did not. When you constantly and consistently improve your customer experience, your website visitors are more apt to take the on-site actions you want. A/B testing with Monetate Test & Segment™ has assisted hundreds of companies with their testing programs and testing success.