October 21, 2014
One of the most popular topics surrounding A/B testing is generating new ideas.
It’s one of the most frequent requests we get from our customers. And if you take a look at Google search results, it’s one of the most written about topics on the subject.
Ideas are great—no arguing that—but what are you planning to do once you have a whole bunch of ideas? The best way to create an efficient, impactful testing program is to learn how to prioritize A/B tests.
The reasons for prioritizing A/B tests are many, but two stand out: you'll spend far less time debating which test to run when, and you'll be able to plan ahead for the resources your program needs.
So, how do you prioritize A/B tests?
At Monetate, we’ve been using a testing prioritization template to guide our own testing program. The process is straightforward: write down your ideas on paper, follow a simple scoring system, and start running your tests.
Though getting the details of your test ideas down on paper is important, the key here is rating three elements: difficulty, time, and impact.
A test on a key landing page may be relatively easy for you to deploy, but that page may be “owned” by someone else in your organization. Getting approval could prove difficult.
Alternatively, you may see a high-impact test opportunity on a page that doesn’t get much “love” from your organization. You might be able to get approval on that test pretty quickly, but the opportunity requires a more sophisticated experiment, which means support from your IT department. When evaluating difficulty, approval, execution, and timing are all important factors.
One-off creative work will always require more time.
If you’re constantly designing new creative for your tests, that’s a sure-fire way to slow down your time to market. Consider, instead, whether your creative team can build out templates for you to use or whether you can build out a single creative brief for your creative team. That way, you’ll make one ask of your creative team for a series of tests, which will allow the entire team to coordinate creative requirements, better plan projects, and save time.
When we say “impact,” we mean business impact.
The easiest way for an ecommerce site to gauge that is by measuring your page performance in one of three ways: conversion rate, average order value, or revenue per session. (Of course, not every page’s success will be measured by these KPIs, but they’re a solid place to start.)
Once you have your preferred metric, you’ll want to also consider the amount of traffic your page receives and how much it costs you to direct that traffic to that page.
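The three KPIs above all come from the same raw numbers. As a quick illustration (the figures below are hypothetical, purely to show the arithmetic):

```python
# Hypothetical page-level numbers to illustrate the three KPIs.
sessions = 10_000
orders = 250
revenue = 18_750.0

conversion_rate = orders / sessions        # orders per session
average_order_value = revenue / orders     # revenue per order
revenue_per_session = revenue / sessions   # blends the two above

print(f"Conversion rate:     {conversion_rate:.1%}")
print(f"Average order value: ${average_order_value:.2f}")
print(f"Revenue per session: ${revenue_per_session:.3f}")
```

Note that revenue per session is the product of the other two, which is why it's often the most compact single measure of page performance.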
For each of these elements, you’ll want to assign a score of 1 to 5, with “1” meaning “a little” and “5” meaning “a lot.”
Once you have those numbers, you’ll multiply them together. The higher the score, the higher on your priority list the test should move. The lower the score, the lower the priority of the test.
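The scoring step above is simple enough to sketch in a few lines. This is only an illustration of the multiply-and-rank idea; the element names and the example test ideas are made up, not part of the template itself:

```python
def priority_score(ratings):
    """Multiply the 1-5 ratings for each element into a single score."""
    score = 1
    for value in ratings.values():
        if not 1 <= value <= 5:
            raise ValueError("ratings must be between 1 and 5")
        score *= value
    return score

# Hypothetical test ideas, each rated 1-5 on three elements.
ideas = {
    "Homepage hero test": {"ease": 4, "speed": 3, "impact": 5},
    "Checkout button copy": {"ease": 5, "speed": 5, "impact": 2},
    "Category page layout": {"ease": 2, "speed": 2, "impact": 4},
}

# Highest score = highest priority on the testing roadmap.
ranked = sorted(ideas.items(), key=lambda kv: priority_score(kv[1]), reverse=True)
for name, ratings in ranked:
    print(f"{priority_score(ratings):3d}  {name}")
```

Because the scores multiply rather than add, one weak element drags an idea down fast, which is exactly the behavior you want when deciding what to skip.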
Tim Ash recently wrote on our blog about the value Google Analytics’ Advanced Segments feature can bring to ecommerce companies. It’s equally helpful when digging for the data that will help you prioritize your A/B tests.
You can use Google Analytics to find nearly everything you need to measure your potential impact: how much traffic each of your pages receives, what your exit rates and bounce rates are, where your funnel leaks occur, and plenty of other data.
When you begin targeting your tests based on relevance, you’ll also be able to use Google Analytics to segment your visitors based on their behaviors (again, Tim’s blog post), as well as referring traffic sources.
Learning how to prioritize A/B tests will help you improve the efficiency of your testing program. You’ll spend far less time debating what test to run when and you’ll be able to plan for the resources you need to make your program run.
Sounds too good to be true? It’s not.